Effect of perceptual load on performance within IDE in people with ADHD symptoms (springer.com)
215 points by xlii on July 14, 2023 | 272 comments



Even "lite" editors like VSCode, NP++, Sublime Text, etc. have become distracting for me. Every time I fire one of these things up, my mental state is obliterated by being forced to dismiss some "hey we got a new version for you :D:D:D" bullshit modal. To whoever is responsible for adding these to apps: please stop. No one is enjoying your shenanigans. It is a text editor. Empathize with the user: people with far less time than you are trying to paste some hot mess right off their clipboard to work with. They've probably got a super nasty idea in their head they can barely hold on to.

Visual Studio proper is a 100% circus for me now. I can power through it, but I lose tabs in about 15 seconds after opening them. Many times I feel convinced these tools have been engineered with intent to get shittier over time and slow down neurodivergent users. The ADHD/etc crowd is definitely the #1 thing a software company needs to worry about in terms of moat maintenance...


> Visual Studio proper is a 100% circus for me now. I can power through it, but I lose tabs in about 15 seconds after opening them.

I used to be the same way but then I discovered Visual Studio's vertical tab option. Tabs are displayed vertically on the left side of the editor pane. The filenames are all aligned so it's actually possible to visually scan them. You can group them by project and color code them by file type.

Tabs are unusable otherwise. For me at least.


My Google Fu is pretty good and I cannot find how to do this. I love vertical tabs in Firefox and never considered doing this in VSCode.

How!?

edit: on linux at least the process linked below doesn't work...

https://domysee.com/blogposts/vscode-vertical-tabs


I was talking about Visual Studio proper, not VSCode.


VSCode has something similar to vertical tabs called “Open Editors”


It does work for me on linux, v1.80.1. But it doesn't seem possible to put the sidebars next to each other: https://github.com/microsoft/vscode/issues/177812


Thank you. I just tried this out and it immediately felt like a dagger being removed from my back.

I can instantly see my new pattern is solution explorer on the right, vertical tabs on the left.

I can probably go back to one 1440p monitor again. I was abusing multiple monitors just to keep tabs pinned in physical space...


I use the tab tree extension in Firefox for the same reason. Plain vertical tabs, without the tree, aren't much use to me anymore.


You don't need tabs if navigation between files is done properly. For example, bookmarks are a massively stronger proposition than tabs.


There are browser extensions to do that with browser tabs. I use Sidebery.


Sublime Text is surprisingly bad about that. Or at least the package manager is. I'm okay with occasionally being prompted to update the editor, but it drives me crazy when I launch Sublime Text, go to start working in a text buffer, and then suddenly have the tab swap out from under me to show me changelogs for everything it just updated. All doing that does is piss me off and make me certainly not read the changelog because I'm instantly closing it to get back to what I was doing!


I've always hated IDEs. And I hate most UIs - they are a nightmare for me.

I never connected it to ADHD. But I have to say that vim is a godsend to me when I try to do anything serious. And if you are able to reach me whenever you want by Slack, I'm not being productive.


I'm a designer & product manager whose taste runs extremely strongly to minimalism, simplicity, and clarity in user interfaces. A late-in-life diagnosis of ADHD revealed why and linked it strongly (in my mind) to my ability to enter a hyper-focus mode when visual distractions are reduced.


I think it's time for you to roll up your sleeves and start modifying your editor, then. And if it's not modifiable enough, ditch it. I've been using IntelliJ for years (its main thing is Java, but it is also the best javascript/html editor, by far; the git UI is quirky but good once you get used to it). It's highly customizable, such that you can banish anything you don't want to see. Features like "find anything" and "run anything" are great for less often used files and features, respectively. Their license is reasonable - it's annual, but if you stop paying you just stop at the last version you paid for. Also the free version is probably good enough for 90% of people. It's also the platform on which Android Studio is based - but I don't do Android so I can't say if that's good or not.

Oh and something I do when I really really need to focus is use another editor! Usually vim unless I'm on a mac in which case I'll use Sublime Text. But more and more I've been using IntelliJ's own "View modes" to get the effect I want.


As an IntelliJ user (and lover), it also gives me annoying update prompts basically every time I open it. There's always some plugin or another needing a reboot.

It has a lot of pros and cons compared to VSCode, but upgrade politeness isn't one of the pros. The frequent reindexing will often take you out of the flow too. It's very easy to move faster than IntelliJ can keep up with, especially on a slower computer.

And in terms of perceptual load, its interface is way more cluttered than VSCode, with multiple overlapping panels that each have several modes and tabs they can be in, and an unclear closing hierarchy that will often close some but not all of them (like shift-esc won't work reliably).

Some workarounds:

* Distraction-free mode reduces the noise: https://www.jetbrains.com/idea/guide/tips/distraction-free-m....

* Jetbrains is also working on a whole IDE rewrite to a cleaner UI: https://www.jetbrains.com/fleet/


>multiple overlapping panels that each have several modes and tabs they can be in, and an unclear closing hierarchy that will often close some but not all of them (like shift-esc won't work reliably)

Yes, this is the default. But it's pretty easy to change. Most of my windows are on hotkeys and mostly unpinned so they get out of the way. Some are pinned, so to close you hit the hotkey again. For example, project view is ctrl 1 (or option 1 on mac) and I often close it - the highest source of not-useful visual clutter, imho. Terminal is ctrl 2, which simulates some nice early linux utilities that would slide a terminal on and off the screen. The same setup for the run/debug tabs, database tabs, and so on. 90% of the time I'm just looking at the editor and a terminal. The way God intended.


It kinda depends which stack you work in, too. I think the JS/PHP/Web side of things tend to be a second-class citizen in IntelliJ especially (vs Java), but also in WebStorm. Things like npm scripts and Docker are first-class citizens in VScode but take multiple clicks to discover in IntelliJ, and even then it shares a panel with other functions, and some things end up in the "Run" tab while others end up in a separate npm panel while others launch their own sub-terminal... it's really easy to quickly lose track of them, sadly :(

It's not so much that I want an individual hotkey for each window, but a UI tailored for the 90% of my time (coding, dev server/docker status, npm scripts). The debugger is another big one that I wish had its own UI instead of just being mixed into the bottom pane with all the terminals from last week, etc.

I'm really hopeful that Fleet can drastically simplify all this, while still keeping the powerful indexing, diffing, and refactoring (the three main reasons I stick to Jetbrains instead of VScode)


>[npm, Docker] multiple clicks to discover in IntelliJ

I think this is where discernment matters. I tried both plugins, but ended up never using them. They didn't pull their weight. The terminal will always be the "canonical" way to interact with everything - but for Docker I like Desktop. I've always had good luck with it. And for npm, I'd much rather run it in a terminal. But gradle or maven? Happy to run in IntelliJ. I haven't really investigated why one feels so different than the others. But yeah it's annoying when e.g. Docker uses the Run tab. That seems wrong.


have you tried the "new ui"? it's super clean and completely different, very reminiscent of vscode.


Yeah, I use that daily. It's better but still subject to some of the same issues I mentioned in my post.


This is why I, and I suspect many others, like terminal-based text editors like vim (or emacs, whatever. Our war is against GUI before each other). I find IDEs have too much going on and any little blip on the side shifts my focus. Whereas with vim/tmux/zsh I can highly and easily customize my environment to... __me__. Just about everything an IDE offers I also get[0], but with more ease in having it placed where I want and __when__ I want. I can have a project drawer visible or push it away with NerdTree, or use netrw (native) or vimfiler as an explorer. I have tags, linters, smart autocomplete (native), color bracket matching, git, buffers, panes, marks, and all that. All that with trees and interfaces to view in code, or ways to quickly turn things off if they are distracting (e.g. pull up my tag tree when reading code or referencing a signature but shove it away when not).

But the best feature is that I can make it most readable to me. Not only that, but also to the project. I am a true believer that coding environments __should__ be highly personalized. Standardization seems to be a death sentence, especially for ADHD/neurodivergent people like me. I definitely get the sentiment of intent to be shittier over time too. Systems like VSCode feel impenetrable to me despite numerous attempts and strong insistence to use them from many others. (I'm sure this can be true of vim/emacs to others but my argument is about customizing your environment to you, not the tool you use for that)

[0] realistically there's only two things I want that I haven't found: 1) a (good) debugger, and 2) a note system. There are definitely debuggers for vim, and useful ones, but I've always felt debuggers could be more useful and this isn't just a vim issue. Which, connecting to the second thing: I'd love if I could make notes to specific lines of code in a popup or split and that the note has a mark on that line wherein I can go back and forth. This is immensely helpful when debugging, where I'm usually sitting with a piece of paper and drawing[1] and writing notes, and often in that I notice optimization or other opportunities that I should come back to later but are not prioritized in the "make it work first" mode (or else rabbit hole). (Minor 3rd thing: inline Python execution. Like I want to test a single line or small block. `python -i` can help but just doing this easily would be nice)

[1] Do people not like call graphs? It seems they're rather unpopular and the interfaces that draw them tend to be really bad. Maybe I just haven't found a good one? Mostly work with python fwiw.


> I'd love if I could make notes to specific lines of code in a popup or split and that the note has a mark on that line wherein I can go back and forth.

Emacs does this with a feature called "bookmarks". I don't use vim particularly so can't vouch for anything, but https://github.com/MattesGroeger/vim-bookmarks looks like the same concepts I'm thinking of in a vim-flavored implementation - maybe worth a look.


Wow that looks like what I want! Even found someone (nvim) doing with popups (https://github.com/winter-again/annotate.nvim). Thanks!


VS Code isn't actually that bad with its extension and application updates. When an extension has been updated, it'll try to load the new version in-memory (if the extension supports it), otherwise it'll add a badge to the Extensions panel icon in the sidebar, leaving a subtle reminder that you need to restart in order to get the new extension update. For updates to VS Code proper, it adds a badge to the settings icon. No popups or notifications. You can also disable auto-updating for extensions and the application if you find the badges distracting.

Now, if certain extensions try to get your attention with notifications or automatically opening release notes, that can absolutely be a problem. Extensions vary wildly in quality, since the barrier for publishing one is effectively nothing. This leaves the job of curating good, non-spammy extensions up to the developer.


VS Code:

* Opens a welcome tab anytime you open a project

* Pops up unsolicited toast notifications for new file extensions, "we have extensions which can help you with that file type!"

* Opens "what's new" type tabs anytime it updates

* Starts the "jumping up and down" MacOS dock animation for sometimes trivial problems

I could list more examples, but suffice it to say, this is...not ideal. I love VS Code, so for fresh/unsync'ed installs I'm willing to meticulously trawl through its settings, twiddling various knobs to disable these multifarious annoyances. But hey, it's worth it (to me).
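For anyone doing the same settings spelunking, here's a sketch of the knobs I'd start with. These are real VS Code setting names; they go in settings.json, which accepts comments:

```json
{
  // don't open the Welcome / "Getting Started" tab on launch
  "workbench.startupEditor": "none",
  // only check for application updates when asked
  "update.mode": "manual",
  // skip the "what's new" release-notes tab after an update
  "update.showReleaseNotes": false,
  // suppress the "we have extensions for that file type!" toasts
  "extensions.ignoreRecommendations": true,
  // update extensions on your own schedule
  "extensions.autoUpdate": false
}
```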


I learned to smash the escape button as soon as I see something spawning in the bottom-right corner of the screen.

I haven't missed anything critical yet.

> Opens "what's new" type tabs anytime it updates

this happens once a month maybe and sometimes it's actually something useful. Realistically though, it's just another ⌘W. By the way, look at the `update.showReleaseNotes` setting

> Opens a welcome tab anytime you open a project

I don't have this behavior on my computer, it's just an empty file or whatever was opened last time


My baseless posthoc justification:

  1. ADHD superengineers could be representing more than 0% but rounding to 0.0% of all users.
  2. A lot of ADHD-specific optimizations, e.g. fast transitions without frustrating gooey animations, distraction-free UI without tips and emojis, could be detrimental to telemetry items that:
    a) do represent non-ADHD needs, and also;
    b) engineers' performance is evaluated upon, even for Free Software these days.


All things being equal the UI that doesn’t require tips is the better UI.


The corollary is enabling auto-update to get rid of the box, only to find UI features added or rearranged every time you try to start working.


I'm sorry to shame here, but one of my favourite environments, JupyterLab, has started doing the same thing: update notification popups. And this is Python, where I specified exact versions in my requirements file.


I love intellij/rider's Distraction Free mode, combined with shortcuts and no mouse


Hm, I've dealt with ADHD for a long time but Visual Studio proper never bothered me, even before I got any medication. If anything I greatly appreciate how much it does for me (various autocomplete functions etc) so I can focus my mental load on the problem, and these days I feel out of my element trying to write any code in a plain text editor.


For Sublime Text, you should be able to set this in your settings file:

    "update_check": false

In VSCode, change the Update Mode setting to manual.


> in your settings file

Further into the distracted rabbit hole we go...


Not a text editor but Postman is the worst for that. Every time I open it there’s another damn update, another prompt, another “hey check out this feature” popup. I don’t care.

What I actually want is a gui version of curl. Just let me send my CRUD operation over HTTP. I want it to work and that is all.


Visual Studio is consistently the slowest program to open on my machine, it's not even funny. I once tried to build an open source project which was tooled around VS, and in the time it took to open, I successfully read the VS config in Notepad and did the actions by hand.


Those aren't light editors! I use a bare bones emacs with all the UI turned off. Just a flashing cursor. It helps me focus on the content.


The one good thing about IDEs is their key bindings when they're like vim. Everything else sucks. I use vim with debuggers.

Nothing seems to have changed for the worse over decades.

It doesn't get in the way and seems to work the same everywhere.


> I lose tabs in about 15 seconds after opening them.

But even simple editors open multiple files somehow, in tabs or otherwise.


There is no earthly reason why a text editor should have network access in the first place. Can you just block that?


Built-in git, Github PRs & comments, documentation lookups, database browsers, copilot, opening preview browsers, networked Docker, remote coding, CI/CD pipelines... this is talking about IDEs after all, not just a basic text editor


For Sublime and VS Code, you can turn off automatic updates.


I still keep TextMate around for hitting flow on specific tasks for this reason.


As an engineer for 10+ years with pretty severe ADHD (unmedicated, I cannot usually even read more than a few pages of a book before either losing focus or feeling drained) I have always been aware of how much my performance fluctuates based on the cognitive load of dealing with the code I am currently working with, and how it is displayed. I half-jokingly tend to describe it to people as having a "small brain buffer" where while I can understand (and design and implement) very complex things, it is easy for me to flounder when debugging if I feel unable to visually see or mentally hold the entire problem in my head at once. This is especially true if the code isn't mine.

It is for this reason that I try to write what I feel is very obvious or self-explanatory code, why I try to keep functions/modules as simple as possible and ideally not longer in length than I can see on my screen at once (when possible), and why I almost exclusively join small/new teams who don't yet have an enormous codebase that I'll have to wrap my head around. I am never done tweaking the UIs of the editors I use to maximize my ability to work around these things.


> why I try to keep functions/modules as simple as possible and ideally not longer in length than I can see on my screen at once (when possible)

This is the paradox of "readable" code. Each person has a different definition of it. I also prefer to write as dumb code as possible (well, these days I do, I of course tried to be extra clever in my earlier days). But for me it's the jumping around that makes me lose focus. My limit is about 3-4 jumps. This is not to say I write long meandering functions, but I personally couldn't imagine a codebase I'd be happy working in that is made up of modules that fit on a screen. Maybe it's possible! But the projects I've worked on that have a linter rule of "100 lines per file" or whatever end up being these rabbit holes. It's so hard to come up with code design guidelines everyone can agree on.

As a side note, I despise things like imports and aliases. I'd prefer that when I do jump to a function, I can read it without having to check whether anything is imported or not. I always opt for fully qualified function calls, regardless of how many characters it takes.
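A toy Python illustration of that preference (the function choice here is arbitrary):

```python
import math
from math import hypot

# fully qualified: the call site itself tells you where the name lives
a = math.hypot(3.0, 4.0)

# bare import: you have to scroll up to the import list to know what `hypot` is
b = hypot(3.0, 4.0)
```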


I cannot stand programs that are excessively split up and have too many files. There is nothing worse, especially combined with tiny functions.

Not to mention the performance impact, depending on the language. At this point I do not even care whether or not the impact is negligible; if you are submitting something like that for me to review, I'm probably going to demand you inline (not with the keyword, with copy paste) the majority of your functions and merge the files.

I think they are teaching this tiny function and tiny file method of programming in schools now or something. I get tons of submissions where there isn't a single function longer than 4 or 5 lines. I'm going to blame it on Java and OOP because that's when I started seeing it, probably due to people writing classes like that with trivial getters and setters rather than public fields.


When I was working in Ruby, I think the default function length was 5 lines. I used to be on board with this until I helped write a whole system adhering to this.

Small functions aren't bad per se, especially if they are re-used a lot, but I no longer believe that they are inherently good. Especially awful is having a lot of tiny single-use private functions.

I think that developers often want an easy way to say "we must do this" and have a tool tell them to do so. Tools can get us pretty far, but at some point writing good code is going to come down to taste. This is of course very hard in many large project situations where it's every team for themselves, and often every developer for themselves within a team.


IntelliJ recommends inlining single use functions.


> As a side note, I despise things like imports and aliases. I'd prefer that when I do jump to a function, I can read it without having to check if anything is imported or not.

One idea might be to use an LSP (Language Server Protocol) interface. It could describe the fully qualified symbol for you when you, say, select the abbreviated symbol or press a keyboard shortcut. I've been working on a moderately large C program with Emacs and clangd[1] recently and have been amazed at how 'immersive' it feels, and that's from someone who's used to the comfort of a Lisp REPL!

[1]: https://clangd.llvm.org/


I work in Elixir and there are LSPs available. I'd still rather be able to just read and take as little action as possible to have to figure out where something is coming from. But as a sibling commenter says, they hate long names, so it's impossible to please everyone and comes down to teams making agreements.


For me it's not just the jumps but also overly long names. My ability to quickly scan the code is hindered by names that all jumble together because they aren't visually distinct enough. And when they read like a book, I'm going to have the same problems I have when reading books where I'm somehow reading all these words across the page but at the same time not actually processing any of it.


I have a colleague that loves to call functions things like ‘convert_object_type_from_mm_to_m’ and ‘convert_object_type_from_m_to_mm’ . It’s unscannable. They’re the same to me.

He does not like my ‘meter_to_mm’ and ‘mm_to_meter’ because the abbreviations are inconsistent, no verb is used and he needs to go all the way to the typehint to know the input. We both have a point, but there is not really a clear middle ground without writing a novel in function names.


Ya, that is ridiculous, honestly. While it's still all subjective, you gotta draw the line somewhere! I'm a stickler for consistency myself and would be ok with `meter_to_millimeter` but the `convert_object_type_from_` is pure nonsense. Like, assuming you're talking about a language with objects, what the heck else would you be converting?? EDIT: requiring verbs in function names is also overrated. It's one of those things that's usually a good idea, but if comes down to employing some taste to decide when it might not be. EDIT 2: I'm now upset for you for having to put up with that, lol.


I’d personally go with a single function like “convert_object_type” and pattern match into different clauses such as “convert_object_type(obj, from: m, to: mm)” and so on.
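A rough Python analogue of that single-entry-point idea (names hypothetical), using a lookup table keyed on the unit pair instead of Elixir-style function clauses:

```python
# Hypothetical sketch: one conversion entry point instead of a
# function name per unit pair.
_FACTORS = {
    ("m", "mm"): 1000.0,
    ("mm", "m"): 0.001,
}

def convert(value, *, frm, to):
    # dispatch on the unit pair; unsupported conversions raise KeyError
    return value * _FACTORS[(frm, to)]
```

The call site then reads like the Elixir version: `convert(2, frm="m", to="mm")`.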


I believe a good variable name should be a token. To me this means that it should be visually distinct at a glance without reading it. This runs counter to my job where 20+ character variables are common.

Token-length names are really useful when working on a contained codebase. Exposed endpoints result in longer names and messy spelling errors.


What constitutes an overly long name for you? Are you talking Java-y things like `UserAccountBuilderFactoryFactory` or even just something like `operation` instead of `op`?


For the `operation` vs `op` example, it's highly contextual. If it's used in a narrow scope or if it's repeated many times (like I've seen in functions that copy fields of one structure to another), `op` is preferable. Otherwise I'm fine with `operation`.

The Java naming is one cause of what I really have a problem with. Like, that name on its own isn't the worst, because the key info is right at the start and I'm probably not going to see the type name in function bodies I'm trying to follow. It's when that naming is combined with other overly verbose naming that it really gets bad: `CreateUserFromUserAccountBuilderFactoryFactoryOnBackgroundThread()` By the time I've read "BackgroundThread" I'm glancing backwards to see what is happening on the background thread.

Other places where longer names make reading harder are when the names are very similar. With `textBoxXPosition` and `textBoxYPosition` the X and Y get lost too easily. You could rearrange the letters to the front to help with that, but it can still make computations harder to read: `sqrt(xTextBoxPosition * xTextBoxPosition + yTextBoxPosition * yTextBoxPosition)` vs `sqrt(x * x + y * y)` or `sqrt(tbx * tbx + tby * tby)`


I think we're mostly in agreement here, thanks for sharing!

I actually always write full variable names, even in narrow scopes. This is mostly because I'm used to programming in dynamic languages and it makes renaming much easier. I've also become so accustomed to it that it makes scanning incredibly fast. For example:

    Enum.map(operations, fn operation ->
      execute(operation)
    end)

I can pretty much just read the left-hand side of that if I'm scanning really quickly, ie, ignore everything after `fn`. I'm talking micro-optimizations here, but I have been tripped up before when someone did something like `optn`.

Elixir allows for a shorthand notation for anonymous functions like so:

    Enum.map(operations, &execute(&1))

Some people hate that but again I like it due to the refactoring advantages (it becomes very simple to read once you get used to it).

I hear you on the x/y thing. Your example is actually why I really prefer snake case, though I don't want to start a flame war there, haha. But ideally when you're dealing with points and dimensions you wrap them in a type or object that you can pass around, and the receivers can extract simple `x` and `y` variables from them.
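That last suggestion might look something like this in Python (all names here are hypothetical):

```python
from dataclasses import dataclass
from math import hypot

# Hypothetical sketch of "wrap x/y in a type": the call sites stay
# descriptive while the math itself can use short names.
@dataclass
class TextBoxPosition:
    x: float
    y: float

def distance_from_origin(position: TextBoxPosition) -> float:
    # unpack to short names so the computation reads like the math
    x, y = position.x, position.y
    return hypot(x, y)
```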


Not OP, but I think even just `operation` vs `op` should be self-explanatory for the majority of devs. And documentation can dispel any doubt regarding the interpretation of the name.

Terse naming is just fine as long as the names are visually distinct from those in the same scope/namespace and as long as engineers/developers are religious about proper documentation.

i.e. members/classes/types/fn/etc getting a doxy/doc description and local variables getting a `//` comment if their meaning and use isn't easily derivable from the immediate context.

If that isn't happening however, you probably need to be pushing for more descriptive names or better docs during code review.


It's self-explanatory for sure, but I would prefer the actual domain term be Op as well. IE, I really dislike `op = Operation.new()` or whathaveyou. This breaks stuff like editor highlighting and whatnot. It's also harder to keep consistency through a codebase. Like maybe some people do `o = Operation.new()`. I find very high cohesion here helps my code-scanning a ton.

I'm not saying that would be true for anyone, just furthering my premise that it's different for everyone.


At my firm we’ve standardized on import being used only for a short list of well known first/third party modules (e.g. Ecto.Query and Plug.Conn).

Application modules can be aliased, but not library modules. This is so that we don’t need to keep repeating the name of the application throughout the codebase while preserving clarity as much as possible.


> It is for this reason that I try to write what I feel is very obvious or self-explanatory code [...]

I don't have ADHD, but, for different reasons, my brain seems to work in a similar manner. And, I have developed coping strategies similar to those applied with people on the ADHD spectrum [1][2].

My anxiety around coding has reduced significantly since I learned about TDD and started writing code made of very small, composable chunks. I even use the same "small buffer" metaphor when talking about this!

I know that some people with ADHD use the writing tool I built for myself. If you have a moment, feel free to give it a shot and let me know what you think: https://enso.sonnet.io

(the web version is free, and has complete feature parity, no need to pay me)

- [1] https://sonnet.io/posts/hummingbirds/

- [2] https://sonnet.io/posts/sit/


> My anxiety around coding has reduced significantly since I learned about TDD and started writing code made of very small, composable chunks.

you definitely do not have ADHD, then. I find these absolutely impossible to trace through when I want to find out what a program is actually doing, in reality.

if I have to find the actual logic used in main or whatever event handler, and it's in a function called by a function called by a function etc., then I will forever despise the developer who felt that this was anything approaching an acceptable design, or even an acceptable idea.

codebases like that are impossible for me to absorb.

put the logic where it fucking goes, inline with its use! right there! nowhere else! don't nest it behind abstractions you don't NEED.

functions that are very small and used in 100 places in the codebase should be inlined in the source code so I can read it. there is no function used this much in an application I've ever even considered working on, anyway.

I always prefer to copy and paste a little bit of logic around than to wrap 5 lines in a function and call it from everywhere.

if the function is 100 lines of complex code, put it in a function and call it ONLY if it is used in more than 2-3 places, and NEVER if it is only used once.

today's "best" practices serve only to shut me and others like me out of participation and are exclusionary.

this does mean that I have to take the time to find very good names for everything, which takes time, but always pays off in readability, for everyone, not just me.


> you definitely do not have ADHD, then

I do have ADHD (clinically diagnosed), and I disagree 100% with this statement (at least the certainty of it). In my experience, TDD and writing small chunks greatly reduces my anxiety and increases my productivity by helping me to narrow the scope of the task I'm working on.

Different people have very different experiences despite having the same diagnosis. What is exclusionary for one person may be the exact opposite for someone else.


>you definitely do not have ADHD, then. I find these absolutely impossible to trace through when I want to find out what a program is actually doing, in reality.

You cannot determine whether people have ADHD based on whether they agree with you on big vs small functions. That's not how it works at all, and your preference is just a preference, not a universal truth for everyone with ADHD.


This is why I like more bottom-up approaches to coding. Code broken up sorta arbitrarily, that's compressed moreso than abstracted, is hard to follow. But, when there's a reasonably small set of solid domain abstractions (even extremely complex ones in implementation), that's totally fine. This is where preferences vary; some folks have committed nuances of various helper libraries to memory, and find it easier to process a `fuse_zip` combinator or whatever than a loop, but for the readers who'd need to look that up, it's an attention land-mine.

API design, abstraction design, is a type of UX design, and like other forms of design, accessibility is a concern. Keeping the vocabulary fairly small and the abstractions well-behaved and consistent helps nearly everyone, but particularly ADHD folk.


The thing is that you shouldn't really care about the details of these small functions until you, for some reason, need to. At a high level it's easier to understand stringing together a bunch of small functions and assume how they work. If you have ADHD, it seems like abstracting away the details in the small functions should help you, but it seems like you aren't able to ignore the details of how each small function works even when in context it doesn't matter, which is where it hurts you.


Hehe, I agree with what you're saying here, but I think you might've misunderstood me.

> put the logic where it fucking goes, inline with its use! right there! nowhere else! don't nest it behind abstractions you don't NEED.

What you're talking about here looks more like over-abstracting. What I'm talking about is having understandable components with simple, easy to understand behaviour. I think we're on the same page.

I want to be able to understand what the component does at the first glance. I also want to be able to understand what a larger part of the system does at the first glance. But, overall, I'll always err on the side of _under-abstracting_ and duplication, like you.

One might argue that software development is about 1) defining a business problem, _then_ 2) communication (usually with "ghosts" i.e. another developer who is not in the room with you[1]), then writing the implementation. My philosophy on the subject is like this: https://sonnet.io/posts/code-sober-debug-drunk/

TL;DR it's always easier to write code than to read it, be kind to the next person reading it (it might be you)

[1] https://sonnet.io/posts/emotive-conjugation/#:~:text=Ghost%2...


Interestingly I do the same as you, partly because of the single-responsibility part of SOLID, but also because it just appeals to me to write clean, easily understandable code. It’s for different reasons than you, however; my ADHD brain can “buffer” massive amounts of code… for a while. So I mainly write things simply because it’s much easier for me to “get back to” after 6 months, well, and because it’s clean code. I think it’s interesting that we end up with the same sort of code architecture though, despite not being affected by our ADHD the same way.

For me the real struggle isn’t focus or “buffer”, it’s when things are boring. For almost all code work, even debugging, my hyperfocus grinds into gear, and if I’m left undisturbed I can quite literally work for 10 hours straight without eating. I don’t, because there is a bill to pay after doing so and I’ve worked on myself for decades to the point where I know to take breaks, eat and go home, but the hyperfocus is there.

Until things get boring. Luckily this is rare with code. Not so with everything related to the process of being allowed to write code. I try to work in places where the process people bother you as little as possible, and if I can avoid it, I’ll happily go through the rest of my career without ever pretending to listen in another standup meeting ever again.


> unmedicated, I cannot usually even read more than a few pages of a book before either losing focus or feeling drained

Tangent: ADHD is strange. I have it too, and yet when it comes to reading my symptoms appear to be the exact opposite (when I was younger I could not stop reading books until I finished them, even if I ended up skipping an entire night of sleep for it). I wouldn't be surprised if in the next decades, as we learn more about it, this diagnosis will split into multiple different comorbid disorders with overlapping symptoms.


Could that be a learned adaptation to ADHD rather than a directly caused trait? Or just hyperfocus?

I do have a pet theory that many of us ADHDers develop habits to overcompensate but do so early enough that we don't realise they're learned. For example my room growing up was always super tidy, I literally never had to be asked to clean it and I'd know straight away if someone had been in while I was out. That really confused me when I got diagnosed but in hindsight it's because without everything having a consistent place I'd constantly lose things and never find anything.


It was definitely not an adaptation - I was intrinsically motivated to keep going and see where the story was going, or what exciting new thing I could learn (my parents gave me a children's encyclopedia for my tenth birthday that I would carry around and read everywhere). So hyperfocus on the dopamine hit of learning something new, I guess.


What you describe is how I was when I was younger, too. I read everything, I read pretty fast, and I could read for hours on end. That is no longer the case at all, and I'm uncertain when or why it changed so drastically. (The "when" is tough to pinpoint because for much of my early 20s I wasn't doing much reading, and it's possible that the "why" is related to some change my brain underwent during/because of that.)


I am exactly the same way. Was a voracious & fast reader throughout schooling up until college age. Officially diagnosed ADHD in college when the material became hard enough for me to not "brute force through with my raw intelligence anymore" (psychiatrist words, not mine) so I was likely masking all the way up through high school. Younger brother was diagnosed at a more typical age which made my late diagnosis easier.

I wonder if it has to do with the mental burden we start to accumulate in transitioning to adulthood. Much easier to hyperfocus on books when we don't have the weight of finances, careers, complex relationships, etc.


I lost my reading tendency when life got harder. The urgency and anxiety around keeping up with everyday life made activities like reading feel almost unsettlingly slow and “unproductive”.

I realized that and did something about it, but I didn’t allow myself to enjoy a book for around 10 years. Not sure if that resonates with you (I hope not!) but I figured I’d mention it just in case. I was totally unaware of what I was doing at the time.


I was similar. Went through countless sci-fi novels, from Crichton’s complete works to the Admiral Thrawn series.

I found a hack into keeping up with my reading, which is to use a kindle with wireless page turner attached to a mechanical arm.

I’m able to fall asleep reading like I used to and slowly make my way through extended science fiction. The effort to read is very low, the last page always set.

I don’t often stay up way past bedtime from reading anymore, but it did happen this year for me reading Symphony of Secrets by Brendan Slocumb.


What did you do about it? I'm exclusively a vacation reader. When I have limited free time, reading competes with other hobbies or media and almost never wins just because a book is such a huge time investment compared to a movie or a video game and I'd really like to change that.


Can’t speak for the OP but in my case, reading is what I do to unwind before sleep. If I let myself work on hobbies or projects until I want to sleep I’ll either be up until 5am doing “one more thing” or won’t be able to sleep because my brain is still working. I need to stop all “thinking” activities at least an hour before I want to sleep. So I read in that space. The caveat to this (because I can also read until 5am if I’m engrossed) is the moment I notice I’m feeling sleepy or I’ve had to go back and re-read a page because I drifted mentally, then it’s time to shut the reader and go to sleep. If I try to power through to finish the chapter, I’m too likely to have caught a second wind and be up all night.


I use a kindle set to the lowest brightness setting and really make a point of reading before I sleep most nights.

I’ve drilled it into my head that if I’m not reading or focused on quality sleep, everything else will suffer. It’s been true for decades and I’ve been disappointed by finding it out again enough times that I generally believe it now.

No phone, no late night computer/work, just reading if I’m not asleep and otherwise sleep. It’s so important.

I don’t claim to be a particularly intelligent person by any means. This change has done wonders for being less stupid, however. Reading really does stimulate your brain in such positive ways. You probably won’t notice when you’ve stopped for a while and that’s a serious shame; we’d probably all read more if it was more obvious how much better we are when we do it. And of course the sleep hygiene plays a huge role too.

Occasionally I fall off the wagon for a week or two due to family stuff. To be honest, my wife hates that I read at night, and that’s friction as well. When your brain has a tendency to work against you though, you really do need to take action to build routines that keep you sane and on a good trajectory. My wife would hate it more if I allowed myself to sleep less and indulge in other things in the evening, so this is a lesser evil that keeps my cup full so to speak and helps me stay on track.

ADHD is all about the pit of success and elimination of problematic systems or temptations, in my experience. Make it hard to fail.

Edit: I should add that while it seems boring or impossible with limited time, the routine encourages better sleep (at least in my case) and as a result, better results of my efforts through the subsequent days and weeks and so on. I end up with more time to do stuff I like because I’m not making as many poor choices due to a lack of sleep, corresponding poorer diet, decreased productivity, etc.

I struggled with this part for most of my life and didn’t truly believe it would make a difference until it did. I’m a much better person when I let myself sleep. My brain yells at me not to most nights, but if I can cut through that I will have a better following day without fail. That effect compounds far better than I would have guessed before.

The whole basis of making this work is the commitment and ideally interest in cozying up with a book you want to read. Soon after, because you aren’t 13 anymore, you will pass out. If not, you gain the significant benefit of having not been staring at a screen, engaged in thought, eating, worrying, etc. Your sleep quality will increase even if you read extra some nights.


I keep a book near me while working.

I can't really 'break' on the computer because that's where the work is happening. But I can push down the laptop screen for a little while and read for 10 minutes, no problem. A physical book is a different modality, different everything, and sticky enough to keep my interest but not so sticky I'll lose an hour without noticing.

Do that 2 or 3 times during your workday and you'll be finishing books in no time.


I use a similar technique with ebooks on my phone.


What did you do about it? If you don't mind expanding on that.


Not at all, I responded to someone else about that here:

https://news.ycombinator.com/item?id=36729512


When did you start using the internet regularly? (It's possible that you didn't change, it's just that your brain found an easier dopamine delivery system).


Way, way earlier. I definitely had different dopamine delivery systems during those years when I wasn't reading a lot. I was doing a lot of things differently in general during that time. But it was surprising and disappointing that I never seemed to regain quite the same reading "abilities" (for lack of a better word) even after years of regular reading again.

Don't get me wrong, I read all the time now, but I can't burn through books like I did back then, I don't read faster than I can subvocalize, and it can be challenging to stay focused or retain what I read.

A few years ago, I spent about nine months reading a single book about Napoleon. It was pretty long, detailed, and complex, so it wasn't intended to be an easy/quick read, but it still took me a pretty long time to get through it while giving it the proper focus. But the annoying part is that a couple years later, I struggle to remember details about Napoleon. I know, memory needs to be practiced and reinforced and such, but I had hoped the length of my immersion in the subject would've done a bit more in this area on its own.


I managed to slowly claw it back by completely changing my reading method. I download ebooks on my phone and read in 5-10 min chunks throughout the day (where I’d normally be using social media or checking the news) followed by 1 hour at the end.


You get hyperfocus when you're interested in something. So both "can't read a paragraph" and "can read non-stop" happen in ADHD, depending on your interest in reading.


Already has to a degree - hyperactive type and inattentive type.

But you've also got to consider personal taste - hyperfixation happens when it's something you're interested in. Someone else might hyperfocus on video games, or trawling ebay, or researching a topic.


I am famous for "ratholing" for hours on some specific thing that caught my interest, usually with little sense of the time that has passed, and almost exclusively on problems that aren't at all what I'm supposed to be focused on at the time


Not a disagreement, more of a yes-and: “interested” may not necessarily mean personal interest, and the subject of hyperfocus might not be a personal interest. It can also manifest, for example, as obsession with solving some work problem or chore which isn’t appealing at all until begun. Or even some unrelated yak-shaving tangent that fits none of these categories.


Yeah, from personal experience I'll happily sit there and hand-lint a file. It's... calming? So I wonder what the right word to use in place of "interest" is. Attention?


I read recently that they're starting to view hyperactive and inattentive as different presentations of the same symptoms/root causes rather than different types, i.e. the same things that make hyperactive ADHD unable to sit still make inattentive ADHD constantly go off on mental tangents.


I have both things going on, you might say. When I first settle down with a book or long form reading after not having done it a while, it's a considerable effort to focus. Typically at some point after forcing myself back to the text and rereading paragraphs many times I will get immersed in the text. Focus improves and reading speeds up, and for a book I might binge-read it in a couple days, with it occupying my thoughts even when I put it down.


Recently I've heard ADHD described as "the inability (or weak ability) to direct attention" rather than a lack of it. So hyper fixation could be a symptom.


I have the same experience with books. For me I believe it's due to a complicated synergy between my autism and my ADHD.


Same here. I try to see the bright side, where I am basically forced to write "clean" code because anything else just doesn't fit my buffer.

And I feel a kinship with current LLMs with small context sizes.


It's funny: my input buffer may be small and it's a large mental drain to read busy code (ADHD con), but once I grasp how a code base works, it's just there (ADHD pro).

It makes me wonder if it's related to the hunter-gatherer origin hypothesis. In that scenario, covering large tracts of land was beneficial, and that ability can be used to mentally map code. The flip side is that this also leads to distractibility and boredom when paying attention to monotonous details.


This is true for me as well, but the amount of time/effort it takes to get to that point can vary wildly, sometimes being a bit prohibitive, other times never fully getting there for the duration of the job. If I wrote the code, or was present for most of its growth, or have spent whatever time it took in this instance, I am pretty damn efficient with it indefinitely, as long as I don't for whatever reason take a huge break from working with that codebase. (As a ZFS fan, the silly analogy for my brain that I half-jokingly give people here is that I have a high amount of L2ARC)


I did have this weird lightbulb moment a few weeks ago where for a moment I felt like an LLM was the perfect parallel to describe how my brain worked


> I half-jokingly tend to describe it to people as having a "small brain buffer"

The good news is that "smallness of brain buffer" doesn't matter very much if you are trying to implement anything non-trivial. Most useful software is so complex that it's impossible to keep all of it in your head in any case. But we can divide and conquer complexity without holding too much in our limited human heads at any single moment in time.

The real art is how you divide and conquer. Having a bigger "buffer" merely delays (slightly) the point at which you have no other choice but to start doing just that.


I disagree. It really does matter if you have any afflictions that exacerbate it.

It maybe doesn’t matter for relative performance versus other developers but it does matter for the individual.

Having frequent context switches just to keep on top of your work can be really draining.

How much it matters also really depends on the languages you’re working in. Taking a couple languages I use regularly for example:

Python (or any dynamic language), as much as I enjoy it, can be a nightmare because of how dynamic it is. You have to keep way more of any given app in your head at a time.

Rust by comparison (or any similarly static language) is much easier for me to just focus on the very local code without caring about the rest.


On the other hand, I'd encourage you to see this awareness of cognitive load as a strength. Good code is approachable, and some of the worst code I've written was when I felt the most brilliant. If I've used all my brainpower to write something, then what do I do later when I need to debug it or integrate it? So now I really try to write such that I lower the cognitive load for future me and other developers.


At least for me, having naturally experienced needing to context switch often (getting distracted, multithreading, or refreshing forgotten context), I have gotten good at it not by remembering more but by storing the context in an easily retrievable place. I keep tabs open, I print out stuff even when breakpoint debugging, and I reduce my cognitive load by default. I remember where to look to find the data, because I don't remember the data. I keep the structure in my head and remember only where the details live. Like abstracting.


People are shocked when they see how many tabs I have open. I just checked and I've got 21 windows, each with at least 10 tabs, but the more frequently used ones are 30+. But it really does help. I also group them by topic, and when I need to focus I start a new window with the tabs for that particular task.


That's not many tabs…

I have alone in this window here 4682 tabs open.

How come? Well, I just don't close them until I've decided whether I want to bookmark them. After months with a browser session, quite a few tabs pile up. (When it gets out of hand I bookmark whole windows… so there remains at least the theoretical possibility that I sort this out later. Even that happens only rarely.)

Before someone asks: from the technical standpoint it's quite easy to have so many open tabs. All you need is Firefox, Tree-Style-Tabs, and tab auto-discard ("sleep"). It then fits in under 2.5 GB of RAM (which is less than Chrome with even a few tabs).


Modern python is no longer dynamic. Type checking is a big part of python now. Python also has ADTs, which match rust's in power.


That's great if you're in a codebase at a workplace which enforces python type annotation checking.

Unfortunately, I can tell you from experience that out in the real world, at fortune 500 companies, there are millions of lines of untyped Python doing critical work while being full of subtle type errors which should be compile time errors but will rear their heads as runtime errors when it's least expected.
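To make that concrete, here's a toy sketch (function and values made up, not from any real codebase) of the kind of bug I mean: without annotations, nothing stops a string sneaking in from a CSV or JSON payload, and it only blows up at runtime.

```python
# Hypothetical example: an untyped function accepts a string "price"
# without complaint, and the error surfaces only when the code runs.

def apply_discount(price, discount_pct):
    # No annotations, so nothing stops `price` from arriving as a str.
    return price - price * discount_pct / 100

print(apply_discount(100.0, 10))  # 90.0

try:
    # "100.0" * 10 is string repetition; dividing that str by 100 raises.
    apply_discount("100.0", 10)
except TypeError as exc:
    print("runtime failure:", exc)
```

Annotating `price: float` would let a checker like mypy reject the second call before the code ever runs.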


It's changed a lot recently. I would say the majority of companies now use type checking.

It's def not as pervasive yet as Typescript.


Do you work at the majority of companies? I can tell you that I’ve worked with code bases from each of the FAANG companies that don’t use type checking, because it’s optional. Anything that’s optional in a language will not be ubiquitous

Your statements are incongruent with the reality of production.


I've likely worked for more companies than you in the last 5 years or so due to my personality. I don't stay at one place for long. Actually, I've probably worked for more in my entire career, but only the last 5 years should be relevant.

Even so where I worked is only part of the equation. You can look up which companies use python types in a google search.

Let me add more nuance: companies that use python as glue code, i.e. as a scripting language, don't use types. This makes sense, as that python is considered not as important as the main language.

For companies that are primarily based on python... the majority use types. The ones that don't are actively migrating. This includes faang and subsidiaries. In fact type checkers for python are likely to be built by faang or companies close to that tier.


I’m unsure how you can so confidently make that assertion.

Even at companies that do use type hints, there’ll be tons of legacy code that won’t (and probably tons of new code as well if we’re being realistic).

It’s like saying every company uses bash scripts. It’s not really indicative of the prevalence in actual code bases in production use other than saying it may be used somewhere within the company.

Between this and your other comments, you’re making very widespread comments that can’t logically apply to everything and nor do they in my experience.


>Even at companies that do use type hints, there’ll be tons of legacy code that won’t (and probably tons of new code as well if we’re being realistic).

The first part is true, I agree with that assessment and I never made a contrary claim. Companies are migrating.

The second part in parenthesis, is less common, I don't agree that it's a generality among companies that have python as a primary language.

>It’s like saying every company uses bash scripts. It’s not really indicative of the prevalence in actual code bases in production use other than saying it may be used somewhere within the company.

I don't even know what you're getting at with this example. Tons of companies use python "somewhere" within the company. I'm sure in those cases it's often not typed.

But for companies or teams that use python as a primary language, it's typically typed or in the process of getting migrated to be typed. That is the nuance I added to my claim.

>Between this and your other comments, you’re making very widespread comments that can’t logically apply to everything and nor do they in my experience.

Except you made statements that are factually wrong. I literally ran mypy on some code, and the statements from your other comment were categorically incorrect. Usually these debates are anecdotal, so logic doesn't apply as it's just fuzzy opinions regarding social aspects of society. But that's not the case here. You made factually incorrect statements, and that has bearing on the correctness of your anecdotal statements too.


The problem is less the language and more the culture around it. Typing will always be second-class, culturally. Additionally, mypy feels much slower to use versus something like Rust’s ‘cargo check’.

Regarding ADTs: do type checkers ensure totality (all cases covered) when pattern matching?

(I’m not casting aspersions here. It is what it is, you know?)


>The problem is less the language and more the culture around it. Typing will always be second-class, culturally. Additionally, mypy feels much slower to use versus something like Rust’s ‘cargo check’.

Agree with mypy being slow. As for the culture it's largely moving in the right direction and it's at a point where you're more likely to find a python shop to be using typing than not. Of course for scripting I think people may not be so readily adopting types.

>Regarding ADTs: do type checkers ensure totality (all cases covered) when pattern matching?

Yes it does.

Cast your initial presumption aside. This was a huge surprise to me too. Everybody loves typescript but basic usage of types in typescript is actually less powerful because it doesn't support this.

You have to use it in a specific way though.

https://tech.preferred.jp/en/blog/python-exhaustive-union-ma...
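For anyone curious, here's a minimal sketch of the exhaustive-matching pattern the linked post describes (names here are made up). `typing.assert_never` is in the stdlib from Python 3.11; it's defined inline below so the sketch also runs on older interpreters.

```python
# A poor man's ADT as a union of dataclasses, with an exhaustiveness
# guard: if you add a variant to Shape and forget a branch in area(),
# mypy narrows `shape` to Never and flags the assert_never() call.
from dataclasses import dataclass
from typing import NoReturn, Union

def assert_never(value: NoReturn) -> NoReturn:
    # Inline stand-in for typing.assert_never (stdlib since 3.11).
    raise AssertionError(f"Unhandled case: {value!r}")

@dataclass
class Circle:
    radius: float

@dataclass
class Square:
    side: float

Shape = Union[Circle, Square]

def area(shape: Shape) -> float:
    if isinstance(shape, Circle):
        return 3.14159 * shape.radius ** 2
    if isinstance(shape, Square):
        return shape.side ** 2
    # Unreachable while the union is fully handled; a missing branch
    # becomes a type error at check time, not a surprise at runtime.
    assert_never(shape)

print(area(Square(2.0)))  # 4.0
```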


Type hinting, not assertion, is a part of Python.

Type checking is analysis post facto and doesn’t account for dynamicism of classes.

The very ability to self modify an instance at any time or use __getattr__ makes it incredibly dynamic.

It also doesn’t account for every dependency having accurate type hints or any at all.


Modern python development is done in conjunction with an external type checker. That's what I meant.

My mistake for not being clear. Obviously the python interpreter itself does not do any type checks.

It's sort of like how modern development with javascript is done with an external compiler for another language (typescript) that compiles a typed language into one without types.


Yeah but my rebuttal is that even with type hints and adequate type checking, that Python is too dynamic to trust consistent behaviour.

An object can conform to a type in name only, but that’s not enough to tell you what methods or attributes exist on it at any given time.


Monkey patching is rarely done.

In practice, your rebuttal almost never occurs. I believe these type checkers can detect monkey patching and trigger a type error or warning.
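To illustrate (a toy example; the exact mypy wording may differ by version): an attribute patched onto an instance runs fine under CPython but trips the checker.

```python
# Runs without complaint at runtime, but mypy reports something like
#   error: "Config" has no attribute "debug"  [attr-defined]
# on the monkey-patched assignment below.
class Config:
    def __init__(self) -> None:
        self.verbose = False

cfg = Config()
cfg.debug = True  # dynamic attribute: legal at runtime, a type error to mypy
print(cfg.verbose, cfg.debug)
```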


In practice it does occur very often. I’d suggest not ascribing your own experience to my own. I’ve been professionally coding in Python for over a decade in very varied codebases.

There are tons of libraries in many companies that use dynamic attribute lookups for efficiency reasons, when adapting to different data sources and the like. Or pass through lookups on nested objects. Or they’re dynamically looked up on bound libraries from other languages and frameworks.

And these type checkers cannot detect it. Python’s dictionary access is not guarded by the type checkers and neither is __getattr__.

If you’re making such wide sweeping statements you need to be more familiar with the subject matter.


[flagged]


Your code is not what I’m describing at all. You’re just talking past me without actually trying to understand what I’m saying or reading the specifics.

A Python instance is effectively a dictionary. The contents of that dictionary can be mutated at runtime to disagree with the type that is specified. This is perhaps an antipattern but it’s not uncommon.

Attribute lookups can be dynamic as well without any typing. The typing can be lofted out and specified at the lookup site but that is also no guarantee of what you’ll get at runtime.

You cannot type check this dunder method access.

    class A(dict):
        def __getattr__(self, name):
            return self[name]

You may say “oh but that’s uncommon or bad practice” but it’s unfortunately quite common in many production systems because Python has encouraged embracing its dynamicism for decades. That’s not a flaw, that’s a power. But it definitely has its pros and cons.

One may also say that “well that’s just void pointer casting in C or other languages”, which is also true and equally problematic. That’s why newer compiled languages like Rust and Swift have options for sum types / enums that carry a value to design around it in a type safe manner.

———

To your other point on courtesy…

If I say you’re unfamiliar with the details, it’s because you yourself said you aren’t sure, YET you claim your world view of type checking is correct and dynamicism is rare. You don’t read my comments; you substitute your own.

If I am being rude it is only because your default world view appears to be “the ideal in my head is the way it is”.

Take for example when you say “your rebuttal almost never occurs” or when you say pretty much every tech company uses type checking. You ascribe your own world view with no room for greys or real world variance.

If I say: “but this is my experience” your response is “well that’s not the reality for most cases”. So if I am rude it is because you talk in absolutes and only correct yourself when pushed back upon.

Perhaps do not talk so assuredly in general. Your experiences are not that of others yet you talk as if they’re the norm.

So if I am rude, it is because you are being rude in your responses and perhaps don’t realize it. You’ve basically told everyone who’s responded to you that their experiences aren’t correct.

Anyway I shall not respond further to you because this discussion is hyper fixating on something that was only intended to be an example. It serves no purpose to continue arguing


>A Python instance is effectively a dictionary. The contents of that dictionary can be mutated at runtime to disagree with the type that is specified. This is perhaps an antipattern but it’s not uncommon.

Yes, I know. But then you will see the code addresses this. Mypy caught the monkey patching. Look at it. I added a property to an instance dynamically and mypy caught it.

>You may say “oh but that’s uncommon or bad practice” but it’s unfortunately quite common in many production systems because Python has encouraged embracing its dynamicism for decades. That’s not a flaw, that’s a power. But it definitely has its pros and cons.

I addressed this in the post you replied to. I specifically stated it. Let me quote it:

"__getattr__ is type checked too as long as you type the method signature, I have seen some weirdness around this area for type checkers but that's not the main problem. I understand where you are getting at with __getattr__. When you use this you're essentially creating something akin to a function with a string as a parameter. The contents of a string can't be type checked and if all methods are defined this way on a class none of it can be checked."

You addressed a point I already addressed, and didn't address the points you were factually wrong about: Dictionary access is type checked. Monkey patching is also type checked.

>Anyway I shall not respond further to you because this discussion is hyper fixating on something that was only intended to be an example. It serves no purpose to continue arguing

Completely agreed let's move on from all the BS. But you said some things that were factually wrong about python Dicts not being type checked. You said dictionary access is not type checked. Please address those things. I am interested in your viewpoint on this matter because I can learn something if my statement is wrong.


[flagged]


>Nonsense. Python "generates" this:

    Traceback (most recent call last):
      File "/home/me/test.py", line 3, in <module>
        print(x[3])
              ~^^^
    KeyError: 3

So? Python generates that at runtime, but the point was demonstrating type checking. The output you saw was not generated by python; it was generated by an external type checker. The type checker caught the python exception without running any code.

>> The contents of a string can't be type checked […]

>Nonsense. This may be true for Python's type checking tools, but that's not a general limitation of type checking.

The context is python. We're talking about python. I'm making a statement about python. Nobody is talking about scala. I don't know why you're getting into scala stuff.

There is literally nothing in my statement to indicate I'm making a general statement about type checking. But I will say checking for the contents of a string is rare for a type checker to do. That is a general statement that is generally true.

>Some people have indeed infinite egos. If there would be just anything else besides that…

>I wonder every time why the most clueless are the loudest. That's so embarrassing.

Hey can you please stop being rude? The guy made factually incorrect statements and so did you. That's not an ego thing. It's just true that he's wrong. Everybody makes mistakes... people shouldn't get worked up about someone else identifying a mistake.

I too made a mistake. And I admitted my mistake. See my first couple of posts. I admitted I was wrong: the python interpreter doesn't do type checks. It was an error on my part for not clarifying I meant python development in general with external tools.


> Modern python is no longer dynamic.

Of course it still is.

The type hints have no runtime semantics whatsoever.

The actual type-checking still happens at runtime in Python and is not related to any static checks. From the perspective of Python's interpreter, the type hints don't even exist. They're like comments…

Static type checking is not part of Python!

> Python also has ADTs which match it in power to rust.

All Turing-complete languages match each other in power…

Besides that:

Python doesn't have a static type system in the first place, so it can't compete with Rust in this regard. That's not even the same game.


>Of course it still is.
>
>The type hints have no runtime semantics whatsoever.

I clarified my meaning in another comment. You're right, I meant to SAY modern development involves type checking with python. But not with the interpreter. My mistake for not being clear.

>All Turing-complete languages match each other in power…

I obviously mean type checking. Additionally, what you're saying isn't even true: not all Turing-complete languages match in power or capability.

>Python doesn't have a static type system in the first place, so it can't compete with Rust in this regard. That's not even the same game.

Modern python development is done in conjunction with a type checker. Sort of like how modern javascript development is done with typescript. That levels up the game to where python is roughly in the same class as rust when it comes to type correctness.

Depending on the type checker, Python roughly matches the power of Rust. There are differences, but for ADTs they are pretty similar in capability.


This idea of a 'small brain buffer' is why I love writing small atomic notes in wikis [1]. It lets me break problems down into simple pieces that I can later assemble upward in abstraction, rather than try to hold complex ideas in my head and compile them together in one big document. The buffer waxes and wanes, and I need to be able to adapt my writing process so I'm productive no matter where it is at the moment.

[1] https://notes.andymatuschak.org/Evergreen_notes


I've been doing this too in Obsidian and really like it.

Notes can be as short or long as feels right. If they need to grow they can grow, sometimes they get split into sub notes.

I have notes with titles like "Cardboard furniture", "Staging patch changes in VSCode", and "Hilbert Transform".

Some are one or two sentences, some are more like essays with subheadings.

Small notes tend to have one or two links to other notes. Longer notes have more links and tend towards centralization (eg all booking info about an upcoming work trip).

Search makes it easy to find the notes that don't have links.


Yep, Obsidian is the best! Two patterns I use a lot to help with this small brain buffer problem are collections & streams.

Collections (sometimes called 'Maps of Content') are sets of internal & external links or resources around a particular topic, e.g. 'Self Deception', 'Creativity', 'Air Quality', etc. The page names represent the topic so they're easy to name, and if I can't find a page I go to these first before search.

Streams are pages with date headers that contain small notes around a broad topic. For instance, 'Passing Thoughts' are random ideas, 'Story Prompts' are ideas for stories, and 'Inbox' are links to read and any notes I have on them. Streams do three things for me:

1. I don't know how big an idea is until I write it, and this stream pattern lets me optionally break ideas off into their own pages if they get to a certain level of size/complexity.

2. I can quickly capture without having to create a new page, since I'm at ~2000 now and each subsequent page makes search less effective.

3. I can avoid the challenge of naming pages, which is often harder than it seems. For instance, I've taken to naming certain pages like Andy M's evergreen notes style of declarative claims/statements, like 'Recognizing our influences empowers our creativity', 'The curiosity driving information addiction may be due to a sense of deprivation', and 'Good ideas deserve good stewards'. To do this clearly, concisely, and scoped correctly is its own challenge and worth the time investment because it really lets me build on strong foundations, and it prevents similar pages from proliferating in search by being easy to find.


I like this technique. Thanks.


It's been a game changer for me. I always did this anyway in other Notes apps, but Obsidian just does a great job of making it feel like it almost prefers you to write notes like that.


How are you aggregating in Obsidian? I feel like I haven't cracked the code on this one


Do you mean other than MoCs (Maps of Content)?

I use the Waypoint plugin with folders to create auto-generated maps of content within folder hierarchies. Works nicely to give me an index to use as a first port of call in a subject and works well with graph view.


Do you have this issue of where you start in on something, you’re making progress, there’s velocity, the work seems cogent, and then...it’s gone.

Whatever point you had, whatever drive was there, is just gone. The wind has calmed and the sails just drop loose on the mast.

This happens to me, both in my own work, or writing something like this, and then I just dump it.

Kind of aggravating.


Yeah definitely! I don't trust that my writing energy nor my memory will continue into the future, so I have a few tactics I use:

If an idea flashes into my head, I quickly write it shorthand anywhere. See my other comment (https://news.ycombinator.com/item?id=36726680) on having 'Streams' pages that offer an immediate working space to write. I let the idea expand with that progress & velocity, and once I've reached the end of my writing juices, if it's important enough I copy/paste it into a better place. Organizing is one task that can be done when the winds of flow state calm.

Similarly, breaking ideas into small bite-sized parts helps because those are easier to complete with less energy. Better to make incremental forward progress than keep trying to do the same big thing over and over.

Also, I try to start with a summarized version of what the thought is. Again, shorthanding the key points is helpful. It's easy to have a big thought that has several constituent parts and immediately dive into some details on one of those parts, only to find that the big picture sort of disappears. Better to capture the TLDR first.

Finally, I'm okay with abandoning ideas, I trust that the important ones will come back, and I make sure to leave affordances for myself so I can always return to what I was doing later. Some things I do to make revisiting ideas easier include:

1. I'll review my 'Passing Thoughts' stream at least once a week to see if my new context or a new perspective allows for the ideas to come back.

2. I keep track of work-in-progress work with a #WIP tag and new pages I've made in 'What's New' stream.

3. I keep track of my thought process with occasional self notes in 'Whats New', e.g. "I dove into X by expanding Y and Z but think I have to reorganize section A to fit it in"

4. In the organizing step I think carefully about how I name pages (including using aliases)

5. I densely link all related ideas so there's many ways to find them.


That's similar to how I organize my ideas as well. I personally use google keep for this, or just plain text files. I don't try too hard to connect them, maybe I should though.


Yes, it's the not so nice flipside of hyperfocus.

What I do is: Not feel in the slightest guilty about it, because I just came off a stretch of real 10x productivity, and then go do something else for a half hour or so that exercises a different part of the brain - grab one of the many musical instruments around me, for instance.


All the damn time. ADHD is a curse.


You should check out TiddlyWiki as it’s designed around the concept that small linkable notes are the best way to organize.

https://tiddlywiki.com/


>it is easy for me to flounder when debugging if I feel unable to visually see or mentally hold the entire problem in my head at once.

So much this. Fuck OOP-inheritance-thinking for spreading implementations to so many files.


I also find this splitting-out that happens with OOP extremely unhelpful. I'd never considered that problem being exacerbated by ADHD, but it makes perfect sense.


The opposite to this: sum types (enums with values).

You have all the logic in one place, and if you forget to handle a case in most languages you're forced by the compiler to be explicit.


In my first job after college, we had codebases that took both approaches: an OPP-y Java codebase with lots of inheritance and some FP-oriented codebases that used Scala case classes. As a novice, I definitely found the case class handling easier to browse and understand at the time.

The OOP stuff can be made a little easier to work with if your editor can show a lot of buffers at once and makes it easy to search through buffers. Trying to just flip through all of those files with tabs would have been really hard for me.


> flounder when debugging if I feel unable to visually see or mentally hold the entire problem in my head at once

Surely this is a general problem and not specific to programmers with ADD? Or do you mean you are relatively worse at "needle sting debugging"?

Your second paragraph is just sane advice in general. I wish your mindset was more common among my colleagues ...


ADHD is a quantitative disorder, not qualitative.

To some extent, everyone gets mild versions of ADHD symptoms, especially if tired/bad nutrition/no exercise, etc.

ADHD is when these symptoms harm quality of life significantly across many domains, and have done so since you were prepubescent (it's a brain malformation, not a series of bad habits).


To expand on that, I think it can be clarified somewhat by describing the mechanism at work.

People without ADHD have a generally-working motivation/reward system which is undergirded partially by dopamine. This neurotransmitter is relatively low in ADHD brains, which seems to lead to signal:noise issues. That’s likely a cause of having a highly distractible baseline state, but also leads to bursts of dopamine seeming more salient and arousing. You have a harder time regulating motivation and evaluating perceived rewards.

A lot of what happens in the brain is based on relative states. If your dopamine is always low, something making it spike will seem more interesting than it necessarily should. If your dopamine trends higher, you’re going to have an easier time regulating interest, motivation, and an ability to transition through things you need to do — particularly things you don’t want to do.

The inverse of a dopamine spike is a dopamine drop. This impacts the brain in important ways as well. In a person with ADHD, going from a high to a low can often cause serious declines in mood and energy. In a more typical brain, the same effects are present but less severe.

So while these are all normal challenges people face because they’re the product of the same brain mechanics, a low dopamine baseline makes these mechanics more extreme in people with ADHD. They are primed for distraction and clinging to things which increase their dopamine. This could be video games, relationships, drugs, hobbies, etc. It could even be work too. They’re also primed for far harder falls from their highs. This extreme oscillation often means some aspect of life suffers due to a lack of balance. That’s why it can be a disability.

Hopefully that helps explain the quantitative aspect.


It's also why hyperfocus is common and more extreme in ADHD. You can't always control it but when you do get into something everything else becomes a blur and you can go for hours because that high burst of dopamine obliterates everything else.


Thank you for this very straightforward and concise explanation

I'm almost 40 and just realizing that I probably have some version of ADHD and your description of the Dopamine roller-coaster fits perfectly with my experience, both good and bad.


I highly recommend reading up on it if that particular aspect resonates with you. Driven to Distraction is a very useful and informative read with a decent amount of technical information as well as personal accounts to help relate how the condition manifests.

It’s very diverse. It isn’t always bad, but often is, and even people with good outcomes can often benefit from the insight of understanding why they’re different.


That psychiatrists are bestowed with the authority to mass-prescribe amphetamines upon accurately deducing the presence of ADHD from brief patient self-evaluations is eyebrow-raising.

Looking forward to when portable EEG monitors are available at an affordable price.


Like most discussions of ADHD it’s hard for people with it to explain because the only terms they have for it are the same terms everyone else has for things, so it sounds like the sort of problems everyone has. But when you have ADHD it’s also different because you can’t “just do X” where X is something that works for people without ADHD because something that you can’t describe is different.

An (imperfect) analogy I saw recently: it's like talking about debt between someone from a middle-class household (non-ADHD people) and someone from a household in poverty. Sure, a lot of the problems are similar in both word and shape, but the middle-class person can't imagine "had to take out a payday loan to buy used tires from Joe's tire shack", and the person in poverty can't understand "we had to replace the AC this year, so we cut back our spending and skipped the family reunion at the beach until the credit card was paid off".


Relative to my peers I have often found myself to have lower tolerances in these areas. It sometimes surprised people when I felt something was complex (to work with, not conceptually) when they did not, or at the length of time it could sometimes take me to fix something that involved digging through a lot of code.


Hey you are a younger version of me ! I completely agree, and I am now very selective wrt where and with who I work and I still love my job.


I relate to this, one reason I avoid IDEs is the "window" where you can actually see code is so limited. It feels like coding through a paper towel tube.

I'll make my life way harder in order to optimize the view of the code.


"and why I almost exclusively join small/new teams who don't yet have an enormous codebase that I'll have to wrap my head around."

How do you find these teams? In my 11 years, I don't think I've ever found myself on a greenfield project.


I live in Southern California where there was no shortage (until recently, at least) of very early-stage startups, and for years I exclusively used startup-focused sites like AngelList (now Wellfound) when jobhunting. The largest startup had 12 people when I joined. I worked for a company of 40ish once, but they weren't primarily a tech company and engineering consisted of only 3 people.

I should perhaps disclaim that while this worked perfectly for me for close to a decade and I was never without work for more than a couple weeks at a time, I have currently been unemployed for a few months and have had a considerably more difficult time than at any point in the past.


Also worth noting: in general, a company with fewer than 50 employees doesn't have to meet many of the requirements larger ones do when it comes to things like the FMLA.


Yeah but the demand for talent was such that, for at least a long time, the benefits and perks at startups usually exceeded those of the bigger companies. This may not be the case now, but I still think what you describe is not true often enough to be considered a general truism


Not true enough? It’s literally how the laws are written. They exempt companies with fewer than X employees, where X is almost always 50


> As an engineer for 10+ years with pretty severe ADHD (unmedicated...

How did you do it? I wasn't diagnosed until my early adulthood, but even with medication, I feel like I can barely keep my head above water.


I was diagnosed and treated young (around 12-13). I was not very well controlled at all at the time, and wouldn't be until my 20s, but I at least was aware of the problem and seeing a doctor regularly from that point on.

Everyone is different, but the key things for me were 1) never settling medication-wise until I found something that truly worked well for me (a process that took years but well-worth it) and 2) utilizing my newfound focus to put systems in place to help me be productive while working around my particular weaknesses. (By this I mean how I handle todos, calendars, notes, workflow processes, etc.) #1 was absolutely crucial and is when I started "feeling good" but #2 is what took me from "feeling good" to "reaching goals".

Regarding the medication, I had a very good doctor. In the beginning, medication would either not work for me, or it would, but for a limited period of time. I was very used to switching things frequently. So later when I would land on something that was okay-ish but not great (a point where many people consider themselves lucky and stop experimenting) I would note that it was helpful in case I wanted to return to it later, but we would move on and try something else, hoping for something better than "okay". And eventually we landed on something lesser-prescribed but that worked quite well for me and continues to work to this day. #2 was still a multi-year battle that I haven't fully won, but I've made enormous strides and I so badly wish I had gotten myself to this point a decade earlier.


> 1) never settling medication-wise until I found something that truly worked well for me

Diving into the detail, what medication have you tried and how did it work/not work for you?


Finely aged, unmedicated, severe ADHD fist bump! I share your struggles. I find drawing lots of diagrams helpful.


When a problem has my attention, nothing can faze me. I get hyperfocused. Music, colors, the need to eat or sleep... doesn't matter. All that gets put aside; the thing that has my attention has my full and undivided attention.

When something doesn't grab me, it takes enormous effort to maintain focus. It's not that other distractions are just so incredibly moving that they take my focus away. Rather, it's that it's physically difficult if not painful to focus on the thing that evades my attention, so by comparison, anything else is better.

I have no idea if my comment speaks to the paper. I started reading it because I am at the grocery store and the checkout line is so boring, but then the paper didn't quite grab me enough either, so I'm slightly more interested now in whatever happens to be on some other screen.

But if it is studying the effect of distractions, I think they have the wrong idea (or I have something other than ADHD), because it's never about the distractions, it's about the interest in the thing that needs my attention. Nothing else matters


> Rather, it's that it's physically difficult if not painful to focus on the thing that evades my attention, so by comparison, anything else is better.

If I had to describe the feeling it's the mental equivalent of trying to push the same poles of two strong magnets together, while the distractions are like trying to avoid the north-south poles from attracting to each other.

Especially the part where as soon as you stop applying force (i.e. mental effort) it flips.


Wow, that is a perfect metaphor for what I feel


"When something doesn't grab me, it takes enormous effort to maintain focus. It's not that other distractions are just so incredibly moving that they take my focus away. Rather, it's that it's physically difficult if not painful to focus on the thing that evades my attention, so by comparison, anything else is better."

Sounds like me trying to do geometry proofs homework in my freshman year of high school. I wanted to cry it felt so shitty forcing myself to do it. Eventually, I just stopped. Didn't help my teacher was insane and demanded we make copies of our work using carbon paper in the year of our lord 2005 and would grade you down if you turned in something off a copier/scanner. More tedium on top of dreadful tedium.


I'm part way into a massive stack of monotonous corporate e-learning modules and this is hitting me hard.


Sounds a fair amount like me. I'm autistic.


There are overlapping symptoms between the two. There is also comorbidity, if I'm not mistaken.


> There is also comorbidity, if I'm not mistaken.

Last I checked, it's uni-directional; if you have autism, you are more likely than the general population to have ADHD, but not the other way around.


I had something flickering in my IDE the other day while I was trying to learn my way around a large monolithic codebase at a new job. I couldn't executive function well enough to stop/hide the flickering, and I couldn't ignore it. I eventually had a meltdown and left the room (wfh, thankfully). It took me hours to recover.

Having a tidy, visually "muted" workspace - both in a literal sense (the IDE) as well as a conceptual sense (the code structure) - is very important for coping with my ADHD+ASD. Unfortunately, trying to sell DRY+KISS+YAGNI+SOLID as an accessibility/inclusivity concern has not worked out for me yet.

There's a lot of intellectual elitism running in the veins of our profession, and it makes me sad that it's often likely just a shield for other people nursing insecurities about similar struggles. We need to work on our empathy, and coding empathetically, and designing UIs empathetically.


This hits home. Especially the "hours to recover" part. I never quite appreciate how long it takes to right the ship after a day like that.

Do you have any tips for how you structure your digital workspace? Favourite tools or methods to help support your executive functioning?


The only thing that works for me is separating things into very distinct "realms" and only having one realm open at a time. No exceptions.

For example one realm is for communication. Slack, Browser, Email, and Calendar can be open. Nothing is really a distraction from anything else here. I'm just being "at work" and communicating in this mode.

Another is for coding. Literally the only things open are vim and a terminal. NO browser and NO Slack. If I need documentation then I didn't design well enough, and design is its own realm. I should know the libraries I'm using, and anything else is easily handled by vim's autocomplete/intellisense or navigating to the code.

The other two explicit realms are Writing and Design/Planning. There are more adhoc ones, but I really try to avoid adhoc-ness.

Switching realms is a hassle and requires super deliberate action. This means I can't just randomly switch between tabs and code and Slack and email and social media and just...kinda looking at things? That was my main problem. It was too easy to "move" and so I could never stop moving and somehow the entire day was gone. At no point was I goofing off, but my day just disappeared.

The only issue is that work people really really want my dot to be green on Slack at all times. They even give me the room to be on my own, but literally just having Slack open is a weird attention drain and I don't really know how to convey that. This leads to me getting most of my work done after hours and working way too long :/


> If I need documentation then I didn't design well enough,

I need documentation because my coworkers didn't design well enough.


This series of videos by Airforce Col. Mark D. Jacobsen may be helpful:

"Tools for the Life of the Mind. These videos are intended to help my students at the Air Force's School of Advanced Air & Space Studies (SAASS) develop effective mindsets and workflows for doing rigorous academic work. I will introduce a range of available tools and discuss my own workflows. Although SAASS-focused, they should appeal to anyone interested in productivity and learning."

https://www.youtube.com/watch?v=KE87q3jjFlg&list=PLHmevVAAXt...


I can absolutely relate to that sentiment. A former coworker once gleefully refactored a bunch of unit tests in a way that reduced duplication and made the reporting marginally better, at the cost of making the cognitive load vastly worse to read them. I told them I couldn't review the PR, though neglected to mention that it was because it gave me a meltdown trying.


Manager: "I sympathize, and don't have ADHD, but everyone else seems to be answer slack messages immediately, so why can't you?"


I heard about a study:

Give a stack of CT images to a radiologist with zero user interface at all (just the original CT images shown), and ask them to do a read. They cine through the stack a couple times, and do their dictation. Have them do a few dozen cases.

Give the same stack of CT images to another radiologist in an unfamiliar user interface, with buttons and menus all over the place, and ask them to do a read. Tell them to ignore the UI, they should just cine and dictate. Have them do a few dozen cases.

The ones with the distracting buttons were indeed distracted by them. They were slower and less accurate in their read.

Terrifying, honestly.


This reminds me of how Rob Pike mentioned his dislike for syntax coloring as extra cognitive load, which seems odd in regards to current IDEs filled with information and stimuli.

With LLMs now directly interfaced to IDEs, it is likely that any obstacle will trigger a need for immediate AI powered dopamine reward. I wonder how programmers brains are going to rewire in this regard, even for those not subject to ADHD, but I suspect their sense of focus will worsen even more.

In the coming years, I wouldn't be surprised if writing high level code slowly turns from mainstream to a niche art requiring focus and commitment, like assembly programming has already become. The casual developer will probably just be giving directives to an AI code bot, and will be incredulous when told how people used to write code themselves, pondering for hours on a bug in a bare boring terminal. A possibility is that what is perceived as ADHD today might become the norm with future humans.


IMO syntax coloring lessens the cognitive load. And what is "high level code"? You are telling a machine what to do, and to do it as fast as possible. Can you still make money telling a machine what to do, but not as fast? It depends, but most likely. We ultimately code to make money in one form or another; whether you keep that money or put it towards altruistic causes is independent of that shared commonality.


I have to agree. Syntax highlighting lets me recognise patterns in code without having to read it, and I actively avoid reading details I don't need.

(To me that is also what annoys me about IDEs - I want to focus on the shape of the code, not other stuff. When I focus on code, I want that code to be all that exists in my mind at that time; but I suspect whether or not you like IDEs is orthogonal to how you like your code presented)

But I also feel like there is an interesting range within developers from those of us who pattern match skim and "zoom" in on details to those developers who read code in detail, and I'm not sure those of us on different sides of that spectrum see perceptual load the same way.

On one end you find people like me who are purist about syntax and presentation because we're extremely conscious about avoiding having to read everything. E.g. I remember code by overall visual shape, and I can hold a lot of that in my head at once, but I don't remember what it says. I know where to find things rapidly, but not the details. I never try to remember the details.

On the other extreme you find people who like their code to look like K or J "line noise" or mathematical notation, because it's short and compact, which helps when they're deciphering the code symbol by symbol.

To me, the latter is the height of perceptual load, because I can't just look at it and have a rough idea what it might do, but I understand that some instead see it as stripping away extraneous details.


Interesting that you mention shapes, because for me that extends to keywords and such, e.g. I don't care whether def or class is highlighted; the shape of the code and the tokens tell me that, so colouring them is redundant and thus useless cognitive load most of the time.

The main use I have of syntax highlighting is instead making sure the computer and I agree on these tokens being what I expect them to be; it's a form of live parsing error control, a direct feedback loop, not a way for me to parse code.

My ideal colour scheme is none at all, save for subduing comments (and being able to toggle that, subduing code instead) and colouring things in/around errors when a mismatch is detected (typically, unbalanced delimiters)


We should organise a study of our own: does syntax highlighting help or hinder and does the answer change based on your level of ADHD traits.

Could be quite interesting!


I have been diagnosed with ADHD and got an especially bad score for working memory.

I find myself tired out really quickly by a lack of syntax highlighting.

With syntax highlighting, I feel like I am not really reading the code anymore, but instead just registering the patterns and parsing the color tokens.

I think the most important aspect for me, is that I don't spend time reading where a token starts and ends. Ex.: If I wanna check the function name, I just have to read the only yellow thing in around this area.

I can register the tokens on a whole page in seconds, if the code isn't too dense. I just tried the same with syntax highlighting disabled and would say that I took about twice as long to parse the entire page. (I used a different, but similar file from a microcontroller HAL)


It's not adhd specific but https://arxiv.org/abs/2008.06030

After reading this I implemented a code theme based primarily around typographic variation like weight rather than color. It uses only two colors (black and deep purple) in two weights and one italic each. I have pretty severe adhd and it's hard to judge but after using it for a few months I think this is better for me. Previously I had been using solarized light for nearly a decade for probably similar reasons.

Nano emacs was created by the author of that paper and its default themes are based on it, if you want to try it without committing to hand-rolling a theme. Personally I found that one too "light" (typographically, not color) but I also have relatively poor vision and like a large and heavy font.

https://github.com/rougier/nano-emacs


I also immediately thought of Nano Emacs when I saw this article.

There is a short talk (6min) from EmacsConf 2021 where the creator of Nano Emacs talks about his reasons for designing a simpler interface, I found it really interesting:

https://emacsconf.org/2021/talks/design/


Yeah it was kind of a mind blower to me. I'm not in a big hurry to settle an opinion but I have a feeling these are ideas that I'll be exploring and influenced by for the rest of my career.


Note: there is an effect where a less legible font makes for deeper understanding (it makes the text harder to read, so your conscious mind has time to process before your fast, intuitive System 1 jumps to conclusions).

While skimming code (i.e., most of the time), syntax highlighting is useful. But it may be also useful to turn it off occasionally, to read some parts more thoroughly.


Syntax highlight never bothered me, and could even be helpful. But I can see where Pike is coming from.

What drives me nuts is autocomplete, because it pops things into my vision automagically while I'm trying to focus on the code, and even effects a mode change (some keys do different things when autocomplete is active).

Almost if not as bad, is when I'm working on someone else's code, and they misspelled a word when defining the class/method/function/variable, and now that misspelling is everywhere in the code because every other time they used the identifier they typed the first four characters max and then mashed TAB to autocomplete it!


> What drives me nuts is autocomplete, because it pops things into my vision automagically while I'm trying to focus on the code, and even effects a mode change (some keys do different things when autocomplete is active).

Totally agree! I disable "press Enter to accept suggestion" in JetBrains IDEs for this reason.


The existence of colored syntax is too broad for a conclusion here. It's the specific highlights/colors/contrast ratios/etc. that determine whether a particular theme is working for or against you individually. Still haven't found my "perfect" scheme, but I bounce around between a small handful based on my mood at the time--sometimes the less-loud ones remove unwanted stimuli, but other times I need the louder ones in order to see/focus on the code.


I prefer colored all the way, especially when "boilerplate" words become less vibrant, like 'def, fun, function, begin, end'.

Pure white/black words are really painful because they cause paralysis when I try to re-read a different page or file from the current one.

I simply get stuck trying to decide what to read first, because everything looks exactly the same, especially outside the peak minutes of the medication.

With Copilot, for instance, I find it really easy to give more instructions or context, or restart with new instructions whenever it spills bullshit code, even in programming languages that are unusual for me.

The problem is knowing when to really give up, because I still believe that I can convince the bot to spill what I really need. If I'm not familiar with the problem, even knowing that the bot is spilling bullshit, I feel analysis paralysis deciding whether to handle it manually, step by step, like the "old days", or give the bot more context.

So becoming familiar with certain classes of problems is something helpful, instead of specific implementations in certain languages.


No, IDEs do not give you ADHD. ADHD has a clinical definition and a physiological cause in a region of the brain.


This is interesting. As a developer with ADHD, maybe I should try paring down my IDE interface. (I do take amphetamine medication though so maybe it compensates enough?)

I find it interesting though that they describe "debugging" as a monotonous activity. Maybe my experience is different from others? I personally find debugging to be way more active than coding! It involves setting breakpoints, zooming about the codebase trying to understand the flow of an app and cutting down code until you get a minimized repro. I'm often doing this live with users or other devs looking over my shoulder while I play with code all over the place and run git bisections. It's often a time sensitive activity too, because a bug might be blocking a release or requiring a fast prod rollback. From an executive dysfunction POV, a bug report is often the perfect kind of task for me, inherently scoped, usually short, with a clear success condition.

I personally thrive on that kind of work and I usually really enjoy being on the support rota (not for too long though!) In fact if anything, I enjoy it too much and bug reports don't uncommonly take me away from the actual dev work I should be doing.

I think the issue is that they probably didn't do the debugging tests under the right conditions.


If you read the paper the “debugging” task is “here’s a bunch of python code, there’s problems in the whitespace, please read through and find the whitespace problems and correct them—this is explicitly not a bug in the logic and no code needs correcting”.

They didn’t test the wrong thing, they tested exactly what they want—a high engagement task and a low engagement task.


I have ADD and I'm struggling to write a more complex Python app right now. I'm hating myself for not being able to easily skip around the code as it gets more complex with more classes and methods. So this is very apropos for me.

Thing is, I can skip around Terraform much more easily (in my normal DevOps role). I'm wondering if Python whitespace isn't "right" for me somehow. EDIT: I'm definitely more used to Terraform, but I can read and navigate nodejs far more easily than Python, it's weird. I also don't want to come across as blaming Python for this, certainly it's incredibly effective for a huge number of people!

My next step is to try to break out the classes into their own files or something like that.

I don't suppose anyone has any tips on managing Python scripts? I doubt this is even 2,000 lines yet :(


As someone on more or less the maximum dose of adderall, here are the commandments as I have been able to divine (for myself):

  - Thou shalt use a language server.  Esp. goto-definition and find-refs, as they shall light your path through third party libs.

  - Thou shalt refactor like a crazy person.  No component should have less than 3 or more than 5 significant members.  A concise codebase makes for a calm mind.

  - Thou shalt use black formatting, and obey *all* recommendations from the linter.  Thus shall the structure of your code always explain its function.

  - Thou shalt use mypy, as it is The Way.

  - Thou shalt follow the way of the Unix, creating isolated components that Do One Thing Well and define concrete APIs between them.  Thus shall you both tame the Spaghetti Monster and save bits of your work from the maw of the great destroyer This-Is-Too-Complex-So-Let's-Rewrite-The-Whole-Thing.
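A tiny, invented illustration of why commandment 4 pays off: annotate your functions, run `mypy` on the file, and wrong-typed calls get flagged before the code ever runs.

```python
# Invented example: with type annotations in place, `mypy file.py`
# catches a wrong-typed call statically, before runtime.

def total_price(quantity: int, unit_price: float) -> float:
    return quantity * unit_price

print(total_price(3, 2.5))  # fine
# total_price("3", 2.5)     # mypy: incompatible type "str"; expected "int"
```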


1: I'm using vscode and the microsoft tools for Python, so I have the LSP for Python installed and running. I've also got treesitter running. The MS Python tools are at https://marketplace.visualstudio.com/items?itemName=ms-pytho...

2: This I need to do. I wanted to use the Traitlets module for my config parameters/data so I need to refactor that part of the app.

I did wonder about something like a TerraformCloud class because so far I have a bunch of methods in there: search_workspaces, get_workspace_id, get_variable_id, is_workspace_used, get_unused_workspace, enable_workspace_module, set_workspace_as_used, set_variable, start_workspace_run, and probably more in future. Should I be breaking those up into TerraformCloud, TerraformWorkspace, and WorkspaceVariable? Sorry for the question, I am still learning, and the terrasnek module is helping a lot with all this Terraform stuff.

3: I do use the black formatter. The only thing I question is its line-length rule.

4: I didn't know about mypy, that's pretty great and I'm super grateful!

5: Yeah, I am probably failing hard at that one. I will try to return to The Holy Path Of Unix.

Thank you for all of these. I'm so very grateful for your help!
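A rough sketch of the TerraformCloud / TerraformWorkspace split asked about above (class and method names come from the question; everything inside is invented for illustration, not terrasnek's real API):

```python
# Invented sketch of splitting one big TerraformCloud class into smaller
# pieces; the names come from the question, the internals are made up.
from __future__ import annotations


class TerraformWorkspace:
    def __init__(self, workspace_id: str):
        self.workspace_id = workspace_id
        self.used = False
        self.variables: dict[str, str] = {}

    def set_variable(self, name: str, value: str) -> None:
        self.variables[name] = value

    def mark_used(self) -> None:
        self.used = True


class TerraformCloud:
    """Owns the collection of workspaces and hands them out."""

    def __init__(self):
        self._workspaces: dict[str, TerraformWorkspace] = {}

    def add_workspace(self, workspace_id: str) -> TerraformWorkspace:
        ws = TerraformWorkspace(workspace_id)
        self._workspaces[workspace_id] = ws
        return ws

    def get_unused_workspace(self) -> TerraformWorkspace | None:
        return next((w for w in self._workspaces.values() if not w.used), None)
```

The point is just that each class ends up with a handful of members that fit in your head at once, per commandment 2.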


Given the mention of great destroyer This-Is-Too-Complex-So-Let's-Rewrite-The-Whole-Thing, you seem like you know what you're talking about, but I can't comprehend the actual lesson you're trying to impart.

Would you kindly ELI5?


I am a professional Python developer. Having a good LSP is critical, but it doesn't fix Python's biggest problem: the whitespace scoping. It never gets easier; you just eventually "get used to it". Keeping methods to a size that can fit on screen helps a lot, but on larger projects this can be a lot to ask for. An LSP plus a way to mark code is usually how I like to do things.

Your criticisms are valid. Python is the flavor du jour and often shoehorned into places it doesn't work well. One of those places is scaling with lines of code. It simply begins falling apart as a nice experience once projects reach 10,000-20,000 lines. The whitespace scoping becomes much harder to deal with, and cognitive load increases in a hockey-stick way after a critical line count. Just wait until you try to remedy some other problems with mypy, only to realize it's just a stronger linter and not an actual type system :).


2000 lines in a single script file is a lot to mentally process. Unlike a large utils.py (the junk drawer of a Python package), a script often contains a wide range of functions and classes.

My best advice would be to turn this into a more organized package that you install with pipx[0] on the systems that need it. The click[1] package is helpful too.

[0] https://pypi.org/project/pipx/ [1] https://pypi.org/project/click/
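For what it's worth, the click side of that can be very small; a minimal sketch (the `greet` subcommand is invented for illustration, and this assumes click is installed):

```python
# Minimal click sketch; the `greet` subcommand is made up. With a
# pyproject entry point, pipx can then install `cli` as a standalone
# command on any machine that needs it.
import click


@click.group()
def cli():
    """Example tool grown out of a single long script."""


@cli.command()
@click.argument("name")
def greet(name):
    """Say hello (hypothetical subcommand)."""
    click.echo(f"hello, {name}")


if __name__ == "__main__":
    cli()
```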


I'll do you one better.

Learn to use search tooling. My neovim setup has ripgrep + an LSP on hand at all times. You can do the same things in VSCode.

If you don't know where something is... search, search, search. 2000 lines or 2 million lines. Searching cuts the mental load because you find what you NEED at that moment.

Need to know what refers to this function or variable? There's a click or key sequence for that.

And yeah, Python is a bit of a trip to read, the lack of braces makes it unique to parse, if you aren't used to it.


    And yeah, Python is a bit of a trip to read, 
    the lack of braces makes it unique to parse,
    if you aren't used to it.
Huge thank you for this. Just hearing that validation is helpful :)

Good point about searching. I've been using VSCode, which I use a lot for Terraform, jumping around with the outline view. I need to figure out if there's a way to search in VSCode and edit code while keeping my hands on the keyboard.

Or use neovim, because my beloved Emacs is often slow.


emacs is fine, use eglot.


If Terraform is easy for you, you might try another declarative language like something from the ML family (Elm, Haskell, OCaml, etc)

I get along very well with declarative code but find myself struggling through imperative and OOP equivalents

Many imperative languages (including Python) can be written in a declarative style, but it’s a more challenging practice to maintain
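As a small illustration of that last point, the same computation in Python written both ways:

```python
# Same result two ways: imperative (mutate an accumulator step by step)
# versus declarative (describe the result and let the language build it).
numbers = range(10)

evens_imperative = []
for n in numbers:
    if n % 2 == 0:
        evens_imperative.append(n)

evens_declarative = [n for n in numbers if n % 2 == 0]

assert evens_imperative == evens_declarative == [0, 2, 4, 6, 8]
```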


From my own personal experience, it would explain why I am more effective and efficient when I use window managers like i3wm and Yabai versus a regular desktop. Somehow tiling window managers just fit my brain perfectly, like a glove.

I don't get that cognitive-load overhead from having a million different tabs and windows and programs and notes splattered across a single screen. It's all neatly hidden away until I need it, thanks to good window management.


I'm not diagnosed with ADHD though it's likely that I have a mild form of it. I also use sway exclusively (Wayland i3) and couldn't bear to use anything else at this point.

Though my experience is the opposite, sway allows me to represent the cluttered state of my mind perfectly and to quickly ad-hoc reorder my open windows and gazillion tabs for what I am currently focussing on. It's always chaos but in a comfy way.

Quite interesting to see how different our usage seems to be: for many people tiling window managers help keep order, while for me they enable me to survive in a constant state of chaos, until I run out of RAM.


I currently use bspwm on my personal laptop and work on a Mac, and it was a brutal change to go back to a non-tiled WM at work (I tried going without as an experiment to see how I'd feel about it when I first got the new work machine; the jury is in: it's absolutely awful; Yabai is heaven in comparison).

As far back as the 1980's when I was using an Amiga, I'd always effectively work full screen or with carefully tiled layouts if I couldn't...


Perhaps a vim + tmux + magnet (mac app) setup could work for you?


I detest vim far more than I detest overlapping window managers ;) Yabai works ok. It's not ideal but its close enough.


Thanks for the recommendation then, I will check yabai out :3


Are there any similar managers for Windows, like i3wm and Yabai?



I am a successful software engineer (20+ years, worked in companies like NVIDIA, had been at Google for the last decade) and I have dyslexia. I was also diagnosed with ADHD a couple years ago but I do not want to take medications as they did not improve my quality of life.

Syntax/semantic highlighting and animations are the only reason I am in this profession. I do not "read code" - I mostly "parse" the shape of it, relying on indentations and colors to grasp what is going on. When I worked with Java, I loved that Eclipse and IntelliJ's refactoring support enabled me to quickly restructure code the way I can easily understand it.


color and shape of segments of code are far more useful to me than reading, once I've already read that code.

I need those colors and those shapes. people who say that no one needs syntax highlighting are so myopic that it makes my head spin. oh everyone is exactly like you, are they? no one needs syntax highlighting! no one!

80-column rules are the same, for me. arbitrary nonsense rules made up by people who do not have the exact brain that I have, but who assume that everyone has the exact same brain that they have.

this profession is riddled with people who think they know the best way for all software developers to work, and I swear nothing is more perfectly crafted to inspire in me the urge to murder people than people who say these things. believing you know the best way for everyone in your field to work is one of the apex caustic traits of professional software developers, and it is an extremely common trait, in my experience.

everyone is different. there is no enforced development style, or enforced language, or enforced framework which is even acceptable to even 10% of all software developers, never mind being "best for everyone."


This is why I like tabs over spaces. I like a tab to be equivalent to 4 spaces. But my coworker may prefer a tab to be 2 spaces. Another may prefer 6, etc.

Tabs let each individual render the code to their liking while keeping everything in sync. Using spaces forces everyone to one person's preference.


Is that why I find embedded programming the most fun?

I have been diagnosed with ADHD and had a really bad score on working memory. With embedded programming, there are usually few to no libraries and all I have to do is write certain values into certain hardware register addresses to get stuff done. All the relevant information is always on my screens, allowing my working memory to be occupied only by the problem or task I am working on.

I also find myself building everything in little modules that have interfaces that make it hard to misuse them, because I WILL forget how to use them and just let my IDE remind me of what functions a module has.


What IDE are you using? Also, what are you using for the build? CMake? Just curious. I've found that in general most IDEs don't work well for embedded. There's always some really annoying flaw.


Not OP, but I write in C for embedded platforms.

I use Vim and I write my build scripts in Python. Of the half-dozen dependencies I'm working with at the moment, half use GNU Autotools and the other half use CMake.

I know my targets and their respective toolchains in advance, so the automatic feature & quirks discovery of e.g. autoconf doesn't help all that much for the code I write myself. Plus, the targets I work with nowadays all have a compiler that's C11 and GCC-compatible to some degree, so my approach is generally to target ANSI C89/ISO C90 to the greatest extent possible; pulling in C99, C11, and GNU extensions only when necessary. I find myself reaching for <stdatomic.h> more and more nowadays, FWIW.


I just switched back to "never combine" on my Windows 10 taskbar after a lawyer complained on Reddit about this feature being removed in Windows 11. He said he has like 15 Word or PDF files open at a time, and he can see the titles of all of them without navigation. I know that feeling.

Never combine seems nicer than navigating a stack of 20 browser windows (the browser icon) on my stack of 20 programs on the taskbar that I have to swing my mouse pointer around like a neanderthal just to look for where I need to swing it to next.

If it's a browser specifically, I have to look for the right tab of the 20 non-merged tabs too. That's 3 neanderthal grunts with the mouse to find where I was 15 min ago.

Idea: Why not stack the browser tabs too so it's all in one place.

When the windows aren't combined, I like that I notice when the taskbar starts looking like sardines. Otherwise I don't notice. Hours later, I have 40 browser windows open when my computer chugs upon clicking the browser icon. I don't realize until it's too late because I use alt-tab most of the time.

I think UI design is getting worse in almost every way and this is just a symptom of it.

Wayland brought back some interest in window managers[1]. On Windows, holding Windows key + arrow keys to snap windows is one of my favorite features. They stole that idea from the tiling managers. It's useful! I'm surprised Microsoft didn't remove it in Windows 11 to make their UI all floaty like a Mac.

On any Apple device, I have to drag my aging neanderthal hand over a touchpad with the delicacy of a butterfly to go "warp speed" between two windows.

Anyways thanks for reading my impromptu ted talk rambling on navigating the electric box under cognitive load.

[1] https://wiki.archlinux.org/title/wayland#Compositors

edit: clarity + format


Oh. So it's not just me then.

For years I thought I was missing out by not having acclimated to modern IDEs like JetBrains or Visual Studio Code. I find features like autocomplete aggravatingly distracting, and when I want to do something the IDE or a plugin didn't account for, it often becomes a manual process.

By contrast, with Emacs, I can simply omit the addition of features that distract me from the code, and I can automate away manual workflow pain points with a few lines of Emacs Lisp, evaluating them right in my running editor and saving them off.

But I wondered if that was some sort of Bruce Tognazzini "your brain is gaslighting you, and you're leaving productivity on the table by not using the objectively superior UI" type phenomenon. Based on this research... well, I guess not. Comforting to know.


Finally some research on ADHD and UI. I have it and have always found GUIs confusing as hell.

Most people seem to find Windows an intuitive OS, for instance. Not me. Windows up to and including XP felt pretty intuitive, but since then every version has gotten more confusing. I never know where to find stuff, how to navigate around. I'm sure there are visual cues that I'm just not capable of processing. Icons for buttons mean nothing to me. Text does. So the design of HN for instance is wonderfully easy to understand because almost everything clickable is just plain text.

Something like Microsoft Word is completely mind-boggling to me. Both before and after "ribbons".

I would love some sort of global setting for GUI frameworks to replace all icons with their alt text or something.


This is why I’m convinced vim is great for the ADHD brain. You don’t need to remember where something is or what it looks like, you just have to know the name of what you want to do, and you tell the editor to do it!


whoever created the UI/ UX in MS Teams needs a good talking to

MS Teams overloads my brain through very bad design


I hate Teams and wish so much there was a better client for it. Why is the "Teams" pane different from the individual chats pane??? Do I just have to deal with the sluggish UI switch every time I want to chat with my team? Do I not understand how it works or something?


No, it's just bad. Microsoft rushed it to market so Zoom wouldn't eat them alive. Then they just stopped caring...


I detest the modern trend of "grey-on-grey" hieroglyphics. Everything melts into some amorphous blob of whitespace with some scribbles on it, and you just have to magically know or painfully learn that this little scribble means "Sharpen" and this one means "Smudge". And god forbid we add a little color! Or detailed icons where I can look for an actual shape of something like "blue folder with yellow splotch", which my brain reinforces as meaning "Sharing Preferences" every time I use it since it says "Sharing" right underneath it.

If you download Pixelmator Pro on a Mac, on first launch (controlled by preference key `toolsOnboardingCompleted` in the plist), the default toolset on the right side of the window is a row of small, grey hieroglyphics, but with textual labels off to the side. The moment you select one, the labels disappear forever to "make room" for a vertical pane with settings for the tool you have selected. Which, thank god, it's not like I have 2880 horizontal pixels to use.

You are given 1 single moment to learn what all these stupid tiny grey blobs mean, otherwise, you get to wait 5 seconds for the tooltip to appear to let you know. Hence, I never use Pixelmator until I absolutely have to, because it's a load of ballache to use. The people studying UX academically know that text labels are good. But muh SiMpLiCiTy! Muh MiNimALiSm! It's gotten absurd.

FWIW, if you're on macOS, and the application you're using is displaying an NSToolbar, you can typically force it into "Icon and Text" mode even if that option isn't displayed in the context menu on the toolbar by using PlistBuddy. Find the app's preferences `.plist` file, then look for a key called `NSToolbar Configuration someUUID`. That key's dict will have a key called "TB Display Mode". Setting it to 1 will force it into Icon and Text. Setting it to 3 will force it to Text Only. For example, I can force Pixelmator Pro to show me just text labels on the top toolbar of the window (the actual tool palette is some other, likely custom, UI component) by running:

  /usr/libexec/PlistBuddy -c "Set NSToolbar\ Configuration\ bigLongUUID:TB\ Display\ Mode 3" /homedir/Library/Containers/com.pixelmatorteam.pixelmator.x/Data/Library/Preferences/com.pixelmatorteam.pixelmator.x.plist
Then just quit and relaunch. The key's possible values are in the enum here: https://developer.apple.com/documentation/appkit/nstoolbar/d...

This is one of the minor but "big in my mind" reasons I despise most web-app-in-a-box apps: they almost universally do not use CoreFoundation APIs to get preference keys from a file, which means none of what I sussed out, tested, and applied in the 10 minutes it took for me to hunt these keys down and find the documentation works.


> participants solved mentally active programming tasks (coding) and monotonous ones (debugging)

Am I the only one that finds debugging not monotonous at all? Often the programming feels more monotonous. I'm not sure if that's because I feel like I'm learning something when I'm debugging, or that a bug is a puzzle to solve; I feel like a detective and a scientist figuring out things about my small part of the universe. In comparison, programming feels like playing with sand in the desert. You can do everything, but it's hard to decide what and where, and ultimately whatever you build doesn't really feel impactful. It's still a desert and those are still just sandcastles.

I might have ADHD, I suffer from a lot of the same inconveniences at similar intensity that ADHD diagnosed adults suffer from.


Maybe it's "stupid parser error debugging", where a single missing comma in a csv, or a missing semi-colon in a long one-line shell script, causes a bizarre, misleading error message.

Something where your tools mislead you because the input broke all the assumptions that were made.

In my experience I then have to resort to some quick slice-and-dice work in a text editor to bisect the problem, and I'll eventually find it, but it is tedious and not fun. And your reward is usually learning "the input was wrong", not "the code is wrong".
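That slice-and-dice can even be scripted; a rough sketch of bisecting the input lines against whatever tool is rejecting the file (the `parses` predicate here is a stand-in, and this assumes that every prefix before the bad line parses cleanly, which isn't always true):

```python
def longest_good_prefix(lines, parses):
    # Binary search for the longest prefix that still parses;
    # lines[result] is then the first offending line. Assumes parse
    # success is monotone in prefix length (e.g. one malformed CSV row).
    lo, hi = 0, len(lines)          # invariant: lines[:lo] parses
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if parses(lines[:mid]):
            lo = mid
        else:
            hi = mid - 1
    return lo

# Toy usage: a "parser" that chokes on any row with the wrong field count.
rows = ["a,b", "c,d", "e,d,f", "g,h"]
bad = longest_good_prefix(rows, lambda ls: all(r.count(",") == 1 for r in ls))
assert rows[bad] == "e,d,f"
```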


https://news.ycombinator.com/item?id=36722157

I just wrote the same comment. My guess is that their debugging task wasn't really representative of real life debugging.


If you read the study it’s reading through python code trying to spot and fix indentation errors. It’s monotonous.


This rings true to me, I’ve tried so many times to pick up a more “professional” IDE as that’s what all the proper software devs at my workplaces have used, but I just always come back to a completely pared back emacs and a separate command line window.

I’d be interested in seeing if certain languages with extra syntax / symbols have a similar impact, or even other software stacks - the aws console, a noisy ide and c++ all give me the same feeling.


"After that, participants solved mentally active programming tasks (coding) and monotonous ones (debugging)" ... this is a surprising take. Debugging, as the saying goes, is often like a murder mystery. Edit: I don't think the authors are wrong about that; since they observed the participants, I assume they chose a monotonous debugging task.


I’ve been surprised by this too.

Maybe it’s a personal preference after all but my ADHD brain for sure prefers debugging.

Debugging is sometimes more challenging, and the boundaries are clearly defined: you know how the program should behave and you know when you've got it. You know what doesn't work, so it's easy to go TDD: when the test is green you are good to go!

Whereas writing new code is a pain for my dopamine system because I never know when it's done.

Getting the boring feature to work is easy, but finishing is horrible.

Figuring out if you've handled every edge case, if you wrote enough tests, if you respected the team-defined architecture: that's hard.


> Maybe it’s a personal preference after all but my ADHD brain for sure prefers debugging.

> Debugging is sometimes more challenging, and the boundaries are clearly defined: you know how the program should behave and you know when you've got it. You know what doesn't work, so it's easy to go TDD: when the test is green you are good to go!

> Whereas writing new code is a pain for my dopamine system because I never know when it’s done.

Oooooh, that fits with my drifting towards debugging others' code (even when I have 0 experience with the language) and debugging infrastructure configuration.

I sometimes think I may have some ADHD traits but I don't believe I have it. Though I recently decided to adopt some ADHD strategies to organize my home/life and it has had some effects/benefits.

Maybe I should pivot harder away from coding and drift towards sysadmin.


As I was writing in another comment, I have a very clear deficit of attention and I love debugging! Especially when it's urgent, and I'm basically livecoding in front of other devs or clients, jumping about in our codebase, hacking through breakpoints and the interactive console.

It helps that JVM debugging is very good (it can be done remotely over the network, and you can insert code, add conditional breakpoints, suspend single threads, etc.) It's an almost Lisp-like experience!

And a bug report is the most scoped task I ever encounter frankly. You have a clear success condition, often a pretty clear deadline ("now!" or "before the release window"!) In fact, if anything I tend to take support tasks way too often, to the detriment of my feature building work.

When I get an email with a title like "Intermittent null pointer exception in running prod application", I just know I'll have a really good few days!


From what I skim-read, the debugging task involved finding places where the wrong type of indentation was used (Python).

So I'd describe it as pretty monotonous, even if most debugging isn't!


Oh that explains it! I never have to do this, because computers are very good at it.


So a low-frills, just-the-code IDE is better for ADHD. I wonder if a single monitor, and even a single code pane at a time, also boosts an ADHD brain's productivity. I suspect it does.

Another aspect of focus is fixation on a specific physical space with lowered blink rate. As soon as you move your eyes away or blink it lowers your concentration. The text outside of this cone of sight is less sharp. It's like a hunter that finds its prey in a landscape. Seems obvious when you think about it but we rarely consider this when designing our office and workflow.

Edit: love all the replies to this, lots to consider that could really help.


Yup, even with a giant 4k display, I have only a single tab/window open.

2 windows side by side tops.

I found that two displays of different sizes work better as well. The smaller is clearly the secondary.

I wish there were an easy way to switch all the windows on different displays within the same workspace at once.

I tried to program some combos for myself with Gnome and Pop Shell shortcut macros, but it's not quite there. It's only consistent on the same display and same workspace, and that is not enough to stop some attention being drained by secondary stuff, like the documentation/test window, while I'm writing code.

The closest I got that is consistent is to have two tmux sessions in different displays and coordinate a simultaneous change of sessions of both displays. But that doesn't cover GUIs.


My natural workflow is that everything goes full screen. It's sometimes painful on Windows, but on a Mac it's a breeze to switch full-screen apps with the trackpad.

But I’ve got the combo of myopia + ADHD so I naturally prefer little screens (like 13 to 15“). I acknowledge that everything full screen on a 30" screen is pretty painful.


Oh yes that is definitely the case for me. I can have laser focus when I'm working on my laptop with just one window on foreground at the time, and I get distracted a lot on my comfy dual monitor desktop.


Give displays of different sizes a try, like a 13/15" laptop with a 27" external.

The best I found is 21" and 27"; I just need to find a decent modern 21" replacement that is not a low-contrast LCD.

Even better if you can dim the brightness of the secondary display.

With the notebook display I can just use the brightness shortcut of my keyboard (both on Logitech MX or my Keychron one).


I typically find I do better with a single screen when I’m able to work entirely within an IDE.

Once I need to visually see what I'm building or reference documents, it's helpful to have them visually referenceable on a second screen.


Anecdotally, as a person waiting to be tested for ADHD, I've always removed as much as possible from my editor's interface. Flashing lights, spinning icons etc. are the worst.

Interesting icons that I never use also gotta go!


This is why I use nvim with minimal plugins. It is hilarious to me that an IDE has a distraction-free or "zen" mode. Like, that is the mode it always needs to be in, my friends! Who wants a "full of distractions" mode on the tool they use to get work done?

Also, why would I need an on-screen button for something I can just do with the keyboard I am already touching?


Anecdotally as someone with ADHD, one of my favorite features in Jetbrains IDEs is that I can double click on the tab / filename to hide everything irrelevant to what I'm looking at, and repeat to bring it all back when needed in the same state that it was in previously.

When I'm doing a root-cause sort of search, I use the find window (via Ctrl/Cmd + Enter) to pull up individual results, along with other tools such as the explorer / git history / terminal / whatever else. If I need to focus on the code I can temporarily hide everything, then bring it back up when I'm ready to move to the next result.


There's also a focused "zen mode" for VSC.


There's also a video presentation JetBrains Research have published alongside this paper:

https://youtu.be/ris_UxYMn_Y


Thanks, the article is paywalled, so this link is very helpful


You can find the paper on arxiv.org if you're interested.

https://arxiv.org/abs/2302.06376

https://arxiv.org/pdf/2302.06376.pdf


It's funny, I just spoke about this in another thread (https://news.ycombinator.com/item?id=36691047) recently.

These findings don't merely apply to coding, they're applicable to any type of focus-driven work. Case in point, I use VSCode to type all of my work-related research and writing drafts instead of using Word for the simple reason that the Word UI is cluttered, distracting, and makes writing difficult. By comparison, VSCode is so much more pared down and lets me focus on what's actually important - the content of my writing!

Is feature-creep the inevitable fate of any software environment? As a product gains momentum, I think there's always going to be pressure to add new features to justify monetization or to drive user retention and growth (and, more cynically, to justify what an employer is paying the team staffed on that piece of software).

I do think that FOSS is more immune to this problem. There's no incentive to grow or monetize, and so you end up with software that can be feature-complete for its purpose and left relatively alone (the downside is that there's a lot of half-baked FOSS out there as well!). Hopefully when feature-creep takes over VSCode, someone will fork it and continue the project as-is.


Could you imagine vim trying to push some sort of paid subscription or social network integration?


I have pretty bad ADHD. Back in the day, when I was flipping between editors, I had to carefully catalogue what I wanted from an IDE and then force myself to ignore everything else. Here is what my list chiefly has:

0. Reasonable Vim key binding support

1. A linter that highlights as I type. A compile-time checker that can point out type errors (mypy, pylint, what have you) as I type. A reasonable code formatter that, on save, formats only the code I have edited.

2. A build button that can run a predefined file (useful for running Android apps)

3. Jumping between functions by mouse click.

4. A search bar to find (a) any class by name or (b) any file by name across codebases.

5. A good debugger that can be run from within the IDE. For Python, ipdb is better by miles than the stock pdb debugger, but different languages support debuggers in different ways, so an IDE can abstract over them and just let me place breakpoints and hit run!

6. Sane autocomplete: this is easier for IDEs to do in compiled languages, but for non-compiled languages like Python I sometimes have to futz around.

7. Lately, a ChatGPT plugin that can answer basic questions without diverting my attention to a browser tab.

I would suggest that if you get distracted and have an inner voice (even if it's faint), use that inner voice to build out this kind of list and ignore everything else.
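
On item 5, a minimal sketch of what "ipdb over stock pdb" looks like in practice. This is illustrative only: it prefers the third-party ipdb when installed and falls back to the stdlib pdb, so the snippet runs anywhere; the function and variable names are made up for the example.

```python
# Prefer ipdb (third-party: tab completion, syntax highlighting),
# fall back to the stdlib pdb if it isn't installed.
try:
    import ipdb as debugger
except ImportError:
    import pdb as debugger


def total(xs):
    acc = 0
    for x in xs:
        # debugger.set_trace()  # uncomment to pause here and inspect acc/x
        acc += x
    return acc


print(total([1, 2, 3]))  # prints 6
```

Either way, the call site stays the same, which is exactly the kind of abstraction an IDE provides across languages: you set a breakpoint and hit run, and the IDE picks the right backend.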


Meanwhile, Vim and Emacs users be like “what is going on over there?”



Thank you!

No one is discussing the details, just their personal experience. I'm not able to read it yet, but I'm curious: what are the effect sizes? Is "faster" 1 ms or 1 minute?


It's crossed my mind before that my extremely strong preference for static analysis > tests > debugging might be related to my ADHD.

There's something unbearable to me about manually recreating state, waiting for longer-running results as a part of my workflow, arranging multiple processes to talk to each other, etc. Anything where I have to mentally track and manage several moving parts at once. It feels like a huge attention-burden that makes it way harder to think about the problem I'm trying to solve, and makes me more likely to lose context and get stuck in the mud


Too bad the article is not open-access, as I would expect from JetBrains.

And too bad Eclipse's Mylyn doesn't make anyone any money. It's brilliant for this.

Extra cognitive load slows everyone down. It's just that the effect on speed is measurably distinct in people with executive function (distractibility) issues. The distinction between debugging and coding is not really active vs. monotonous, but being driven by your own ideas vs. chasing a problem. The study isn't realistic, but it's designed to get a measurable result (and to showcase the "efficiency tracking" plugin).

Anecdotally, everyone adjusts their IDE, or accommodates what can't (easily) be changed. Too bad that wisdom is lost and hard to share.

I think the solution here is more configurable UIs, with the configuration automated/scriptable, so that once you've established your preferences you can replicate them through upgrades, etc.

The most configurable IDE, of course, is Eclipse (which is in decline because no one gets paid directly to write for it, and it's cheaper to publish a language server for your new language than to build an IDE). You can arrange views as you like, change menu and toolbar visibility, change key bindings, and of course add whatever plugins/features you need. If you use Mylyn, you get task-based filtering that hides elements not required for your task and highlights the files relevant to that task in a way that can be shared via bug tracker. You can save view configurations as a workspace and save various preferences. But because components come from everywhere, support for configuration capture varies.

People share their dotfiles for shell and vi/emacs configuration, but not their IDE configurations. That's too bad, because then there would be a population of configurations to analyze when raising UI issues.
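
The dotfiles habit extends naturally to IDE settings. A minimal sketch, with illustrative stand-in paths rather than a real editor's config location (VS Code's user settings, for instance, live under ~/.config/Code/User on Linux): move the live config into a versioned repo and symlink it back, so edits made in the IDE are captured by version control.

```shell
set -e
CONFIG_DIR="$HOME/demo-editor/User"    # stand-in for the editor's config dir
DOTFILES="$HOME/demo-dotfiles/editor"  # stand-in for a dotfiles repo

mkdir -p "$CONFIG_DIR" "$DOTFILES"
echo '{ "editor.minimap.enabled": false }' > "$CONFIG_DIR/settings.json"

# Move the live config into the dotfiles repo, then link it back,
# so changes made through the IDE's settings UI land in the repo.
mv "$CONFIG_DIR/settings.json" "$DOTFILES/settings.json"
ln -sf "$DOTFILES/settings.json" "$CONFIG_DIR/settings.json"
```

With the repo pushed somewhere public, a UI bug report could link to the exact configuration it was observed under.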

ADHD and ASD are a broad spectrum. It may help to join the tribe because it validates our experience, but then we can fail to recognize our brain's specific biases. Worse, anyone over 7 has been getting good at compensating, which hides the issue, and our culture of excellence/competition/success == good (therefore failure bad) further obscures with shame, defeat, and self-sabotage. Legal requirements for accommodation help set a global floor, but may also work as a local ceiling by supplanting ordinary fellow-feeling.

For reading fatigue, consider a dyslexia font, e.g., https://opendyslexic.org.


I find it astonishing that research is now directed at figuring out what works best for people with ADHD. This will lead to many breakthroughs.


Finding solutions that work well for people with ADHD is also likely to have the added bonus of making everyone less stressed.


Absolutely, I assume that it's easier to measure adverse impacts on people with ADHD but I expect that reducing perceptual load will improve things for everyone else as well.


It's OXO Good Grips all over again.


Every few years I go through the exercise of thinking "Oh, <some IDE> has some cool features, maybe I'll try using it," and that ends quickly when I get annoyed by the clutter and lack of efficiency in operations and go back to vim, even from IDEs that have a "vim mode." Even IDE-like features added to vim I quickly dump, because I feel like they get in my way.


> "We found that the perceptual load does affect programmers’ efficiency."

Yeah, not the surprise of the century for me. Is there similar research on interruptions? Especially Slack going ping ping all day? I wonder how much those interruptions cost the industry. I know they have benefits too.


Everybody's style is different, but for me, I silence... everything. Nothing dings. Nothing pops up. Not on my macbook, or my phone (Do Not Disturb, always), or even my microwave.

For Slack, I watch for a number in the red dot above its icon. Then I know I need to check it. For e-mail, sometimes e-mails just get by me. It's not usually a problem. For calendar, I have to keep a mental note of my next meeting. That is probably the biggest downside of this setup.


I'm similar, and started to just manually set a timer for upcoming meetings on my phone.


It seems that ADHD is trending on HN today: https://news.ycombinator.com/item?id=36719713 or is it just my focus today :thinking:


For you UX/accessibility folks, is there a meaningful difference between "perceptual load" and "cognitive load"?


Maybe this is the time to offer ADHD-friendly functionality; this issue is frequently mentioned on HN.


I couldn't even make it all the way to the end of the title


It is a lot.

"Do coders with ADHD find bloated interfaces annoying?" might be one simplification.

And, of course, we know they can be. To varying degrees, depending upon the type of task the person is doing, the nature of the bloat, and the nuances of that particular person's ADHD symptoms and triggers.

Take away the ADHD factor, and we might still conclude that concentration of focus is impaired by distracting external stimuli, and that what does-or-doesn't constitute "distracting" is entirely context- and nuance-dependent!


Distracting things distract people who are easily distracted



