Ask HN: How to prepare as soon-to-be blind developer?
696 points by MathCodeLove on Feb 14, 2022 | 171 comments
Disclaimer: Not myself, but a good friend of mine is suffering from rapid vision degradation and will be fully blind within a few months. I want to do what I can to help them prepare. Anything from software and tool suggestions to general workflow and tips would all be very much appreciated, thanks!



Blind person here, happy to answer any questions. I'm speaking from the perspective of someone born blind, so whatever ends up working well for me might not work as well for someone just losing their sight, though I tried to take that fact into account.

The most immediate suggestions that come to mind are:

1. Learn to use a screen reader. You don't need an expensive one. NVDA on Windows, VoiceOver on the Mac and Orca on Linux are the way to go. NVDA is probably the least quirky and easiest to find resources for. I'd recommend against Orca; while it can be used, we all know how tricky Linux on the desktop can get, and throwing a screen reader into the mix doesn't help.

2. Forget about the mouse. Screen reader users use the keyboard exclusively. Try disabling the screen too; many sighted users who practice with a screen reader end up relying on what they can see, which makes things more difficult.

3. Accept that inaccessibility is a fact of life, and it has to be worked around. Not all tools are accessible, and some are more accessible than others. If you're looking for an accessible IDE, VS Code is great and constantly improving. Emacspeak exists, but I don't actually know anyone who uses it, and I know quite a few blind people in tech. Some things that you're now doing through a GUI are best done via the command line; Git is a good example. Programming tools usually aren't the problem, it's everything else that causes issues. Slack and Zoom work great, for example, but many smaller collaboration tools don't.

4. Not all areas of programming are equally accessible. I can't imagine a blind dev working exclusively on the front end, where there's a lot of CSS involved, and where you have to look at Figma designs and debug issues solely based on screenshots. Backend stuff is much more accessible, same with lower-level systems programming. Dev Ops is very much a mixed bag.

I'm happy to answer any further questions either here or via email, my HN username at gmail dot com.


I'd like to say, as someone who has not been exposed to any degree to those who are blind and use computers/the internet: I am very taken aback by the composition and vocabulary you're able to convey in written form, having never been sighted. It's truly impressive.


I agree that blind people are impressive in their ability to navigate a sighted world, but it seems strange to find the written word part to be the impressive part... the language part of their brain is fine. I don't see anything they wrote that seems like only a sighted person would be able to understand...


The written word is (kind of) the impressive part. The spelling rules of English are not easy to master, and blind people have far fewer reading opportunities than the sighted. You don't see labels on the products you buy, you don't see the billboards from a passing car, you often don't even read books or articles, as listening to synthesized speech is much faster, cheaper and more comfortable than reading Braille.

The only reason my spelling is as good as it is is the fact that I used to listen to English text with a Polish speech synthesizer for most of my life, as I was too lazy to switch languages. Polish is (almost) phonetically perfect, so I was hearing things almost exactly as they were written. This made my spelling much better than that of most blind Americans. Words like cystom (system), keybored or even polotission (politician) can sometimes be seen in the blind community.


>sometimes be seen in the blind community.

HAH! Love it. Side note: I constantly make fun of the English phrase "I see what you're saying", meaning "I understand", often swapping it out with "I hear what you're seeing", sometimes swapping it multiple times in one sentence. People almost never notice unless I follow it up with "I smell what you're feeling" or "taste what you're smelling", because this creeps them out.


Funny anecdote: a software engineer teammate and I were discussing a change IRL in the office, standing next to my desk, when there was a churning sound from his stomach (the things we miss WFH). It was not too loud, but loud enough to derail my speech. Instead of saying "<my lengthy explanation..> <stomach_sound> Do you see that?" as a substitute for "Do you see what I am saying?", I accidentally said ".. Do you hear that?". SO awkward and funny.


Interestingly in the Netherlands they do express things via taste words. Lekker (meaning tasty) can describe an experience, doing something well and more.

Heerlijk (meaning delicious) can also be used. I found a gravestone with the translation: “we thank you God for your great deliciousness”.


Not quite - "heerlijk" used to mean "lordly", so "heerlijk voedsel" means "food eaten by lords". This also explains the gravestone reference (de Heer == the Lord).


Thanks for the clarification! Good to know. I often find it hard to learn and retain the multiple meanings / history of words in a second language. I do know de Heer, and Herengracht etc. I just hadn't put together that it's the same origin.


This is also common in other languages. Indonesians use enak (tasty) for almost any positive quality. Even in English, people refer to attractive members of the opposite sex as "tasty".

But "heer" in Dutch is either a (human) lord or God, so "heerlijk" means lordly or godly, and over time this became more general.

Much the same word exists in German, "herrlich", but it means "beautiful" or "splendid". Same idea though.


I've mostly referred to members of the same sex as tasty, myself.


These words have shifted similarly but subtly differently in Norwegian too. Lekker = lekker, heerlijk = herlig.


That last example is wrong as translated.

In that context 'heerlijkheid' has nothing to do with taste words, it simply means 'glory'.


Off topic and feel free to ignore, but here goes: do you know of any low-hanging fruit I can address on web/front-end UIs that will help me make my sites/apps more accessible, that many developers miss? And secondly, do you know of a way to quickly simulate the experience a visually impaired person has without learning/using a screen reader? (As a solo developer, I don't have the time to dive too deep.)


If you're on a Mac, the easiest screen reader to use is of course VoiceOver; on Windows, NVDA. The problem is that there's no wide consensus among screen readers about functionality, so you will read about a way to do something and think that is the way to do it, but when you actually start doing it you find it isn't going to work the way you thought, if at all.

This means that if you want to do a good job you will need to test on different screen readers. I like https://assistivlabs.com/ as a service for screen reader testing, also because the developer who runs it is really helpful.

It also means the best ways of doing things tend to be the ones that have been around longest, because every screen reader / browser combo has implemented different parts of the newer specifications - that is to say, it will feel like your chosen screen reader is Chrome, but all the other ones are IE11.

I personally find that what helps most in understanding when a site is not accessible enough is, after you have read a bit about the problems people with screen readers have, to imagine using your site as if you were blind.

Because you are blind, things happen atomically; you do not have any context clues that you can see about what something is. So actions you can do on stuff should probably say not just what they are doing - like a button that says Delete - but what they are doing it to - like a button that says Delete and has an aria-label that says Delete user account. Really - being willing to sit still for 20 minutes looking at your site and thinking but what if I couldn't see will probably help you find a bunch of obvious problems.
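To make that concrete, a minimal sketch in HTML (the label text is just an example):

    <!-- Sighted users infer the target from surrounding layout;
         the aria-label spells it out for everyone else -->
    <button aria-label="Delete user account">Delete</button>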

Also when you have a sort of good fundament to your sites assistive usage, it means that things will be a lot easier when you do start testing on different screen readers.

on edit: obviously when I say because you are blind things happen atomically, I mean that in the context of using a screen reader on a webpage


> Really - being willing to sit still for 20 minutes looking at your site and thinking but what if I couldn't see will probably help you find a bunch of obvious problems.

Yes, I'm hoping to acquire this mindset. What I'm building is quite simple, almost CRUD-like in terms of the UI. The issue is I don't feel I get the mindset just yet. I understand speech is linear, and I assume the order mainly comes from how the page is laid out in HTML (as opposed to position). But how do the visually impaired navigate the items in a big list, intuitively and quickly? I assume focusable elements are critical, but what about the rest of the content hierarchy? A row in a table doesn't tell me anything if I can't see the header simultaneously, for instance.


If your table should be a table and is correctly coded as a table, pretty much every screen reader will make it usable. Take this example:

https://www.w3.org/TR/wai-aria-practices/examples/table/tabl...

Using VoiceOver, as you read the table it tells you the column names. In reading a table you would probably first read all the headers so you know what the table is dealing with - here these are first name, last name, company, address.

If you know what value in the table you are looking for, you would go to that column - for example, last name, the second column - then navigate down with the VoiceOver key combination and the down arrow to get to the row you are interested in: row 2, James. I'm looking for Sara James, so let's go to the first name column: use the VoiceOver key combination and the left arrow, and see it is Sara. Then you read all the contents of that row to find out Sara James, Acme, Inc., 123 Broad St.

Obviously you need to remember the headers in this way of working, so if your tables are really complicated and have a lot of data in each column it might be nice to put a hidden screenreader-only message in there that says Contents for First Name Column or something like that. But in this simple case I think it is reasonable to assume people will remember the column headers.
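Something like this is what I have in mind (a sketch; .visually-hidden stands for any CSS that moves text off-screen without display:none):

    <table>
      <tr>
        <th scope="col">First Name</th>
        <th scope="col">Last Name</th>
      </tr>
      <tr>
        <!-- scope lets screen readers announce the header with each cell;
             the hidden span is the extra hint for complicated tables -->
        <td><span class="visually-hidden">First Name: </span>Sara</td>
        <td><span class="visually-hidden">Last Name: </span>James</td>
      </tr>
    </table>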

on edit: this is from a developer who is not blind, however, so a blind user may have more efficient ways of working than me. Also note, many screen reader users are not completely blind, or have some other disability, like dyslexia, which makes using a screen reader useful for them. A dyslexic user may be able to figure out the headers of the table, but if the individual cells have a lot of content, the screen reader will be useful for them.


Add Pa11y to your CI: https://github.com/pa11y/pa11y
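For a quick local check before wiring it into CI, the CLI can be pointed at a page directly (the URL is a placeholder):

    npx pa11y https://example.com/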

Another good resource is the GDS accessibility standards: https://www.gov.uk/guidance/guidance-and-tools-for-digital-a...

It's tailored towards creating services within the GDS portfolio, but there is good advice in there for everyone.


Absolutely. One is dead easy, quick, and free.

Step 1: Use a desktop PC. Step 2: Unplug your mouse.

Now learn to use your computer and then your site.

Windows is basically 100% usable by keyboard alone, and that is how all blind users use it. They can't see a pointer, so pointing devices are useless.

I am Very Old™ and I learned Windows in the 1980s in a company that sold Macs as well as PCs, so it didn't have any PC mice. So I learned to use Windows with keyboard only.

I still can, although it's harder now. But I now run mostly MacOS and Linux.

Windows works very well with the keyboard and I sometimes amaze people by how fast I can move around and do stuff, because I don't take my hands off the keyboard or point and click much. I use the keyboard to run programs, move and rearrange windows, close windows, select and copy and paste, navigate forms etc.

All standards-compliant Windows programs use the same keystrokes to do this and they work everywhere. Learn them and you'll become faster anyway, _as well_ as improving the accessibility of your sites by becoming familiar with how blind users must access them.


> Off topic and feel free to ignore, but here goes: do you know of any low-hanging fruit I can address on web/front-end UIs that will help me make my sites/apps more accessible, that many developers miss?

Not sure if many developers miss this:

Look at your UI in black and white. Plenty of folks are colour-blind in one way or another. If your UI works in black and white, it'll probably work for any kind of colour-blindness.
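One quick way to approximate that check in the browser, assuming CSS filter support, is to temporarily desaturate the whole page:

    <style>
      /* toggle on to preview the UI with no colour information */
      html { filter: grayscale(100%); }
    </style>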

See also https://www.youtube.com/watch?v=-IhQl1CBj9U for something focused on games, but with broader applicability.


Take with a grain of alt b/c I'm not blind and my accessibility training is not formal, just learned from a job.

The built-in mobile screen readers are not hard to set up. Enable one in the settings and you can play around with it to grok how it works and how to navigate the UI tree with it.

Lowest-hanging fruit always seemed to be making sure all the controls you wanted on your page were reachable with accessibility navigation (i.e. tabindex/focusable) and that the spoken labels were sufficient without sighted context.
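A minimal sketch of both points (the label text is illustrative):

    <!-- A native control is reachable by Tab and announced as a button -->
    <button aria-label="Close notifications panel">X</button>

    <!-- A clickable div needs role, tabindex and key handling added by hand -->
    <div role="button" tabindex="0" aria-label="Close notifications panel">X</div>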


So, you are not even a native English speaker then? Oof. You have my respect.


It's really common for educated non-native speakers to spell better than native speakers. Non-natives learn the language with written support from the start, so they have worse pronunciation but better spelling compared to people who learned the language exclusively orally as kids.

My partner is a native speaker and I'm not, yet he often asks me how to spell.


That is super interesting! Thanks for sharing


This reply articulates what I was trying to say.


Makes me wonder if for blind people forms of phonetic typing would be more useful.

Like chord keyboards.

Or maybe some other phonetic input methods that are less widespread but also less complex to use?


Absolutely incredible. Thank you for sharing this.


In programming, it's important to structure things correctly, format them correctly, and spell consistently. That might explain why their writing is so polished.


This is a weird compliment. He is blind, not mentally challenged.


I got what he meant. It was a complex reply with many parts and it seems like it would be harder to dictate or type without having that feedback loop of being able to see what you're doing.


Close your eyes and try to produce a complex sentence in your head. It's not that hard, it's easy to imagine being able to do it with a little practice. Especially if you have a screen reader, or play it back with text-to-speech. I'm sure the poster is clever and impressive in lots of ways, but a blind person writing well does not surprise me.


Why though? Can't you keep a paragraph of text in short-term memory?


I already would find it taxing to write a well structured and complex answer like the original comment. I would need to go back and forth, edit, change the order of sentences and the like.

And I am perfectly capable of doing this with my eyes.

But: I would never assume anything about the experience, challenges and abilities of the OP. I was fascinated by the depth of the recommendations and was reminded that I very much need to work on making my sites more accessible.

But I would personally probably refrain from asking if they couldn't just keep a paragraph of text in memory. But that is just me.


I can imagine it would be more difficult, although would presumably improve with practice.


I thought so at first, too. Read miki's reply and you'll see why it was a good compliment.


Ah OK!


A sibling comment mentioned this comment being weird. It strikes me as "weird" too. That doesn't mean it is not relatable. But if you are also struck by what is weird about this comment, you might enjoy taking a look at: https://medium.com/age-of-awareness/inspiration-porn-c08b419... (which links to Stella Young's TEDx presentation).


I worked with a totally blind developer at a top tier tech company. He used a braille terminal. It was really interesting to watch him work.

There’s no denying blind persons face unique challenges, but I’ve been seriously impressed with the fortitude and adaptability I’ve seen them display.

As an aside I wish more developers would think about accessibility. It helps everyone. I highly recommend participating in dining in the dark or a similar program to learn real sympathy for the challenges blind persons face and to see how capable they are of overcoming them.


I've tried using a screen reader and the best way I can describe it is that it is like using a computer through a telephone menu. You hear things described to you at a pretty quick pace and you need to mentally build a model of what you are doing. I would routinely lose mental focus of where I "was" and would be completely lost.

I'm sure this isn't as difficult for people who use these tools day-in and day-out. But it was an eye opening experience for sure.


One of my computer science professors was old enough to remember doing his undergraduate work on punch cards, running stacks on shared mainframes at night.

In his words, "You got really good at checking your code, because if you made a mistake, you had to wait until the next evening for another time slot."

It stuck with me because it boggled my (privileged) mind how adaptable people and their brains are to just about any scenario you can dream up.


Even just thinking of how much sighted developers benefit from adding a second or third display to our setup— what is the benefit of added desktop area? Additional context, of course, being able to have lots of things on the go at once! And then you unplug from your dock and all those windows you had spread out are now crushed together on the single laptop screen.

It boggles my mind imagining the discipline that would be required to maintain all this state purely in one's mind.


The juxtaposition of phrases like "eye opening experience" to this subject matter make me realize how many sighted metaphors we use as a matter of course. I don't know whether or not that matters, but they certainly jump out at you in this context (another metaphor...)


Metaphors are not just prevalent in how we speak, they are central to how we think.

The book https://en.wikipedia.org/wiki/Metaphors_We_Live_By is a seminal work on this.


As an aside, while I can’t speak for the community as a whole, all of the blind persons that I’m personally acquainted with don’t mind visual metaphors so in my case at least I don’t worry that I’m giving offense. I agree it is striking though how easily we use visual metaphors.


Almost every sentence you can produce will contain some kind of metaphor, or metaphorical use of an originally concrete word. And since vision is one of the most salient senses for almost everyone, it's not surprising.


Braille hardware seems pretty expensive for what you get. Last time I looked, most of the devices cost $800+ USD for a single row of 24 letters.


Many (most?) states offer grants to help pay for adaptive hardware. And I’m completely certain any company worth working for will spring for it too.

I’m a sighted person and I’m not going to pretend I know what the right technology for any particular blind person is, but computer braille really is fascinating.

Edit: I’m not a hardware guy, but it feels like an 80x24 or larger computer braille display is well within the capabilities of today’s hardware hackers. Imagine a reverse mechanical keyswitch with 8 dots that can be put on a PCB of arbitrary size in whatever configuration to understand what I have in mind. There’s probably not much money to be made, but someone with a passion and existing financial success could really make a difference.


Speaking as someone who was born blind, Braille may not be worth using if you went blind later in life. I was forced to use it heavily until about 10 or 11, in spite of the fact that I wanted to use text-to-speech, since it's quicker. It was totally worth it, since it gave me a basic literacy foundation that enables me to type out sentences like these and spell well enough that less than 1/5 of everything I type has to be corrected with spellcheck. I have an 18-year-old braille display I got in school when I started programming more. It still works, but I only use it once a month, if that. I do everything with text-to-speech. The only time it was really useful was when I was doing assembly programming and had to look at hex values in a debugger. If you go blind later in life, it may be pretty easy to avoid learning anything but enough braille to read signs for room numbers and bathrooms, and do just fine.


Wouldn't a large braille display be very useful for getting an overview of something? When consuming a big chunk of material, audio is very convenient, but being "linear" seems to make it quite impractical when you need to jump back and forth, or get an overview.

I'm particularly bad at piecing together a larger context from smaller pieces, so maybe it's easier for others, especially if they are forced to?


Large Braille displays basically don't exist, they'd be too expensive.

Navigation with a screen reader isn't just next item / previous item, though. There are hotkeys to jump between headings, links and so on. That's why good markup is so important for accessibility.
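For instance, given heading markup like this (a minimal sketch), a screen reader user can jump straight to a section with the headings hotkey instead of listening linearly:

    <h1>Dashboard</h1>
    <h2>Open orders</h2>
    <!-- ...section content... -->
    <h2>Shipped orders</h2>
    <!-- heading hotkeys cycle through the h1/h2 elements directly -->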


Not sure, I didn't mean that navigation itself was hard, more that it's hard to know where to navigate to. If you can see, you can quickly scan a page and get some sort of understanding of what is where, but at least with audio that's very hard.

It seems like it should be possible to make cheap full-screen braille displays; I suppose it's just that the volumes are not big enough to drive down prices.


The issue is that to move 80x24 dots independently you need 80*24 physical actuators (or an enormous mechanical multiplexer), which is very expensive. Industrial looms and knitting machines have that but nothing much else does. However, the technology to solve computer braille without needing width*height mechanical actuators actually exists - the fluidistor, or liquid transistor - but is only just now getting serious development, and it's mostly in haptic feedback labs.


If you want 80x24 characters, you actually need at least 80*24*6 dots; each Braille character, also called a cell where Braille displays are concerned, can consist of up to six dots, eight in computer Braille.


In my experience, anyone with any kind of physics background thinks that Braille displays could be radically cheaper.

Anyone who spends a couple hours on researching the problem thinks that the price could be improved, maybe by half, but that the problem isn't as easy as it seems.

Anyone who tries building one eventually arrives at the conclusion that making it cheaper requires a bunch of tradeoffs which they're not willing to make.

As far as I know, the central problem of Braille display engineering isn't making the dots, but putting them so close to each other. If the gap between dots were half an inch, let's say, a braille display would probably be a hundred-dollar affair. It would be completely unusable for any kind of reading, though.

Orbit Research managed to find a way to make Braille displays somewhat cheaper, which is now incorporated into the Orbit Reader 20 and 40, but that technology has a much slower refresh rate than traditional Braille displays and causes them to be much louder.


> I can't imagine a blind dev working exclusively on the front end

That’s unfortunate! Clearly blind people are the most qualified to work on one specific frontend job: accessibility.

The fact that it’s extremely hard to automate accessibility testing in a meaningful way would make it a full time job at most places that take accessibility seriously.


Yes and no. Screen reader testing is extremely important, and that's something blind people do well. However, if there's a link on a website that the screen reader cannot see, a blind person won't report that as an accessibility issue because they won't even know it's there. Other things, such as testing with screen magnification or high contrast mode, also require sight.


> if there's a link on a website that the screen reader cannot see, a blind person won't report that as an accessibility issue because they won't even know it's there.

This is highly variable. As a screen reader user who works in accessibility with a software engineering focus, I don't consider a test to be complete if I've only evaluated what the page exposes to the accessibility API. Assessing the rendered DOM by hand, testing on different viewports where controls may be slightly or completely different, etc., are just as critical to the testing process as trying to simply use the page. But, I recognise that there are many accessibility testers out there without such a technical focus, and it is true to say that they may gloss over something if it is completely missing in the accessibility tree.


Your test is broken if the test script leaves out the button.

Sighted developers and QEs will miss testing elements if they aren't aware they should be there.


Most tooling and test sites are in practice at least slightly broken.


I'm talking about a level playing field between sighted and blind QA, they are both subject to missing problems if the test is poorly designed. The fact that "most" tests are broken is related but tangential to my observation.


How do you deal with computer troubles? If your OS fails to boot, are you basically stuck getting help?


On the PC, yes, essentially yes. Using a mobile app to OCR what is on the screen (via the phone camera) sometimes helps. There are also some apps[1][2] where sighted people can connect to us via video call and help us with whatever we need at the moment.

On the Mac, this is not an issue. The whole recovery environment is 100% accessible, at least on M1. I think it was more problematic on Intel, but you could still figure it out.


Thanks for sharing your experience. I'm curious how changes in graphical user interfaces of programs and operating systems over the years have impacted your workflows. Do you develop on Windows or the command line?


I'm a Mac user these days, before that, it was Windows with WSL, Notepad++, Windows Explorer and remote VS Code for bigger projects. I use the command line a lot, but some things are best done with a GUI. I'm not a fan of command line editors, for example, as screen readers don't always deal well with TUIs. They can be used, but they bring their own set of headaches which I just don't want to deal with.

Updates definitely affect your workflow; you need to be careful about which updates you install. This is less of a problem with coding-related tools, but it's definitely a nightmare where mobile apps are concerned, for example. Web apps are even worse, as you can't avoid updates that completely break accessibility. I've heard stories of people getting in big trouble because a web app that was critical for their job suddenly had a UI overhaul and stopped being accessible. OS updates can cause trouble too; in fact, Windows constantly breaking things with its forced updates was one of the major reasons why I switched to Mac OS. It wasn't even accessibility-related, most of it was basic stuff like sound.


Which OSes and devices have the best accessibility in your experience?


This is not an easy question to answer.

iOS and Mac OS definitely win in terms of what's built-in, the screen reader they come with is more than enough.

JAWS, a paid screen reader for Windows, is perfect for enterprise environments, as it works well with Microsoft Office and common productivity applications and lets IT administrators enforce security policies. It also has a vibrant market for scripts, both free and paid. Those scripts add accessibility to third-party apps. Companies can even hire script developers to implement accessibility for the internal apps employees need, which is often important in big corp / government, and it is in those environments where JAWS thrives.

NVDA, the free Windows screen reader, is more suited to software development work than JAWS, in my opinion. It's written in Python, which makes it much more flexible, though script development is also harder to learn. There's a big collection of addons and plugins of all kinds, from speech synthesizers and braille display drivers to full-on remote access suites (which no enterprise security policy can block), but all of those addons need to be released under the GPL, which discourages development of addons for internal and professional applications. In many countries, NVDA has replaced JAWS entirely, even in the enterprise.

Narrator, the built-in Windows screen reader, is getting better with each year, but it still isn't on par with other solutions that we have. It's great for installing another screen reader or figuring out why your existing one doesn't work, but that's about it.

Linux is a shitshow; free software zealots will claim it's perfectly accessible, while many problems still remain. It can be used, but it's probably the least accessible of the big three, at least where the GUI is concerned.

Android works, people use it, but iOS works much better. iPhones are much more popular in the blind community, even in countries where they're extremely expensive, and even considering the fact that blind people are often unemployed or have low-wage jobs. Android has quite a few rough edges, and things might not work the same way on different phones. It's been getting better over the years, but even something as simple as accessibility during first setup isn't guaranteed, something that Apple basically figured out more than 10 years ago. It's much more common for an Android phone or a Windows PC to fail in a way that requires sighted assistance than it is for a device running iOS or Mac OS.


Because you mention it a couple of times: do enterprise security policies negatively interact with accessibility in an enterprise environment?

My experience with corporate IT has been that they're not great at, or under resourced for, any outside-the-box situation.

But since accessibility software is a legal requirement, at least in the US, I'm curious if that breaks the tendency to inaction.


It depends. Where I live, this usually doesn't seem to be an issue for some reason. I guess IT administrators don't really know how powerful a screen reader is. Just to give you an idea, it would be trivial for anyone with basic Python experience to weaponize NVDA into a keylogger and screen-scraping spyware, without admin rights, and no antivirus would complain.

In the US, this seems to be much more of a concern, so JAWS is often a requirement in big corp / government.


IIRC, accessibility hooks in Windows are very, very low level. Thus they bypass most security policies. Also IIRC, there’s been multiple “wontfix” “0-days” abusing those hooks and fixing them would basically brick Windows for people using accessibility tools.


Do you have experience with Google docs? I’ve heard that it is more accessible than Dropbox Paper and am interested to know if that’s been your experience.

We have a heavy documentation culture at Stripe so as a developer these tools matter a lot.


No idea about Dropbox paper, but those tools are hard to make accessible, so I wouldn't be hopeful.

Google Docs works; there are two accessibility modes, neither of which is enabled by default. The old, standard mode hides all content from screen readers and uses a built-in micro screen reader that outputs whatever is needed to a live region, which your screen reader then reads. The newer one, called braille mode, actually shows you the content, only relying on ARIA when absolutely necessary. Both have their pros and cons, but I'd recommend using braille mode for most things, unless you run into use cases that the legacy mode handles better.

I've heard people say that some really advanced Docs features have accessibility issues, but I haven't used it extensively enough to confirm that.

As always, Etherpad, the open source / non-proprietary / privacy-friendly alternative, boasts about being accessible but has so many a11y bugs that it's basically unusable. As always, markdown over git works better than any web-based solution ever could.


I have my gripes with Google, but from working there, I can definitely give them props for how much they cared about accessibility. It’s in their mission statement, and they take that seriously.


> I have my gripes with Google, but from working there, I can definitely give them props for how much they cared about accessibility.

Accessibility at Google suffers in the same way as most UX-related things suffer at Google. Namely, the fact that everything is constantly reinvented from scratch, rather than there being one unified way to do it. As a screen reader user, I can say that in some Google products, there can be instances of what, on the surface, should be exactly the same component, but was apparently developed in multiple different ways. This leads to the constant need to work out how accessible each instance is (and e.g. what keyboard support it has), even though I dealt with the same UI pattern minutes ago.


This is all fantastic! Thank you so much!!


> Dev Ops is very much a mixed bag.

Have any examples of issues while doing DevOps? For that type of work, in my own experience, I deal 100% of the time with text (be it error messages or automation code).


One issue is the inaccessibility of graphs. I wonder if miki123211 had anything else in mind.


That is one thing. Other issues might include having to use VNC or KVM, no sound by default on Windows server, captchas in the AWS cloud console etc. If you're doing normal Linux server administration via ssh or dealing with Terraform, Docker, Kubernetes or other command line tools, you should be fine though.


Has anybody tried to use the mouse to 'echo-locate' parts of the screen? Bats, with far simpler brains, can see complex scenery and prey in real time, so I'm sure a small portion of the visual cortex could figure out echolocation. The idea is to generate sound based on the close surroundings of the cursor, in real time. To an inexperienced user, this sound would be meaningless, but with practice that sound would let the user see.


Why would that be better than using a keyboard to navigate between discrete units on the screen? There are a set number of clickable points on the screen, why make a game for the blind user to try to echo-locate those spots instead of just letting them cycle through them? The point of a screen for a sighted user is to make a concise way to transmit existing information... it is only a good way if you are sighted.


Well, how do you screen-read a geometric shape?


how do you echolocate a shape? It seems you are suggesting some mapping of a 2D visual scene to audio. If that is useful in practice, it is one of many possible interfaces that blind people could use. It wouldn't supplant screen readers.


Third-hand report, but apparently academics have tried to simulate what an echolocating person would hear from a 3D world, but have not yet managed to convey the experience to a blind person over audio; i.e., there is some effect that they haven't captured yet.

So Doom for echo locators not yet possible.


Seems like for this to even be possible, you'd have to have a really good model of each user's ear, to figure out how to use headphones to recreate the effects that ear shape has on how sound is localized.


That part is old news; it's called Head-Related Transfer Functions, and even using a generic HRTF works quite well. I presume you've tried Spatial Audio with AirPods, or some similar system?

https://en.wikipedia.org/wiki/Head-related_transfer_function


I've heard of HRTF but couldn't dig the name out of my memory to mention it. I'm not sure those are good enough for faking echolocation though. I don't have an iPhone or AirPods to try spatial audio.


You can locate sounds within 20-30° or so, which is pretty good for navigation at least.


Maybe for slow narrative games, but for a game with any action, like The Last of Us, it sounds unusably imprecise and slow - and TLoU is a fairly slow game if you're playing it stealthily.


I'm mostly thinking about real-life navigation; when I worked on this, the idea was to help blind people get around. But I'm sure lots of interesting games could be made using the technology.


There have definitely been attempts for mapping the real world, but the ones I've seen have been quite primitive. I started my own project around this, but I never had the time to finish anything useful.


Do you play computer games? What are some good blind accessible ones?


Go to audiogames.net. That's a whole website dedicated to accessible gaming.

From the more mainstream ones, The Last of Us 2 has famously implemented accessibility. I haven't played it, as it's one of the only games for the PS4 that is accessible, but people are saying that it almost plays itself when accessibility is enabled, taking away part of the enjoyment.

Other mainstream games are getting accessibility mods, Hearthstone has a really great one. Audio Quake, an accessibility mod for Quake, was also quite popular back in its day.

Text-based games, from Infocom's interactive fiction to MUDs and MOOs, are also pretty accessible; a significant portion of the people still playing them are blind.


Can confirm! I run a company that has five MUDs, and we have quite a few blind players as well as at least a couple of blind employees. I'm not visually impaired, but I like knowing that our players who are visually impaired are "seeing" the same game world in their heads as fully sighted players, or at least seeing the world from an equal perspective.


> but people are saying that it almost plays itself when accessibility is enabled, taking away part of the enjoyment.

I don't envy anyone trying to solve this problem; it seems next to impossible to do a live action game above a certain level of accessibility. Just the time you have to communicate what's on the screen (much less being able to quickly communicate what's on screen) is so tight on a shooter for example. Found a neat video [0] of TLoU2's accessibility features.

[0] https://www.youtube.com/watch?v=GHN5v3NJ9ko


> Just the time you have to communicate what's on the screen (much less being able to quickly communicate what's on screen) is so tight on a shooter for example.

There's an episode of Person of Interest that provides at least a possibility here: "Let's try an ascending tone cue for right, descending for left.". The tone then switches to beeps when on target.

Feels odd sharing a youtube link in the context of this post, but if anyone wants to see/hear the scene I'm talking about it starts at 1:21 here: https://www.youtube.com/watch?v=cHIo96yBf70

(Just for a bit of context the character doing this isn't blind, but these cues are helping her shoot people through walls. The episode is the second season finale, titled "God Mode".)


At a certain point it sort of feels like you're creating a different, parallel game that hangs off of the "main" one. It's definitely worth trying, but I wonder when it makes sense to identify those elements that would make an action adventure/shooting game playable by a blind person and then use just those elements to make a new game that is natively playable by blind and sighted people alike.

I wouldn't presume to speak for others with a different experience from myself (and it does seem [1] like some blind users really like what TLOU2 had to offer), but as a fully-abled gamer with limited time, even I sometimes choose to watch a streamer or even a YouTube summary in cases where I'm interested in the story or whatever but don't feel up for playing a thing myself. I wonder how this option stacks up against what TLOU2 was able to deliver.

[1]: https://caniplaythat.com/2020/06/18/the-last-of-us-2-review-...


Card games like Hearthstone seem like they'd be great options, but the turn time limits sure would be challenging! It can be hard enough to keep up at times even when you can take in game state at a glance and take actions in an instant with the mouse.


I would get very heavily into Inform / Interactive Fiction / Old School Text Adventures, for sure


The roguelike genre (in its purist form: a text interface with turn-based action) seems like it would be appropriate for blind people. Roguelike Radio did an episode about that:

http://www.roguelikeradio.com/2012/10/episode-48-designing-f...


When you say that small tools are lacking, I assume this is more than a simple lack of hotkeys; perhaps you can elaborate a bit?


It's usually unlabeled buttons, unlabeled icons, divs that change color on click instead of actual honest-to-god native HTML checkboxes, inaccessible drag-and-drop, no heading structure (which makes navigation harder), weird date pickers that screen readers cannot handle, and so on.
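The checkbox one is easy to show; a minimal sketch (toggleCheck is a made-up stand-in for whatever the page does):

    <!-- Announced as "Subscribe, checkbox, unchecked"; toggles with Space -->
    <label><input type="checkbox"> Subscribe</label>

    <!-- Announced as plain text at best; invisible to form navigation commands -->
    <div class="checkbox" onclick="toggleCheck(this)">Subscribe</div>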


Amazing reply.

Is there a community for blind developers? I could see that being a helpful resource.


How do you deal with HN itself? The site is remarkably inaccessible.


HN's accessibility issues can be worked around. The only big one that's actually pretty annoying is the lack of hierarchy; it's hard to figure out who is replying to whom.


Do you have an example of a website with a comment hierarchy that works well for you? I'm pondering writing an alternative frontend for HN with accessibility in mind--I don't have experience with that, and am not affiliated with HN, but am willing to see if I could make this work.

Would you also appreciate an easier way to find replies to your own comments?


> Do you have an example of a website with a comment hierarchy that works well for you?

I'm not the original commenter, but I am a screen reader user who works in accessibility.

Unfortunately, I don't have too many good examples; the problem of hierarchical commenting systems being difficult to navigate is common across the web. There is a Reddit client for iOS, Dystopia[1], that does this extremely well for users of the built-in screen reader, VoiceOver[2], by allowing entire threads/subthreads to be collapsed and/or skipped over. On the web, you'd want to look into using hierarchical headings, nested lists and the like, to allow the structure to be conveyed semantically. HN is inexcusably bad at this, as there isn't a single heading anywhere on the site.

[1] https://www.reddit.com/r/DystopiaForReddit/ [2] https://support.apple.com/guide/iphone/turn-on-and-practice-...
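To sketch the nested-list idea in markup (the names and levels are illustrative):

    <ul>
      <li>
        <h3>Comment by alice</h3>
        <p>…</p>
        <ul>
          <li>
            <h4>Reply by bob</h4>
            <p>…</p>
          </li>
        </ul>
      </li>
    </ul>
    <!-- screen readers announce list nesting depth, and the heading
         levels give hotkey targets that mirror the reply structure -->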


Not a website, but Dystopia for Reddit, an iOS client, does this well. It just announces the level of each comment, where direct replies to a post are l1, replies to those are l2 and so on. This is probably the easiest way.


Is multiple monitors a thing for blind people? I’m wondering if there’s a difference between switching between windows and switching between monitors.


My one blind techie friend doesn't use a monitor at all. Why would he?


Is multiple screen readers a thing?


Define what you mean; as it stands, the question is unanswerably vague.

Yes, there are multiple screen readers.

Screen readers do not need or particularly use a monitor; monitors are for sighted people.

Screen readers read what is displayed at and following their cursor. This means screenreader users need to learn to control a system with 2 cursors: one for entering text, and a different one for reading it.

People asking about multiple screens for blind users are basically asking which colour of monitor bezel blind people prefer. This is not a question. It's irrelevant.

Some OSes will not boot or will not display a GUI unless they detect a monitor, although there are HDMI dummies that can fix that. So, some blind users may have an old monitor connected, just so their OS will boot.


Interesting and all-time top ranking thread:

"Ask HN: I'm a software engineer going blind, how should I prepare?" https://news.ycombinator.com/item?id=22918980


I'm still around :)


How have you adapted?


I moved to the Netherlands (much more accessible and I get to travel to places I've always wanted to see). I also started my own company, the end goal being that I hire some folks and grow it into a larger consultancy where I can work more as an IC/manager.

Before moving I spent a lot of time working with various blindness organizations in the states...they move very slowly and unfortunately I got basically nowhere before moving. Going to pick that up again here though.

I'm still working on making changes to my dev workflow, but besides a good screen reader I'm not sure much needs to change in that respect as the tools I use support screen readers.

Overall I've just focused on figuring out a longer plan for my life and focusing on achieving those goals instead of just drifting on a tech salary. This whole thing has really changed my perspective on things.


I remembered seeing a thread here about it, kudos for finding/sharing it!


If you don't know yet, check out https://hn.algolia.com for searches like this (not affiliated). The thread linked above is the top hit when searching for "blind".


I found out about that website yesterday, good one.


Oh nice! Thanks!


Outside of the dev needs, take a look at bemyeyes.com; it allows a person with vision problems to video call a volunteer to help answer questions. I've helped people read labels, sort boxes of chocolate for Christmas presents, and helped a person find the login dialog after they lost track of where it was. Each call only took a minute or two. Highly recommended to volunteer; it only takes a minute to help, and you don't have to answer the call if you are unavailable, it just hops to another volunteer.


What a wonderful app. Thanks for sharing, I'm going to sign up to volunteer.


Switch to Emacs immediately. Emacspeak is extremely useful, and since Emacs can do anything (email, web, usenet, chat, edit docs, etc.), once you learn the keystrokes for forward/back word/sentence/paragraph/page/chapter it makes life MUCH easier. I'm not blind, but was a member of a Linux user group who got asked to help out. I ended up installing and configuring it for the user, who was a grad student at the local university.


As someone who's totally blind: don't switch to Emacspeak. It may have been worth fighting with in the late '90s or early 2000s, before Web 2.0 and when accessibility for major software programs was generally poorer, but now you're much better off using NVDA as a screen reader, Edge or Chrome as a browser, WSL2 for all your command-line Linux needs and VS Code as an IDE. Other people use Macs, but I wasn't impressed the last time I tried, several years ago, so I won't speak to how good or bad they are now. Either way, the tooling I just listed has people actively working on accessibility and will have more support resources and active communities.


I use emacs every day, and while it's technically true that emacs can do email and web, the experience is awful.

Emacs is great but it's not a panacea. There are some tasks for which it is ill-suited.


Sure, but I think it's by far the best general solution to a multitude of tasks in a world that's increasingly GUI-centric. Generally, mouse-based interfaces are poor for the blind.

Sure, if you find a piece of software that does a particular task in a blind-friendly way, go for it, but being a part of the modern world is easier with Emacs + Emacspeak than with any other single tool I know of.


> email

Back when I used email a lot every day for my job, I did all of my email processing in emacs. I'm somewhere else now in a different role; for this I use Outlook, because it was provided, and it is worse across the board. The search is worse, the usability is worse and the speed is worse. What exactly about it is awful?

> web

Yes, the web experience is very bad. No argument there :)


I was born blind, so this is just my experience interacting with others who lost their vision later in life. It's likely your friend will want to focus more on counseling and therapy to adjust to the fact that they will be totally blind in a short time. Instead of figuring out whether VoiceOver on Mac OS X or NVDA with Windows and WSL is better, they should get the help they need to figure out how to do all the daily things they used to: how they are going to cross streets, go out to a restaurant or bar on their own and navigate around, match clothes, cook without being able to see the stove or microwave, etc. All these things are possible, although with the lack of info on your friend's location I have no idea what government or private support there will be for this kind of rehabilitation service.

NVDA with WSL2 and VS Code is what I use for development; Mac OS X and Xcode were a garbage fire last time I tried them, four years ago. Some of that may have been the fact that I was not nearly as comfortable with VoiceOver, but Apple made no effort I could see to interact with the accessibility community at that time. My understanding is that iOS development is more accessible with SwiftUI, but I'm not spending $1000+ on a secondary computer to try it out.


I would highly suggest that your friend contact the National Federation of the Blind. They have a Computer Science Council, and a mailing list at https://www.nfbnet.org/mailman/listinfo/nfbcs_nfbnet.org which is full of Blind technologists who have been able to be effective at work and in life through the use of assistive technology of various stripes. This is a great place to find information.

I have two very close friends who are Blind; one is an SRE and the other is a data scientist. Both use different types of assistive technology, but have found that Linux and MacOS tend to work better than Windows, partly due to the availability and usefulness of the command line, which is a better UI for a Blind user than a GUI. There are now fantastic screen readers, free and/or open source, available on these OSes.


As someone who's totally blind now that WSL2 exists I'm done fighting with Linux accessibility. WSL2 with the VSCode integration gives me a good GUI IDE and I can use the command-line for anything else I need.


Seconding this. One of my good friends served as an NFB chapter president. They do excellent work. The national convention is educational and entertaining too.

I’m not going to lie though, I do get a bit of a Monty Python vibe from the “rivalry” with the AFB.


> I want to do what I can to help them prepare.

Those are a precious few months; figuring out software can wait. Maybe help them enjoy their remaining vision.

If it were me, I'd want to see as many of the sights of the world as I could while still physically able. Not everyone will have this mindset, but they may be regretful later if they don't.


I was going to say the same thing, but with deteriorating vision they may already be past the point of getting a meaningfully clear view of the world.


I'm co-owner of a Google group called blind dev works. It was born of a previous discussion on HN.

It's not big and it doesn't see much traffic but you or your friend are welcome to join.

https://groups.google.com/g/blind-dev-works


bruce343434 already linked to https://news.ycombinator.com/item?id=22918980 which I was going to suggest, but one other thought I had is to consider how supportive their current employer will be. If they're concerned about future employment and are a U.S. citizen, it might be worth looking at public sector work (e.g. https://usajobs.gov, etc.) because government agencies have a legal requirement to make their applications accessible, and a software developer who really understands what it's like for blind users is a pretty rare combination. There is a pretty big gap between "the [fully sighted] QA contractor ran it through a tool" and "someone who uses a screen reader proficiently accomplished a task", and depending on where you work a relatively minor change can make a real quality-of-life improvement for a lot of people.



Thank you!


On a related note, does anyone have suggestions for an engineer who will have to use his hands less and less over time?


You might want to check out Talon Voice.

I have used it in the past for hands-free coding; there is a steep learning curve, but once you get comfortable with it, many things can be done faster by voice than with your hands.

Be aware, using your voice all day can be hard on the vocal cords.

- https://talonvoice.com/


Thanks, that looks interesting. I'll try it out.


Talon is great. I'm personally also using Serenade.

https://serenade.ai/


Thanks for the recommendation.

I'm looking at Talon at the moment but I'm disappointed that it has no clear way to opt out of user metrics and the first thing it does on launch is download blobs from the internet without explanation. It's not clear if it even can work offline at all. For a program that listens to everything you say that is a bit disconcerting (especially in the context of sensitive work projects). I'd like to just pay for something and never have it connect online, bar manual updates.


Talon doesn't need to be online at all. It also does not send audio or transcripts anywhere; the telemetry can be disabled in settings, and it always prompts me to update on start but never automatically updates.

The blob it downloaded is probably the speech recognition model. I'd ask on the Slack if you're concerned; aegis has worked really well with me.


> the telemetry can be disabled in settings

I searched and could not find it anywhere (v0.2.3).


How do you like Serenade?

I'm creating a sort of Serenade-like system in Talon, since I want it to be open source and have tight integration with Talon. So I'm curious what the highs and lows of Serenade have been for you.


I love Serenade. It has some rough edges at the moment, but the team is working hard on making it better.

What I love is that it feels so natural to use. It understands the context of the code, which means you don't have to specify exactly where to move the cursor, or what kind of casing you want on variables. It automatically does what you want it to. I also love that everyone has the same set of base commands, which makes it easy to share information and help each other out. Oh, and the browser extension for Serenade is such a life saver. I'm mainly a front end web developer, and it's so easy to navigate the UI with that browser extension as I'm developing.


You can use MIDI hardware as input devices. That means you can use foot controllers with pedals and other controls that might be easier to manage than keyboards, mice and trackpads. There are also purpose-built input devices for those with repetitive stress injuries and what not. You use them to work with, or modify, voice controls.

Assuming by engineer you mean a software engineer who uses a computer all day.


Depends on which field of engineering you are in, but going more towards R&D usually involves less hand work and more brain work. Same for technical consulting.


Not necessarily advice for the person in question but for everyone else reading this:

If you have the ability via your employer or individually to purchase LTD, you should really do so.


What is "LTD"?


Long Term Disability. Basically, if you get a disability that will prevent you from working, you'll be paid X% of your salary for some period of time (possibly until retirement age; it depends on the fine print, the disability in question, etc). Many tech employers do cover this; if yours does not, it is highly advisable that you invest in some, at the level you feel you'd be comfortable at.


Unfortunately, this is usually impossible to purchase after you're diagnosed with anything disabling, so don't wait.


Sure. As with every kind of insurance, it will be priced based on your risk; if your risk of being disabled is 100%, the price to cover you would be equal to your actual cost, and so not worth even offering.


That's the theory. In reality, insurance policies are complex and reading them closely matters. I just re-read the pre-existing conditions clause of my employer-provided long-term disability policy, and I'm pretty sure that if you've had the policy for over a year you can't be denied due to a pre-existing condition. They offer the policy to all employees; there is no screening. Therefore, if you have a slowly progressing disease that will result in disablement more than a year out, you can still get a long-term disability policy that covers it.


That's the case with all employer policies; health insurance is the same way. Because the employer guarantees it for all employees, and has no reason to hire only unhealthy employees, the insurer can cover all the employees at a fixed rate; the risk is balanced out. Even then, as you note, there's a year's lag on that policy (probably because the employer didn't want to cover that extra risk in premiums for every employee).

Universal healthcare also works due to this assumption; everyone is paying into it, so the cost of the most expensive (the sick) is balanced by the cost of the least expensive (the healthy). As the former outpaces the latter, the premiums for everyone grow.

It's also why the ACA in the US mandated participation or a fine: to make coverage affordable, the healthy need to be paying in too, but the healthiest also tend to be the youngest and the least likely to buy insurance, whereas the older and sicker would all buy in (and now have to be covered), which would make the premiums untenable. That would lead the moderately healthy to decide not to buy in, causing an even bigger problem, and so on, until it doesn't make sense for anyone. It's also why, prior to the ACA, you needed a physical, and pre-existing conditions generally led to you being excluded.

But, while the US has changed with regard to health insurance, it is still the case that if you want LTD as an individual, you should expect questions about your health, if not an actual physical, just as with life insurance, and expect to be denied, or quoted insanely high premiums, if you look likely to need it.


A lot of employer-offered plans at least have a guaranteed issue amount. That's less likely on an individual plan, of course, but even someone high-risk could get some level of protection if employed somewhere with a guaranteed standard issue (GSI) plan.

At my company I pay less than $25/month. I know co-workers who opt out, and I honestly don't understand that logic.


Switch to Apple or, failing that, Windows if he hasn't already. The UIs of these two operating systems are engineered for accessibility, and there are excellent screen readers and other assistive technologies available that just aren't there, at sufficient quality, in open source.

Apple in particular is the only major technology company left that truly prioritizes the end user, and that goes for disabled users as well. So moving entirely to the Apple ecosystem will be a win for accessibility -- iPhones are perfectly usable by blind users.


Several software projects written by blind people for blind people are fascinating even from a regular user's perspective. I've long followed edbrowse [1], a CLI line-oriented web browser with an ed-like command language, JavaScript support, and remarkable scripting capabilities; an illustrative session is sketched below.
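To give a feel for edbrowse's command language, here is a rough session sketch (from memory of the documentation, so the exact commands are worth double-checking):

    $ edbrowse
    b http://edbrowse.org      <- fetch and "browse" (render) the page
    1,5p                       <- print the first five lines of rendered text
    /download/                 <- search for a line mentioning "download"
    g                          <- follow the hyperlink on the current line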

Another one is Ecasound, a command-line multitrack digital audio workstation [2], and its front-end Nama [3], which even supports so-called non-destructive editing. A patient, determined person really can accomplish a lot solely on the command line; also, compare Ecasound's resource usage with what it takes to run Pro Tools or something similar.
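As a taste of how Ecasound works, a simple one-liner (the file names and effect parameters here are just illustrative):

    # Apply an 800 Hz lowpass filter and write the result to a new file
    ecasound -i take1.wav -efl:800 -o take1_filtered.wav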

Curiously, the initial versions of both Ecasound and Edbrowse were written in Perl -- I wonder if that language has had a particular appeal to blind users, e.g. because of its text-processing abilities or regex engine?

Considering the maturity of the aforementioned projects, their mailing lists may give great insights as to how to do your computing as a blind person. I would also point out an essay on the user experience by Edbrowse's author Karl Dahlke [4].

Dahlke's homepage has other interesting links and applications as well, including speech software and an essay on the subconscious from a blind person's experience. Mind-opening writings and works, even for a regular user like me.

I wish the OP's friend inner peace and balance for the adaptation period. Best of luck, man.

1: http://edbrowse.org/

2: http://nosignal.fi/ecasound/

3: https://freeshell.de/~bolangi/cgi1/nama.cgi/00home.html

4: http://www.eklhad.net/philosophy.html


This is a good old thread on the topic: https://news.ycombinator.com/item?id=21898537


Thank you!


Honestly, I can't even read this. This is my biggest fear. Respect to anyone who keeps programming blind. To me, you guys and girls are geniuses.


For what it's worth, the measured effect of disability on happiness is not statistically significant, and it is definitely smaller than the average person expects.

https://www.bbc.co.uk/news/magazine-27554754


I am also a developer going blind. Mine is more of a slow descent than what it sounds like your friend is going through (legally blind now; I will probably lose the rest in the next couple of years). Anyway, I just wanted to throw out my appreciation to all the other blind folks who took the time to relay resources. It's also kinda nice to interact with other people on this particular road.


Emacs + Braille mode: https://www.emacswiki.org/emacs/BrailleMode

All the best for your friend's health!


As a developer who's not blind beyond a pair of glasses for nearsightedness, but with other quirks that make me different, like being trans, it's hard not to notice the privilege that the sighted among us have, which nearly all of us take for granted. What I find really striking is the sheer amount of emotional labor that sighted people expect from the non-sighted in relation to their blindness. And I guess I really shouldn't be that surprised, because it seems like that's the case with any sort of disability or quirk or thing that makes someone differ significantly from the statistical mean.

Kudos to all of you who do that emotional labor for everyone without even so much as calling it out. I recognize it, and while I do it as well so that people understand a different point of view, I know how tiring it can get. Sincerely, thank you for that point of view and for the resources provided in this thread. Your challenges in computer accessibility are ones I had not really considered much beyond, you know, knowing that blind people use computers and that there were obviously blind devs. Not having had the fortune to work with any as of yet, it's not been anywhere close to the forefront of the accessibility challenges I've considered. Which is quite silly, considering it's as plain as the nose on my face, and yes, pun intended :) . And using that emoticon just made me wonder: how well do screen readers handle emojis? I can only imagine what ASCII art sounds like to a screen reader. Thinking about it further, I realized that the only blind person I've personally known (that I'm aware of) who used a computer regularly and played games was someone I used to play with in a MUD back in the BBS days of the early '90s.

Anyway, to those of you who face these challenges, or lack of sight, every day: thank you for your insight and perspective, and for showing us that the human body and mind are incredibly resilient and adaptable.


My division where I used to work had a partially sighted director (macular degeneration). He could see enough to move around the building but couldn't easily read texty PowerPoint slides in presentations. It was a great lesson for us when doing presentations to focus on the verbal aspects rather than expecting him to just read lots of detail. This turned out to be a really useful communication skill in general.


I am not blind myself, but I have been using Linux for over 20 years and followed a few blind people who got into it.

If your friend wants to go the Linux route, I would suggest picking a distribution with a blind community and, if possible, blind developers. Arch Linux would probably work, and I am pretty sure Debian would too, since a recent Debian Project Leader (Sam Hartman) is blind. There is also a Slackware derivative called Slint.

Regarding tools, I have seen blind people use different things. Some use the CLI exclusively with a braille terminal and/or a tool like Speakup; others use GUIs with the Orca screen reader. A sketch of getting Speakup talking is below.
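For reference, getting Speakup talking on a console usually looks something like this (a rough sketch; it assumes a kernel built with Speakup and the espeakup package, and details vary by distribution):

    # Load the software-synthesizer Speakup driver and start it speaking
    sudo modprobe speakup_soft start=1
    # Run espeakup, which bridges Speakup to the eSpeak synthesizer
    sudo systemctl start espeakup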

There is a mailing list dedicated to blind Linux users: https://listman.redhat.com/archives/blinux-list/

Note: I'm not suggesting your friend pick Linux; it looks like accessibility on Windows is better. But I know that if I were in their situation I would probably still pick Linux, because I already have experience with it and with using an OS without a GUI.

Best of luck to you and your friend!


These posts scare the hell out of me. I have fairly high myopia and astigmatism and it gets slightly worse every year.

Nothing serious but still scary.


There was a recent episode on this on the Go Time podcast: https://changelog.com/gotime/209

All the best to your friend; may they have the strength and support to get through this change.


FWIW, as a sighted person, may I suggest:

Put a blindfold on, and see what works and doesn't work. Take the blindfold off, and try to fix what doesn't work while you can. Rinse, repeat.


Another point besides the tooling: lots of companies are hiring a11y engineers and holding them in high regard. If one needs to learn these tools anyway, why not put the new skill to use and make the world a bit better for people with disabilities?

I know that a11y is a requirement in lots of places, e.g. education -- you cannot sell software with a UI to a school if it's not accessible. These companies are constantly on the lookout for people who know a11y tools well, and are often in dire need of people who use them first-hand.


Here's a cool story, from a few years ago, that I heard about here: https://www.vincit.fi/fi/software-development-450-words-per-...


For what it is worth, here is my advice as someone who has lost their sight over time and has nevertheless continued to do a good deal of coding.

Firstly, my impression is that people tend to overestimate the practical challenges associated with vision loss and to underestimate the psychological side of it. These things are of course linked – the more quickly someone accepts the fact of their vision loss, the more open they become to using various forms of access technology. That said, coming to terms with vision loss is not something you can rush. It took me a long time.

Secondly, I don’t think it is necessary to try and learn everything before you actually need it. By all means, encourage your friend to install NVDA on a Windows computer and mess around with it when he feels like it, but for the most part I’d say use your eyesight while you have it. I don’t think he needs to worry about getting stuck if he doesn’t learn everything ahead of time. He might also be more motivated to learn new tools once he really needs them.

Two critical skills to learn will be touch-typing, if he can't already, and using a screen reader like NVDA or JAWS. These are underlying skills that will make everything else easier. By learning a screen reader I mean learning its basic usage, but also learning a whole set of new screen-reader-specific shortcut keys. People who don't use screen readers often make the mistake of thinking of one mainly as a text-to-speech engine, while in fact the essence of a screen reader is the multiplicity of ways it lets you interact with the computer non-linearly; a few concrete examples follow below. To be an effective blind/low-vision programmer, it helps a lot to be a super-user of whatever screen reader you use.

The related thing that should happen naturally is that he will set the TTS voice to speak faster and faster, which brings us to another point. One of the most popular TTS engines is Eloquence, which is in fact one of the least natural-sounding TTS voices around, but it has the advantages of working very well when sped up and having very low latency (at least that is my perception). In time he will find that things like voice latency really matter as his brain gets used to this new way of working. In any case, Eloquence may not sound like a first choice when you first hear it, but for the above reasons it is my recommended TTS for new users.
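To make "non-linear" concrete, here are a few NVDA commands of the kind he will end up using constantly (desktop layout, from my memory of the user guide, so worth double-checking):

    H / Shift+H      jump to the next / previous heading (browse mode)
    K                jump to the next link
    NVDA+F7          elements list: all headings, links and form fields at once
    NVDA+T           read the current window title
    NVDA+Down        "say all": read continuously from the cursor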

When it comes to actual programming, I have found that most things can be done, although one often needs work-arounds, and there almost always is one. For example, for data analysis I often use the R package RMarkdown, which makes it easy to e.g. render data tables as screen-reader-friendly HTML (while RStudio has improved somewhat, it still isn't fully accessible); a minimal example of that pattern is below. I mainly use WSL and either VS Code or Notepad, depending on what I want to do. Unfortunately, many IDEs are only partially accessible, so on the whole I have had to learn to do more from the terminal than before, including debugging.

In any case, I get along just fine working on C++ projects, and I've found few obstacles when programming in Python, Julia or R (although Jupyter notebooks are a problem). Incidentally, for a long time I avoided Python because I thought the indentation would be a problem given my lack of eyesight, but when I finally tried it I found that having the screen reader announce the indentation level actually works much better than I expected, and these days I am perfectly comfortable in Python. As with many things, simply trying it, rather than being limited by my preconceptions about what is possible, was the key.
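For the curious, the RMarkdown pattern is roughly this (a minimal sketch assuming the rmarkdown and knitr packages; knitr::kable() emits a real HTML table, which a screen reader can navigate cell by cell, unlike a fixed-width text dump):

    ---
    output: html_document
    ---

    ```{r}
    # Render a data frame as an HTML table instead of monospaced text
    knitr::kable(head(iris))
    ```

Rendering with rmarkdown::render() on the saved file (say, report.Rmd) produces the HTML.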

Finally, there are online communities of blind programmers who will have already figured out many of the work-arounds your friend might need and who are generally very willing to help. An R-specific group has helped me a lot over the years – but there are many others.

Good luck.



