Blind software development at 450 words per minute (2017) (vincit.fi)
262 points by vinnyglennon on Dec 29, 2019 | 53 comments



I am the author of "Software development 450 words per minute". As it's been 2 and a half years since I wrote this post, I thought I'd give a little update as to the tools I use since my preferences have changed slightly. (I know this comment would be more useful as an addendum to the original post, but since I'm currently on holiday it's just quicker to type everything out here for now.)

I switched to using Eclipse as my Java IDE about a year ago. The reason for this is that Eclipse is much nicer to use with a keyboard and a screen reader. The state of Java GUI accessibility is a little convoluted and I won't go into that here (look up Java Access Bridge if you're interested), but basically Eclipse works better and faster because its UI toolkit uses the platform's native accessibility APIs directly. I guess the reason why I picked up IntelliJ in the first place was that literally every Java developer I knew at the time was using it, so naturally I thought it couldn't be a bad choice. Also, I had used Eclipse briefly in 2013, and let's just say it was nowhere near as nice as it is today.

I have also ditched Notepad++ for daily development work. I do use it for note-taking and other similar things, but for non-Java coding Visual Studio Code is all I use these days. I had experimented with VS Code when I wrote this post, but back then it still had some accessibility-related bugs that needed to be ironed out. However, those were fixed a long time ago and I've been very happy with my experience so far. The VS Code team has been incredibly committed to ensuring their work is accessible, and these days I just expect things to work right out of the box. And this is something I usually never, ever do.

Other than that things have stayed pretty much the same. I'm still on Windows 10 and am using NVDA as my screen reader, neither of which is likely to change anytime soon. I've been slowly migrating to WSL from Git Bash but it hasn't really been a priority; for now I'm using both side by side.


I recognized the company name. Almost two years ago, during occupational/workplace safety and health training (outside Finland, where Vincit is based), it was brought up as a good example of corporate culture:

https://bestworkplaceineurope.com/CultureAuditEn.pdf

This document seems to be from 2016. I am really interested in how this has evolved since then. Would you mind sharing your thoughts?


All I’ve heard is they have a misogynistic culture.

As an example:

https://caterina.net/2019/03/29/appalled-by-sexism-in-the-va...


It gets worse. Vincit is known for putting sexist jokes into their press releases, external communications, even their financial statements, and these have been reported by the press. Their jokes are juvenile: “Kympin pitäjä”, which means “great village”, turns into “pimpin kytäjä”, or “pussy place”. The Finnish Stock Exchange criticized them for this, but their response was to tell them to “lighten up a little”. In their quarterly results video they again made some appalling puns: “surkea töihinottaja” became “obscene blowjob”, and they referred to the only woman on the management team (the head of HR!) as “cunt babbles”. People in Silicon Valley know that a CEO would be fired for that, but weirdly, nothing happened.

Yes, excellent culture. Sounds real pleasant. Pass.


Thanks a lot for that blog post (and the update)! That was super insightful and interesting to me.

Setting up a screen reader and actually experiencing the way a blind person views the web has been on my list of things to do for the longest time, but I've ditched it time and again in favor of the latest hotness in our industry.

Reading bits like “crime against humanity” in relation to improperly used form elements is a good reminder of the responsibility we have when it comes to the semantic output of our work, which makes me wonder:

How usable are interface-heavy reactive UIs to you? How - if at all - are screen readers picking up on changing parts of an interface?

Which brings me back to setting up a screen reader myself... Is there a guide you know of to set up a VM to experience the web like a blind person does, or is installing a screen reader in combination with a regularly set up browser fine for accessibility testing?

Cheers and... I wish you all the best for your dev career! I am super impressed.


> How usable are interface-heavy reactive UIs to you? How - if at all - are screen readers picking up on changing parts of an interface?

In general, screen readers don't automatically announce changes in a web page or other UI. For web pages, if something should be automatically read when it's changed or added, it should be marked as an ARIA live region, using the aria-live attribute. Some native GUI toolkits have a similar feature, e.g. the LiveSetting property in Windows UI Automation.
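
A minimal sketch of that pattern, assuming a browser DOM (the element and the message are invented for illustration):

    // Mark a status area as a polite live region: screen readers will announce
    // changes to it when the user is idle, without moving focus.
    const status = document.createElement("div");
    status.setAttribute("role", "status");      // implies aria-live="polite"
    status.setAttribute("aria-live", "polite"); // explicit, for older browsers
    document.body.appendChild(status);

    // Updating the region's content later is what triggers the announcement.
    status.textContent = "Search finished: 12 results";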

I strongly discourage using a VM to test with a screen reader, because audio in a VM is often annoyingly laggy. Just do it on your main machine. On macOS, you can turn VoiceOver on and off with Command+F5. On Windows 10 version 1703 (Creators Update) or later, you can turn Narrator on or off with Control+Windows+Enter. Another popular screen reader for Windows is the open-source NVDA (https://www.nvaccess.org/). For Unix-based desktops, GNOME has the Orca screen reader; other desktop environments have nothing AFAIK. iOS has VoiceOver, Android has TalkBack, and Chrome OS has ChromeVox.

Disclosure: I work for Microsoft on the Narrator team.


I see, thanks for the ARIA hint and the quick tips regarding screen readers!


Welcome, and thank you for the small inside view into your world. I hope to read more about those challenges and what we can do about them to make your life easier.

Enjoy your holidays!


Thanks for writing both the post and your follow-up. I found both very enlightening.


Do you have a GitHub? I'd like to read code written by a blind person.


I can't speak for the OP. And while I'm visually impaired, I'm not totally blind, and I do my programming visually (though I often use a screen reader for other tasks). Still, I can point you at some projects by blind programmers.

The largest project I know of that's written primarily by blind people is the open-source NVDA screen reader, written primarily in Python with some C++. It's on GitHub here: https://github.com/nvaccess/nvda

For a fairly large, and long-running, project developed primarily by a single blind programmer, check out Emacspeak, written in Emacs Lisp with some Tcl, available on GitHub here: https://github.com/tvraman/emacspeak

Edit: I almost forgot about brltty, a Linux console screen reader designed primarily for braille rather than speech output, written in C: https://github.com/brltty/brltty

Now for a few small projects written by blind friends of mine:

tdsr (screen reader for Unix terminals, written in Python): https://github.com/tspivey/tdsr

libaudioverse (C++ 3D audio library, now sadly abandoned): https://github.com/libaudioverse/libaudioverse

tma (tmux automator, written in Rust): https://github.com/ndarilek/tma


There’s a chap in another office, someone I’d never met but interacted with occasionally. A productive guy, a good guy, competent and effective. Two years go by and I only need to interact with him once a month or so, but I know he’s good.

Fast forward to a change of role which means I interact with this chap a lot more frequently. All good.

Randomly one day, while pairing with another colleague on something, he says, “yeah, when I found out XXX was blind I nearly fell off my chair”.

Come again?

He’s blind, you didn’t know?

Ah, I think we’re talking about different people; I meant XXX in the X office.

Yeah, he’s blind.

Cue 40 minutes of “no wait, that can’t be possible, I know this guy’s work, it’s physically impossible that he’s blind”.

I try not to make a big deal out of it, but now that I know, I can’t help seeing him as superhuman. I still struggle to comprehend, deep down, that viewed remotely through textual interaction someone can appear not just unimpaired but actually one of the top N% of performers. He’s always switched on and remembers everything.


If you consider the human ability to process sensory information as a whole, across all senses, it's not so surprising that someone with one fewer sense might perform at a higher level with the other senses.

Or to put it the other way, there's a LOT of mental energy spent managing (especially filtering/categorizing) visual sensory information. Remove that distraction, and other senses get more processing power.

This is not to minimize the high performance of the essay author nor developer XXX, especially given the fairly sight-biased state of computing. But if you were to find some new change in your capabilities, with time and effort you could greatly enhance your performance in other ways.


Two points, neither of which is a disagreement with the OP's comments:

1. Listening to speech at 450 WPM is a totally attainable skill for the sighted. It is not some kind of magical superpower that only the blind are blessed with. It's learnt by gradually increasing the speech rate and stopping just before the speech gets incomprehensible. For the record, I can't listen to text at 450 WPM and fully concentrate for long periods of time. Novels and other long pieces of writing I read at a much slower rate.

2. Processing visual information does take processing power. However, so does inventing and using coping mechanisms in a world that has mostly been made by and for the sighted. In most cases I rarely get any advantage from being blind. I just keep thinking of new approaches for performing on as equal a footing with everyone else as I possibly can.


Oh I'm not suggesting that you have an advantage, per se. But I'm speaking only about human speech audio processing (translating sound into words in your head) vs having a full screen of visual elements, plus all the other static and moving visual elements all around in one's field of view.

For an audio only analogy, I would liken it to being in a room full of people at a party and trying to listen to one person vs being in a room alone with headphones. You should be able to process speech audio better and faster with the isolated input compared to the literally noisy one.


Is this a case where there is a difference between legally blind (which can mean vision very impaired but some vision still occurs) and completely blind? If it is the former, you might not notice unless they had to use a screen reader/vision enhancement in front of you.


Wow. Call me impressed. Having tried to make sense of the spoken text in the audio examples of the post, I am once more stunned by what the human brain is capable of.

I know one can learn a lot; the brain is an awesome "machine" that is quite capable. But listening to what he describes as his regular screen reader voice and speed, I am more than positively flabbergasted.

I really love the fact that he can work and have a regular work life, and I hope he has quite a great private life as well.

Kudos to the author.


There is a trader at JPM (I think, this was a few years ago) who has a similar set up. The computer reads out the prices/emails/research at like 400 words/minute. Totally incomprehensible.


You can try speeding up videos or audio when listening to something. I started doing that while cramming, using 1.25x - 1.5x playback speed. You can slowly speed things up further as you get used to it.
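
On the web this is a single property on the media element; a minimal sketch (the selector is invented for illustration):

    // Nudge the playback rate up; most browsers pitch-correct speech
    // at moderate rates, so it stays listenable.
    const player = document.querySelector<HTMLMediaElement>("video, audio");
    if (player) {
      player.playbackRate = 1.25; // work towards 1.5x as it gets comfortable
    }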


Actually, I am doing this. Listening to a lot of audiobooks on my daily commute, I find most narrators a little too slow for my liking, especially in English (I am German, and English audiobooks often seem to be narrated more slowly).

I do 1.2 to 1.25. I did 1.6 for a time, but lost the fun in it, as it felt too much like a rush and self-optimization (more books in less time) instead of the joy of listening to a great story or great insights.


What's most impressive is the brain's ability to absorb that much information at such speed. Somehow I was able to discern the numbers 120 and 450 in that snippet, but not the rest. I suppose with years of practice this might be possible. It could also be one of those things where the brain compensates by developing other processing areas. This strikes me as an insight into the future of learning and information processing.


Consider the amount of mental energy that goes into processing visual input. Not to suggest what he does is easy (not at all), but vision is an enormous drain on human processing power.


There's an amazing 10-minute !!Con 2016 talk by Sina Bahram, “How I Code and Use a Computer at 1,000 WPM!!”:

I use a computer very differently than most people, because I’m blind. When I’m surfing the web, tweeting, checking email, reading the news, and writing code, I’m doing so because a program called a screen reader is reading me what’s on the screen. I happen to listen to it read me this text at a thousand words per minute! Join me in listening to how I experience some common user interfaces. Yes, I’ll slow it down for you. I also have a challenge for everyone in the audience. Can you get through a day only using the keyboard? What about not looking at your screen?

Sina Bahram is an accessibility consultant, researcher, and entrepreneur. He is the founder of Prime Access Consulting (PAC), an accessibility firm whose clients include high-tech startups, Fortune 1000 companies, and both private and nationally funded museums.

https://www.youtube.com/watch?v=G1r55efei5c


> There's an amazing 10min !!Con 2016 talk by Sina Bahram, How I Code and Use a Computer at 1,000 WPM!!

At 6:28, Bahram points out how HN (!) is not very accessible to screen readers.


At some point in there, he says, "Table? Is this 1999?"

pg explained several years ago why Arc's web library, and by extension HN, uses tables. [1]

> Arc embodies a similarly unPC attitude to HTML. The predefined libraries just do everything with tables. Why? Because Arc is tuned for exploratory programming, and the W3C-approved way of doing things represents the opposite spirit.

[...]

> Tables are the lists of html. The W3C doesn't like you to use tables to do more than display tabular data because then it's unclear what a table cell means. But this sort of ambiguity is not always an error. It might be an accurate reflection of the programmer's state of mind. In exploratory programming, the programmer is by definition unsure what the program represents.

> Of course, "exploratory programming" is just a euphemism for "quick and dirty" programming. And that phrase is almost redundant: quick almost always seems to imply dirty. One is always a bit sheepish about writing quick and dirty programs. And yet some, if not most, of the best programs began that way. And some, if not most, of the most spectacular failures in software have been perpetrated by people trying to do the opposite.

> So experience suggests we should embrace dirtiness. Or at least some forms of it

It seems to me that the iconoclastic, anti-authoritarian, "unPC" hacker spirit reflected here can be taken too far, and this sometimes has a negative impact on the experience of some users, as Sina demonstrated. Now that HN's core UI is way beyond the exploratory phase, I wonder if it's time to re-do some of the markup in a more structured way.

[1]: http://paulgraham.com/arc0.html


pg was just wrong here. Or rather, it's a position that could've made sense at the time, but only if the thing you were most interested in experimenting with was mocking up page layouts, e.g. to present to users for UX testing; doing layout in 2008-era CSS was not something you necessarily wanted to do up-front, whereas with tables you could at least copy some boilerplate and have at it. (It makes no sense in context; it doesn't make sense in most contexts. But it is a position which could've been defensible at the time in a different context. Problems with tables include that (a) you don't want to write into a table layout; it's at best tolerable if you paste text into a template, and (b) there is no natural transition away from a table layout other than redoing the layout wholesale. So even if you're experimenting with it, you probably should not ever let it go into production, because once it's there you will get convinced to leave it there for a long, long time.)

It's been a somewhat less sensible position since flexbox support became widespread, and is frankly just wrong in 2017.
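
To make that concrete, a minimal sketch (the names are invented) of the kind of two-column arrangement tables were once used for, done with flexbox instead:

    // A sidebar-plus-content row, no table markup required,
    // and the elements keep their semantic meaning.
    const page = document.createElement("div");
    page.style.display = "flex";

    const sidebar = document.createElement("nav");
    sidebar.style.flex = "0 0 12em"; // fixed-width column

    const content = document.createElement("main");
    content.style.flex = "1";        // take the remaining width

    page.append(sidebar, content);
    document.body.appendChild(page);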


On one hand... the idea that tables used for representing tabular data and tables used for layout couldn't be distinguished and managed functionally is odd. Lynx could handle table layouts reasonably well by the end of the last century, and I don't think you'd even need an ML model to code up some reliable heuristics and get them into most browsers, though I expect ML could be used to pretty good effect.

On the other hand, getting universal buy-in on a different approach to semantics, and ubiquitous client adoption, is harder than just buying into the agreed-on semantics. And that sounds like the opposite of the hacker spirit. But "real hackers", if that term means anything, recognize the value of adhering to semantics by convention in all sorts of contexts, even lisp-y ones. And while I have no idea what other things Arc's web library does, it's pretty clear to me that there's little if anything in HN's layout that wouldn't benefit from using CSS over tables for managing layout.
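
For what it's worth, the sort of heuristic being described might look something like this sketch (the specific tells and thresholds are invented for illustration):

    // Rough guess at whether a <table> is layout scaffolding rather than data.
    function looksLikeLayoutTable(table: HTMLTableElement): boolean {
      // Authors can already opt out of table semantics explicitly.
      if (table.getAttribute("role") === "presentation") return true;
      // Headers, captions, and header sections strongly suggest real data.
      if (table.querySelector("th, caption, thead") !== null) return false;
      // Nested tables are a classic layout tell.
      if (table.querySelector("table") !== null) return true;
      // So are degenerate single-row or single-column tables.
      return table.rows.length <= 1 || table.rows[0].cells.length <= 1;
    }

As I understand it, screen readers already ship heuristics roughly along these lines, which is part of why table layouts like HN's remain navigable at all.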


We really don’t do enough for developers with disabilities.

Voice recognition, TTS, gesture recognition, eye tracking, etc. could be added to a modern turnkey development environment, allowing life-changing options for people.

I recently saw this from a developer at Google who created his own solution:

https://news.ycombinator.com/item?id=21772965


We really don't. Even your average developer can benefit a lot from the changes needed to make a website or application more accessible to people with disabilities, but for a lot of companies it's really difficult to make the case to management for working on something that won't drastically increase profit margins.

I've pushed for some of this stuff myself at companies I've worked at, just because it makes you a better developer in general. It makes you more cognizant of all the design decisions you make when you start thinking about things like "will this site I'm working on actually work for people using screen readers" and so on.

Unfortunately, unless there's either a strong open-source push towards standardizing a lot of these considerations or a legal requirement to make sites accessible, I don't see companies adopting them en masse.


> I don't know that a check box is a check box if it's only styled to look like one. ...a crime against humanity.

Please style native elements rather than rebuilding them, and use alt text, people.
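
A quick sketch of the difference (the IDs and class names are invented): the native control gets its role, state, and keyboard support for free, while the lookalike has to earn its semantics by hand.

    // The real thing: announced as "Subscribe, check box, not checked".
    const real = document.createElement("input");
    real.type = "checkbox";
    real.id = "subscribe";

    const label = document.createElement("label");
    label.htmlFor = "subscribe";
    label.textContent = "Subscribe";

    // The lookalike: a styled div that a screen reader sees as nothing at all,
    // unless every piece of the native behavior is reconstructed manually.
    const fake = document.createElement("div");
    fake.className = "checkbox";                // looks right, means nothing
    fake.setAttribute("role", "checkbox");      // restore the role...
    fake.setAttribute("aria-checked", "false"); // ...and the state...
    fake.tabIndex = 0;                          // ...and keyboard focusability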


I always try to make things as accessible as possible by following accessibility standards and using semantic elements as best I can. It is more difficult than one would think, though, especially without fully understanding the perspectives of those who need the extra consideration. I'm sure I fall short on many things that would help these people use my software more effectively, but this article provided some interesting insights.

I know there are checklists and tools out there that help verify certain accessibility standards (for web applications at least), but does anyone know of a better way to test/verify accessibility without actually getting, say, a blind user to try things out and provide feedback?


Personally when I write webpages, I turn on VoiceOver on my Mac and run through the page's features. Just doing that usually points me to a few pain points that I can alleviate with ARIA and different HTML structuring choices. For hard mode and possibly better results, close your eyes and only rely on the screen reader.


That makes sense. Actually trying out the accessibility features myself sounds... interesting.


A lot of it is manual, but there is this handy Chrome tool to catch the obvious problems: https://chrome.google.com/webstore/detail/wave-evaluation-to...


Isn't the point being made there that semantic markup is not the same thing as styling?

The dream of the semantic web being semantic all the way down may have died, but there's no reason to use CSS and JavaScript to create things that act like forms.


Julia Ferraioli's Writing Accessible Go talk (https://www.juliaferraioli.com/presos/writing-accessible-go/) provides great insight into how difficult it can be for someone with a disability to read source code.

We should aim to make our code accessible to blind developers, for multiple good reasons, including opening up opportunities for people and making sure our code is readable for everyone.

Often I come across code that wouldn't exist if a blind developer had to work on it: it would be so hard for them, or anyone else, to understand without wasting a lot of unnecessary effort that the only plausible solution would be to rewrite it nicely. One example is "decorated" code that provides lots of abstraction with no benefit and makes understanding difficult; an invented sketch of what I mean follows below.

https://www.youtube.com/watch?v=cVaDY0ChvOQ
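
An invented illustration of that kind of "decoration": three layers of indirection around a single string concatenation, followed by a version that reads just as well by ear as by eye.

    // Abstraction with no benefit: a strategy, an implementation, and a factory.
    interface GreeterStrategy {
      greet(name: string): string;
    }
    class DefaultGreeterStrategy implements GreeterStrategy {
      greet(name: string): string {
        return "Hello, " + name;
      }
    }
    class GreeterFactory {
      static create(): GreeterStrategy {
        return new DefaultGreeterStrategy();
      }
    }
    console.log(GreeterFactory.create().greet("world"));

    // The same behavior, understandable in a single pass.
    const greet = (name: string): string => "Hello, " + name;
    console.log(greet("world"));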


What a fantastic blog post! I particularly liked the audio samples and learning about what tasks the author uses a screen reader vs. braille for.

I also liked hearing about how they're handling code reviews with linear diffs instead of side-by-side. That's something I'd never really thought about from an accessibility perspective. There are so many little things around accessibility like that that aren't obvious until you really think about them.

I appreciate hearing the author's experience and learning a bit more about what it's like being a blind software developer.


I'm wondering how, as a non-blind person, I can actually learn to listen at such high words per minute. I generally don't need the screen read to me, but I'd like to have research papers (PDFs etc.) read aloud while I'm looking at something else, so I can basically multi-task. Or to 'format' audio so that the words stay distinct and understandable even when spoken at high speed, which is different from just changing the playback speed.

I remember articles about echo-location, not only as a means for the visually challenged to navigate the streets but as an additional sense-enhancement tool/technique. I'm thinking of screen-reading having the same function.

I guess an ideal 'environment' for the sort of 'syntopical' reading that academic research requires is something that involves as many senses at the same time as possible, according to some 'sweet-spotting' technique that keeps things manageable. I'm thinking of a Minority Report-type interface where I can arrange papers or snippets of papers across a large 'space', while having bits (such as the sentences before and after the ones I just snipped) read out to me quickly by some sort of screen reading. Involving vision, 'touch', sound, and 'movement' would be awesome. One would need to train a bit, of course.

I'm also imagining how you could do semi-synchronous group work:

Let's think about a scenario where there are, say, 10 people on a conference call. Usually people speak in turn. What if we 'allowed' sub-groups of people to have their own mini conversations before 'regrouping', but as we did, each person would have the conversations they missed played back to them in fast form? This would be quite a different dynamic for a conference. A lot more could be discussed in the same amount of time. It would of course come with its own set of problems.


> which text editor do you use? Spoiler alert: The answer to this question doesn't start with either V or E.

He mentions that he uses vim occasionally, but I wonder if he has ever tried Emacspeak. It seems to have quite a dedicated set of users.


No, I haven't really tried it. I know that it exists and that there are people who like it, but making everything work under Windows is a hassle I haven't bothered going through. (Basically I would have to write an Emacspeak-compatible speech server that would use Windows's TTS for speaking.) It could also be that Emacs and braille wouldn't work very well together in Windows, but this is also something I haven't really explored.


This inspired me to minimally clean up our March 2019 SCALE17x presentation "Accidentally Accessible: a Mostly-FOSS Workflow": https://youtu.be/FvrQPdt1X30 Unfortunately the default view is clipped very badly and doesn't include Chris' screen while he's comparing writing Python in Emacs versus nano, but otherwise I think it turned out all right.

We're hoping to do an updated version at LinuxFest Northwest in April, assuming the presentation is accepted.



I wonder what would be the best programming language syntax and editing format for blind people if designed from scratch.


Probably something very concise/dense/terse. https://blog.revolutionanalytics.com/2012/11/which-programmi...

Potentially J or Lua, which were not analyzed in that article. However, Python has a lot of different styles; code using comprehensions, like Julia, can often be incredibly concise, but typical highly readable Python code probably is not particularly concise.


Wow. This was an awesome read! Very enlightening to me. I would like to attempt a coding project blind soon. I already regularly blindfold myself and walk around my house, as I feel it has helped me hone my senses of smell and hearing for navigation.


Just tried writing some Elisp, and it's surprisingly hard to do even with paredit.

Which language is the nicest to blind people? Python? Haskell? Brainfuck?


Which language is "nicest" probably comes down to experience and personal preference. For me, Lisps feel a little dense, and I find that I have to concentrate more than usual when working with e.g. Clojure. But this may not have anything to do with blindness in particular; I grew up with Python and various C-style languages, and those feel the most comfortable to me. I know some very capable blind Lisp developers, and their experience is probably the opposite of mine.


My experience with Lisp and Scheme is that after a while the brackets sort of become invisible to you, but that is of course as a sighted user. Does a screen reader say "open bracket", "close bracket", or anything like that? Because I think that would add to the density.


It does, yes, but with Braille I suppose the effect could become similar eventually. Actually, I find myself ignoring certain speech patterns in a similar way. For example the word "blank" which is what my screen reader says whenever encountering a blank line. I have long since stopped thinking of the word blank in those situations. It's just a sound that means that there is an empty line. I guess I would eventually stop thinking of brackets as brackets but rather just signals for grouping certain expressions, and the scope and context of those expressions would sort of come automatically much like blocks in other programming languages.


Maybe Go. Python would suck because of the significant whitespace.


This is awesome! Thanks for sharing it.


Dude's an inspiration.


I think people like this guy are superhuman. I just bow my head at what they can achieve with so little.




