Ask HN: Any other blind devs interested in working on dev tools for the blind?
530 points by blinddev on Oct 28, 2016 | 133 comments
Just lost my fourth job this year. I can't definitively claim it's because of my blindness; I may just not be very good at software development. But one thing is for sure: there are many dev tools that are either hard or impossible to use if you're blind. I'm looking at you, JIRA, GitLab, Chrome Dev Tools, etc.

I'm tired of feeling helpless. I want to build tools that make my life easier. Any other blind devs in the same boat who would be interested in collaborating?




I'm not blind but here's an (unsolicited) project idea for you.

To be candid, I have no idea what it feels like to be blind and have never paid much attention to accessibility other than reading a tutorial or two and making sure I use alt tags on my images. The main reason for that is that I'm lazy and based on my experience, most developers are in the same boat.

Now, if there were a service which would spin up a remote VM session inside my browser (a bit like BrowserStack or SauceLabs do) with all screen reader software set up and no screen (only audio), it'd make it a lot easier for me to experience my software as a blind user. There should probably also be a walkthrough to help new users get started with the screen reader. If you're lucky, you could turn this into a business, and it could indirectly help you achieve your goal of making better software for the blind by exposing more of us to your issues.

Anyways, I know you probably have more pressing issues to solve and I hope I didn't come across as arrogant, just throwing the idea out there.


As a partially sighted developer specializing in assistive technology, with several friends who are totally blind, I think this is an excellent project idea.

The cheapest way to do it would probably be using the Orca screen reader for GNU/Linux, combined with the MATE desktop (forked from GNOME 2) so one doesn't have to worry about 3D acceleration in the VM, which will presumably be hosted remotely on a cloud provider somewhere. The main technical challenge that springs to mind is capturing all keyboard events in a browser window. This is particularly important because screen readers tend to rely on esoteric keyboard commands, which repurpose keys like CapsLock and Insert as modifiers. I don't know if this can actually be done in a normal web browser.
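
For what it's worth, here's a minimal sketch of the browser side of that key-capture problem (assumptions: a WebSocket to the VM host already exists, and the endpoint URL and message format are made up). It shows how far preventDefault() gets you; CapsLock toggling and OS-level shortcuts are exactly the parts it can't reach:

    // Hypothetical endpoint; the remote side would replay these events inside the VM.
    const socket = new WebSocket('wss://example.invalid/vm-input');

    function forward(event) {
      // Stop the browser from acting on keys like Insert, F1, arrows, etc.
      event.preventDefault();
      socket.send(JSON.stringify({
        type: event.type,        // "keydown" or "keyup"
        key: event.key,          // e.g. "Insert", "CapsLock", "a"
        code: event.code,        // physical key, e.g. "Insert" or "CapsLock"
        altKey: event.altKey,
        ctrlKey: event.ctrlKey,
        shiftKey: event.shiftKey,
      }));
    }

    document.addEventListener('keydown', forward);
    document.addEventListener('keyup', forward);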

Anyway, just throwing out my quick thoughts on this. I don't currently have time to pursue it further myself.


> I don't currently have time to pursue it further myself.

I work for a non-profit where we tackle accessibility issues related to the web, documents, and tech in general. We have a few Vagrant boxes that we use for development and testing, one of them is a Fedora box (GNOME 3 though) that comes with Orca configured [1] so that it doesn't prompt you for setup options. Chrome and Firefox are installed as well. If you have Vagrant and VirtualBox installed you can make use of it like so:

    vagrant init inclusivedesign/fedora24 && vagrant up
The box is ~2 GB. This is the repository for the box in question:

* https://github.com/idi-ops/packer-fedora

* https://atlas.hashicorp.com/inclusivedesign/boxes/fedora24

We track Fedora releases and update boxes fairly regularly so there should be a Fedora 25 one with Orca once there's an official release upstream.

I hope it can be of use to anyone here. If you have any questions we hang out in #fluid-work on Freenode.

[1] https://github.com/gpii-ops/ansible-gpii-framework/blob/mast...


Do you think Fedora is a usable option as a totally blind person? I'm a totally blind software developer and looked at Linux several years ago. It didn't "just work," and since I already have JAWS for my job, which requires Windows, I never bothered trying to use the Linux GUI for an extended length of time. I'll have to look at this.


I'm blind and prefer Fedora because it tracks GNOME more closely, and GNOME seems to reliably get accessibility right whereas Ubuntu/Unity doesn't.

So admittedly my development workflow isn't super high-tech. I do lots of JavaScript, some Rust, and a few others. All my languages have reliable command line tooling, which of course works well under Linux.

Some blind folks advised me to try Windows because it was supposed to make me more productive. I tried it for about a year and a half. I've used Linux since Slackware96, and whenever something failed under Windows I was stuck googling error codes and tracking down system logs. I can launch a Linux system upgrade from the command line. If it fails, it fails for an obvious/searchable reason, and prints its failure cause in the terminal. I don't have to track down logs in non-standard locations, google odd hex codes, etc.

Under Windows, the best I could find for accessible JS/Rust dev was Notepad++. That's just an enhanced text editor. At that rate I might as well use Gedit/Vim under Linux for development, which I do and it works well.

If you're developing heavily in Windows specific tech, then Linux wouldn't be a good fit. But as a technical user I'm quite happy with Linux generally, and Fedora specifically. About the only accessible Windows things I miss are audio games and Netflix, and my VM satisfies most of those. There are corner cases where Orca/Firefox act up, but under Windows there were lots of cases where I fought the OS, so there's just no perfect solution. I'd take a stronger foundation over slightly less accessibility any day.


Windows 10 just works, and now we have WSL (i.e. Bash, so you can run all your favorite CLI tooling). In general, Microsoft has gotten a lot better about Windows just working. Perhaps Linux has too, but you still had to make sure you had fully Linux-compatible hardware and then do extra steps to get Wi-Fi, last time I played with it.

Also, the web browsing experience on Windows is so much better, and the audio stack doesn't fall down at the drop of a hat because you edited a config file wrong (hope you have someone sighted who knows how to unedit it for you). I'm not sure I'd call Linux a stronger foundation; this was not at all my experience with it. OS X is, but then desktop VoiceOver sucks to the point where you can't really program with it (basic things like the terminal do odd things, never mind the 10 or so keystrokes needed to navigate from code to the project explorer in Xcode, not to mention the speech latency). Then they just killed the function keys, which is an additional problem knocking OS X off the list.

But I think the biggest thing about Windows for me is that it's got synths which are capable of being intelligible at upwards of 800 words a minute. Linux didn't even let you get at these settings via Orca last I tried it, and you can't set the inflection either, so it never emphasized punctuation. When your interface is linear and top-to-bottom, the biggest bottleneck in the general case is how fast you can go with the synth, and any platform which significantly cuts this down is therefore not a winner in my book.

But whether or not you agree with my points, I consider it pretty clear-cut that only an experienced blind programmer even has the option of trying Linux in the first place, and certainly not a new one. You need too much knowledge to have even a halfway decent experience. In terms of making things accessible and having them matter, you've got to hit Windows first.


I didn't know that Fedora's accessibility was better than Ubuntu's. I had pretty much stopped using Linux on the desktop because Orca in Ubuntu just left me wanting more. But I'll definitely give Fedora a try now. Thanks.


Do you know if Eclipse works for Java programming with Orca? That's what I use for my job so could not use Linux as my primary OS if it does not work.


This isn't going to answer your question, but the reasons we support this Fedora VM image are that our projects make use of it in CI environments and that one of our team members worked on the GNOME screen magnifier. I saw your question regarding whether Eclipse paired with Orca is a viable, accessible option on Fedora for Java development -- I don't use Eclipse, but I can ask around and get back to you. I would suggest that, if possible, you try Eclipse in the Fedora environment I mentioned with a project you're familiar with; I can help with at least installing and configuring it, but ultimately customizing something like Eclipse is a personal and soul-searching experience :P

BTW, we provide a Windows 10 Vagrant box [1] as well. I just didn't mention it because it doesn't come with NVDA or the evaluation version of JAWS yet. That will happen soon though.

[1] https://github.com/idi-ops/packer-windows


> I work for a non-profit where we tackle accessibility issues related to the web, documents, and tech in general.

What's the non-profit? Expose UX [1] is preparing an episode focused on accessibility for Global Accessibility Awareness Day: startups will get judged by UX judges based on the accessibility of their products. Would love to connect with your org.

[1] http://ExposeUX.com


It's the Inclusive Design Research Centre at OCAD University. My email is in my profile, feel free to reach out :)

http://inclusivedesign.ca/research/ocadu/


This is really cool, thanks for sharing. Will try it out.


Very nice resource, thanks for putting this together and sharing it.


I feel like you and I have had a vaguely related conversation before, but do any of the remote desktop access protocols support piping sound in addition to graphics? That might be a quick-and-dirty way to prototype something. I've been looking for a reason to play with Elixir/Phoenix, and if there's interest in this then I may try some sort of one-click Orca VM that pipes everything back to the browser. Interesting idea.

Also, I feel like there was an early version/prototype of NVDA Remote that ran in the browser. I remember going to a page, turning on forms mode or whatever NVDA calls it (I've been out of the NVDA loop for a while) and I could send keys/get audio from the remote machine. I think that was before the addon was available so I'm pretty sure it was web native, but I could be misremembering. I don't think there's anything preventing transmission of the Insert key, at least. Capslock or other esoteric modifiers may be trickier.


Hmm, I don't remember any such prototype/demo. However, the NVDA remote protocol is pretty simple: JSON messages over a TCP socket. Putting those messages on a websocket should be fairly straightforward. With the Web Speech API landing in Firefox as well as Chrome[0] you could even synthesize the speech at the client. There would still be the problem of sending special key sequences. Not just Insert/CapsLock, but also shortcuts that are normally captured by the browser/OS might get tricky.
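
To illustrate the client-side synthesis part, a rough sketch (the bridge URL and the message shape are assumptions, not the actual NVDA remote protocol):

    const ws = new WebSocket('wss://example.invalid/nvda-remote-bridge'); // hypothetical bridge

    ws.addEventListener('message', (event) => {
      const msg = JSON.parse(event.data);
      if (msg.type === 'speak') {                 // assumed message shape
        const utterance = new SpeechSynthesisUtterance(msg.text);
        utterance.rate = 3;                       // screen reader users usually want it fast
        speechSynthesis.cancel();                 // interrupt previous speech, like a screen reader
        speechSynthesis.speak(utterance);
      }
    });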

Feel free to contact me if you want to develop an NVDA remote server in Elixir. I need a real project in Elixir to do more work in the language. I did some small Elixir projects and like it a lot.

[0]: https://developer.mozilla.org/en-US/docs/Web/API/Web_Speech_...


I'm pretty sure the NVDA Remote that runs in the browser still exists. You have to have the add-on on the target machine, however.


RDP can carry audio.


It can, but not with a suitable latency for actual use. What you have to do is send the speech and resynthesize it on the far end. There's an NVDA issue for supporting RDP channels [0], but NVDA Remote has made this very low priority, and it's potentially blocked by another issue about refactoring how NVDA handles speech.

0: https://github.com/nvaccess/nvda/issues/3564


Even better would be to get something that can run in a docker container that you can just download and press go!


But that's a lot harder to monetize!


There's a guy who has a book on computer vision, and he sells more expensive versions that come with test images, so it might actually be worth it.

See- https://www.pyimagesearch.com/practical-python-opencv/

I've been working on a search engine for lectures (https://www.findlectures.com) and accessibility is an area I'd like to explore, but it's tough to know how to test something as an end user would see it.


This is really interesting. I've often thought about a browser extension that could describe a web page. I don't mean text, as screen readers are (usually) more than capable there. I mean a visual description like:

"The web site has a off white background with black text. There is a horizontal menu at the top that fills the screen 95% horizontally and 10% vertically. The horizontal menu has a navy blue background with white text. There is a logo on the left of the horizontal menu filling 25% of the screen horizontally and 5% vertically. There are five menu items to the right of the logo image in the horizontal menu."

That's probably a little more verbose than it needs to be. But it could be a combination of CV, tapping into the renderer and traversing the DOM in order to best describe a page. Then if my screen reader isn't cutting it and I'm feeling lost I can just have the current view described to me.
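
A very rough sketch of what that could look like under the hood (the size threshold and wording are arbitrary, and mapping raw rgb() values to names like "navy blue" is left out):

    // Walk the DOM and describe every element that takes up a meaningful share of the viewport.
    function describePage(minFraction = 0.05) {
      const vw = window.innerWidth, vh = window.innerHeight;
      const lines = [];
      for (const el of document.body.querySelectorAll('*')) {
        const rect = el.getBoundingClientRect();
        const wPct = (rect.width / vw) * 100;
        const hPct = (rect.height / vh) * 100;
        if (wPct < minFraction * 100 && hPct < minFraction * 100) continue;
        const style = getComputedStyle(el);
        lines.push(
          `${el.tagName.toLowerCase()}: ${Math.round(wPct)}% wide, ${Math.round(hPct)}% tall, ` +
          `background ${style.backgroundColor}, text ${style.color}`
        );
      }
      return lines;
    }

    // Could be read out via the screen reader, or dumped into an ARIA live region.
    console.log(describePage().join('\n'));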


> The web site has an off-white background with black text

I'm genuinely curious now: how would the descriptions of color help?


It can provide two useful functions:

- Context in places where color is used to relay information but the designer failed to provide alternative means of gaining that context without sight. For instance, if a text field is "grayed out" to indicate a certain state on the field, but that state isn't independently communicated by a screen reader.

- The ability to more easily describe the page to a sighted person.


Also, some context for the blind person when designing their own pages. If they're aware that website X is considered to have a good design, they can get a description of it and know what colour schemes, positioning and placement, etc might work for them.


That's a really fascinating idea!


just close your eyes !?


I know you can do that for blindness, but the challenge I see is gaining a good understanding of how people would actually navigate a site. For instance, perhaps someone who is legally blind might use a screen reader, or just magnification, depending on their preference.

From what I've read, screen readers are typically run at a very high speech rate (once you get used to it) - I don't know how you'd know things like that without advice from the people using them.

The second question is how you're supposed to navigate if you can't see the text well or at all - this might require adding features to the HTML to integrate with the screen reader.

I can visualize what my website looks like already, so if I shut my eyes I'd just be navigating it in my head.


Would you say there is a decent market for digital accessibility consultants in order to assist designers in this way? I know a blind person in IT who is actively looking for work like that, but is having trouble figuring out where to start.


I don't know if there's a market for this, but I'd be interested in exploring it. You or your friend can get in touch with me, my email is in my profile.


I don't know either, but I've definitely seen software projects that had "Section 508 compliance" as a requirement that people worked on, so it might come from that budget. My recollection is this was required for certain government sales.


Orca is a bad idea. Very few blind people use Linux because the Linux desktop sucks, and Orca is always at least somewhat behind all the other screen readers. I have met a few blind people over the years who have used it for ethical reasons or because they can't afford Windows, but no one really likes it. Linux is indeed the cheapest option, but it's also almost completely ineffective in terms of making it actually work well for your users.

For starters, even getting low latency out of the Linux audio stack is a major headache, and the synth situation is abysmal. You can't even touch the config files for these yourself because if you break either--even temporarily--you now can't use the computer. Then you get into how all the graphical desktops have accessibility issues to one degree or another and how you have to use a separate screen reader for anything outside them.

What you want, if you want a testing VM that actually has value, is Windows and NVDA. NVDA is free in both senses and kind of the industry standard for sighted testing now. JAWS is still more popular, but this is slowly shifting in NVDA's favor, largely because JAWS costs roughly $1000 per user. The advantage of NVDA is that you can be sure all users can have it, and if something works with NVDA then it's very likely to work with JAWS without too much more work.

But sadly you can't just test with one; in the end you have to test with all of them. Things like ARIA are nice if used correctly, but the ARIA spec doesn't say much about what screen readers have to do, and no one implements 100% of it. It's very close to the situation with needing to test on multiple browsers.


You aren't coming off as arrogant at all. I've thought a lot about this, largely because I'm faced with it every day. There are probably two ways to approach the accessibility problem:

1. Make the inaccessible accessible through clever tools, relying on CV or similar.

2. Make tools that reveal to sighted devs how accessible their software really is, much like you've described.

These two approaches tackle the same problem from opposite ends and would hopefully meet somewhere in the middle. I view #1 as empowerment, getting back abilities that one has lost (or never had to begin with). I view #2 as awareness, giving the sighted visibility into where their software falls short in terms of accessibility.

I'm not sure which would have the biggest impact. But in my mind, I'd rather be empowered by technology. I'd be curious what other blind devs think, though.


I agree that it's good to make our own accessibility where possible. However, computer vision seems like gross overkill for making computer software itself accessible. After all, the information you need is already in the computer; it's just not yet being exposed to the screen reader in a standard way. So processing a screenshot or image from a camera would be very wasteful, though it would work as a last resort. But other hacks to make software accessible? Absolutely! For example, one can imagine writing bits of JavaScript code to fill in gaps in the accessibility of specific websites and applications.
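
For example (purely illustrative: the selector and labels are made up for a hypothetical page whose icon-only buttons have no accessible names), a userscript-style patch might look like:

    // Give unlabeled icon buttons an accessible name derived from whatever the page does expose.
    for (const btn of document.querySelectorAll('button.icon-only')) {
      if (!btn.getAttribute('aria-label') && !btn.textContent.trim()) {
        const label = btn.title || btn.dataset.action || 'Unlabeled button';
        btn.setAttribute('aria-label', label);
      }
    }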


That is indeed a very good idea.

I was blind for several months in the past due to an accident when I was a kid. Fortunately, I was lucky enough that a brilliant professor was able to restore partial eyesight: enough for me to be independent, to be a software developer by profession, and to travel the world whenever I get a chance.

One of the things I found out was that it is very hard to explain to people what it is like not to see. One of the popular questions was "So what could you see?" I don't get that question often these days, but I generally asked them to think about how much they can see with their hands.

When you can't even imagine what it is like, going one step further and imagining how blind people are able to navigate your application/web site is a step beyond even that. Right now you only have things like web accessibility standards and tests for that. They help, but they are not the same as "navigating the app like a blind person".

If there's an easy way to test and experience your app/website then it will also be easier to get a requirement like that past a C-level exec.

Sorry that I don't have much to add at this stage, as I'm in the middle of starting up a new product myself, but you are always welcome to contact me (contact info is in my profile).


The iPhone includes a VoiceOver[1] feature called Screen Curtain which keeps the digitizer on, but turns off the screen. This helps blind users save a lot of battery and has helped me, as a developer, experience using my app as if I was blind.

[1] http://www.apple.com/accessibility/iphone/vision/


For web apps you can install the tota11y [1] browser extension and use its experimental Screen Reader Wand feature to get an idea of how a screen reader will interpret elements.

tota11y uses Chrome's Accessibility Developer Tools. Deque maintains browser extensions that use their own open source engine [2]:

http://www.deque.com/products/axe/#aXeExtensions

Both engines could also be used in CI environments to perform a11y audits. That should help web developers target at least the low-hanging fruit.
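
As a concrete sketch of the CI idea (assumptions: Node with puppeteer and axe-core installed from npm; the URL is a placeholder), the audit could fail the build when violations are found:

    const puppeteer = require('puppeteer');

    (async () => {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto('https://example.com/');                        // placeholder URL
      await page.addScriptTag({ path: require.resolve('axe-core') }); // inject axe into the page
      const results = await page.evaluate(() => axe.run());           // returns { violations, passes, ... }
      await browser.close();

      for (const v of results.violations) {
        console.error(`${v.id}: ${v.help} (${v.nodes.length} nodes affected)`);
      }
      process.exit(results.violations.length ? 1 : 0);
    })();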

[1] http://khan.github.io/tota11y/

[2] https://github.com/dequelabs/axe-core


> For web apps you can install the tota11y [1] browser extension and use its experimental Screen Reader Wand feature to get an idea of how a screen reader will interpret elements.

An idea, perhaps, but that's all it would be rather than real-world data. Blind people don't actually use these tools.


More useful perhaps to spin up a blind person to remotely use your app or site? A sighted developer who only has experience using screen readers with one app is going to be like a chef with no taste buds.


Maybe I'm an idiot and completely missing something here, but couldn't this essentially be accomplished by enabling screen reader software on your own computer and turning off your monitor?


You could easily extend something like this to simulate the various age-related sight degenerations, partial sight, and the varying stages of blindness, as well as the high incidence of sight issues that comes with type 2 diabetes and other common conditions.
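
As a crude starting point, some of those conditions can be approximated in the browser with CSS filters during testing (the values below are illustrative only, not clinically accurate simulations):

    const simulations = {
      blur: 'blur(3px)',                             // roughly: uncorrected refractive error
      lowContrast: 'contrast(55%) brightness(110%)', // loss of contrast sensitivity
      cataract: 'blur(2px) sepia(40%) brightness(90%)',
    };

    // Apply one simulation to the whole page while testing.
    document.documentElement.style.filter = simulations.blur;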

By the time someone reaches 50, there's a good chance that a proportion of the text on their phone, the web, their computer, and even their groceries is becoming unreadable without glasses.

Most app developers haven't a clue. Most of us 50-somethings hadn't a clue 10 years ago! Ctrl-+ in a browser is a brute-force solution. Android's is even worse: a lot of what you want zoomed simply isn't, while it enlarges the parts you can already read just fine.

Compared to many of my age I'm lucky and rarely need glasses, but already it's very annoying!

Something like this could be as helpful as when I first saw colour blind simulators 20 years or more ago.


Fair question. On macOS, you can enable VoiceOver by simply pressing Command+F5, and there's even a built-in tutorial. Windows has a built-in screen reader called Narrator, which you can enable with Windows+Enter, but I don't think anyone seriously uses that for their daily work yet. So to get the real experience on Windows, you'd have to head to http://www.nvaccess.org/ and install NVDA. But that's also pretty straightforward.


The best way to evaluate software as a sighted user is to learn how to use the assistive technology with the screen turned on.

Sure you can play around with the screen turned off to get some sense for the experience, but with the screen turned on you can compare the visual experience with the non-visual experience.

Another issue with testing with the screen turned off is - if an element isn't accessible, how are you going to know that it should have been there...


To add (and perhaps this is a terrible idea due to naivete on my part): if there are different voices/speakers on a page, make a special tunable stream where you can "listen" to all convos but tune in to the interesting ones, kind of like we do at a gathering where we overhear lots of convos but walk up to the ones which pique us, listen, and even pipe in...

The tunable part should be made easy to use from a user perspective, via some kind of "dial" mechanism.
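
A rough sketch of that dial with the Web Audio API (speaker names and sources are placeholders): every stream gets its own gain node, and focusing one ducks the others.

    const ctx = new AudioContext();
    const speakers = new Map(); // name -> GainNode

    function addSpeaker(name, mediaElement) {
      const source = ctx.createMediaElementSource(mediaElement);
      const gain = ctx.createGain();
      source.connect(gain).connect(ctx.destination);
      speakers.set(name, gain);
    }

    function tuneInto(name) {
      for (const [who, gain] of speakers) {
        // Smoothly boost the focused speaker and duck everyone else.
        gain.gain.setTargetAtTime(who === name ? 1.0 : 0.15, ctx.currentTime, 0.1);
      }
    }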


I'm also a blind software developer. I scrape by building apps[0] and services[1] for other blind people, and running the occasional crowdfunding[2] campaign.

First off, you're 100% correct when you talk about how devtools are inaccessible. This problem is an historic one, stretching back as far as early versions of Visual Studio and other early IDEs on Windows. Basically, the people who build the tools make the tools for themselves and, not being blind, make tooling that is inaccessible by default.

I do most of my work on Windows, using the NVDA screen reader, and consequently I have the ability to write or use add-ons for my screen reader to help with a variety of tasks[3]. This being said, this always means more work for equal access, if access is even possible.

I'm interested in any sort of collaborative effort you might propose. Targeting accessibility issues in common devtools does seem to me like a reasonable place to start attacking this problem. I had read a few months ago that Marco Zehe, a blind developer at Mozilla, was pushing some work forward for the Firefox devtools[4], but haven't heard much about that recently, and I think they might be distracted trying to get a11y and e10s working together.

Basically, I'm interested in helping in any way you might suggest, and from the thread here it looks like there are some enthusiastic people at least willing to listen. My email is in my profile, let's make something awesome.

[0] https://GetAccessibleApps.com

[1] https://CAPTCHABeGone.com

[2] https://www.indiegogo.com/projects/nvda-remote-access/

[3] https://github.com/mohammad-suliman/visualStudioAddon

[4] https://www.marcozehe.de/2016/01/26/making-the-firefox-devel...


Is there anything that would be a relatively small change, but would make a big difference to blind developers?

If it's closed source like a feature in Visual Studio or some other company, I'll volunteer to ask them for it.

Maybe low hanging fruit is the easiest way to convince people at first.


I am sighted myself but I work with a company called Bristol Braille Technology and we are trying to make an affordable multi-line Braille e-book reader.

If you have an interest in Braille and have software development skills there might be something to do there. The UI program that drives our prototypes is open source and available on GitHub. https://github.com/Bristol-Braille/canute-ui

We have plans to open source the hardware as well.

If you want to add support at a lower level, our current USB protocol is outlined in this repository. It is a dev kit I knocked together to allow some Google engineers to write drivers for BRLTTY (and thus for ChromeOS). https://github.com/Bristol-Braille/canute-dev-kit


Hi, I'm also a blind dev - successfully been developing back-end systems and libraries at Microsoft for over a decade. There are certainly accessibility problems, but the awesome thing about being a dev is that you can also make your own solutions. Look at T V Raman at Google, and Emacspeak - which whilst not everyone's cup of tea, certainly serves him well.

For any developer, it's important to practice your craft, and when looking for a job, it's valuable to have a portfolio of work you've contributed to. So you can get multiple benefits by helping create a tool which will both help you be more productive and show your skill.

Clearly, this project should be something that you're passionate about, but one project I've had on my when-I-have-time list is below - I would be happy to work with others who are interested (@blinddev @ctoth @jareds).

After your text editor / IDE, one of the next most important tools is a tool for tracking bugs/tasks. Unfortunately, many of the common ones, like VSTS, Jira, and Trello, are either not accessible, or at least not productively usable with a screen reader.

Over my career I've developed my own scripts for working with such systems, but it would be good to have something that others can also benefit from. I should probably put my initial bits on Github, but time is currently consumed by other projects. Email me if this interests you. Also happy to mentor on general career challenges around being a blind software engineer.


Low vision programmer here. I've made a few tools that make my own programming easier, like a lightweight version of Emacspeak https://github.com/smythp/eloud (now in melpa) and just gave a talk on blind hackers and our tradition of bootstrapping: https://www.youtube.com/watch?v=W8_O3joo4aU Would be happy to help out with a project, email at my name + 01 + @ + gmail.com.


I am an adjunct professor in a CS department. I usually end up with introductory level courses, often for non-majors. This semester I have a visually impaired student in an introductory Java course who is unable to see the screen. He uses JAWS as his primary screen reader. To my great surprise, most of the tools we typically use were completely inaccessible to screen readers. I spent the first several weeks of the semester scrambling to find a reasonable set of tools that would work for him. We settled on Notepad++ and the terminal. Also, I provide him with special versions of the slide decks, readings, assignments, quizzes and exams.

I would be very interested to learn how visually impaired developers such as yourself and others got started, and for any suggestions for how I can make my student's experience more positive.

Thanks.


The solution you've settled upon (text editor plus CLI) is pretty much the best you and your student can hope for at the moment. When I was at university, speaking as a screen reader user myself, the very first thing I asked my tutors was whether I could skip all the Eclipse learning and jump straight into command line compilation.

Further down the line, the student might find that programming is no longer so new to them and that they can afford to explore something else. But at the moment, if they're a beginner, trying to learn a tool like an IDE, which is supposed to make your life easier but generally has the opposite effect for blind devs, is just going to confuse matters. So I would stick with what you're doing; you probably won't get any better support from the university's disability services team, because it's not an area they know.

Your student needs to learn, sometimes the hard way, that if you're blind and want to make things, you have to be prepared to do things in ways which go completely against the grain - having to basically use Windows to be productive is one of them - and to solve these boilerplate accessibility problems without becoming discouraged.


Thanks. That is very helpful.


You sound like a good person.


I'm a totally blind developer and have some of the same issues you do. As far as Chrome dev tools go, I've given up on doing any kind of UI work, partially because of accessibility and partially because it does not interest me. My current job is working on a large Java web app. Luckily my company is understanding when it comes to UI, so I don't do much of that work, but I do a lot of API and database work. APIs can be tested using curl, and database stuff can be done from a command line. The advantage of working on the app is that if accessibility gets completely broken, it's discovered early and made to work well enough.

We use Eclipse as an IDE and it works pretty well with JAWS. I've used IntelliJ a bit and it's what I'd call barely usable. I am hoping it will continue to improve; the impetus for adding accessibility appears to have been Google switching from Eclipse to IntelliJ when coming out with Android Studio. Hopefully Google will continue to ensure accessibility improves.

As far as JIRA goes, I agree with you. I'd really like to hear from Atlassian why they can't display dashboards and issue lists using tables to provide some kind of semantic information. I've found your best bet with JIRA is to have someone sighted help set up filters that display what you need. Export the results of the filter to Excel and you can browse a lot of issues quite quickly.

I haven't used GitLab, but I find GitHub fairly easy to use in the limited experience I have with it. I'm not particularly interested in building tools from scratch since I don't have a lot of free time, but I would be interested in trying anything that comes out of this.


I am working on a project to parse an image and synthesize an audio representation which retains all the information of the source image ... next step is to parse live video to enable people to hear what others see ... shoot me an email as well ... it's not specific to dev tools but could parlay into a general enabler ... I am using a Hilbert curve ... nice intro video at https://www.youtube.com/watch?v=DuiryHHTrjU


Wow, lots of potential here. Will definitely reach out.


How would seeing through sound be better than seeing through touch? Like with http://www.radiolab.org/story/seeing-tongues/


I'm not blind, but I would certainly like to contribute where I can. Shoot me an email (it's in my profile).

The world can certainly use more open source accessibility standards, protocols and tools.


I am not blind nor a programmer. But I do have serious eyesight problems and other handicaps. I also have moderating experience. Given the number of people saying "Shoot me an email" I have gone ahead and set up an email list via Google Groups called Blind Dev Works. If anyone wants to use that as a collaboration space, shoot me an email (talithamichele at gmail dot com) and I will send you an invitation.


In addition to this Google Group, I have also created the following GitHub organization: https://github.com/blind-dev

And have started a website to provide a resource for software development and accessibility: https://blinddev.org

Both are works in progress. If interested in contributing feel free to reach out, my email is in my profile.


Out of curiosity, what operating system and screen reader(s) are you using?

As a partially sighted developer, I generally use a screen reader for web browsing and email, but read the screen visually for my actual programming work. So I don't have significant first-hand experience with the accessibility (or lack thereof) of development tools. But some of my totally blind programmer friends have expressed some frustration with the accessibility of some tools, especially web applications. They generally use Windows with NVDA (http://www.nvaccess.org/). At least with NVDA, you can write add-ons (in Python) to help with specific applications and tasks.


Any chance you could start with an education component? I think most of us don't really know the nuances of a blind developer's workflow, especially which tooling breaks down where and if there is anything that is infeasible.


What would this look like? (No pun intended.) Like a web site that documents accessibility issues for common web sites and software? Or a guide that walks through a typical development process for a blind dev?


Something like the latter. I think just a few blog posts would be very helpful.


I would suggest you start a blog somewhere and, initially, just kvetch about things that you find bothersome. If you can get any kind of feedback at all, that can help you figure out what your value proposition is. You only need one person really engaging you to make a big difference in your understanding of what other people most need from you.

The posts do not necessarily have to be long.


I think that's a great idea. I have no idea what development is like for a blind person, it would be very interesting to learn.


I think you are smart to consider your developer skills as a separate thing to improve. One way to objectively measure this might be to explain a technical concept to someone.

For example, could you read this article and then give an overview of the main issues of web site performance? Could you then come up with one recommendation for a performance improvement in a code base you're familiar with? Could you justify in practical terms why your recommendation was the best bang for the buck vs. other possible improvements? https://medium.baqend.com/building-a-shop-with-sub-second-pa...

Now, how do you judge yourself?

1). Have the conversation with a dev whose skills and opinion you trust.

2). Record your answers on audio, and ask someone on HN to give you fair and constructive feedback. Many here would be glad to do this (feel free to ping me as well).


I would very much like to make the D programming language dev tools work better for visually impaired users. Any suggestions you can give would be welcome, pull requests even more so!

dlang.org


Yes please! I'm not blind either, but would love to collaborate / contribute also. My email address is in my profile. Thanks.


Currently, I'm launching an app that reads Slack messages out loud: Team Parrot (http://teamparrot.artpi.net/). Once launched, it will be open source (built in React Native). If you think it's useful, I will welcome contributions.

I am not blind, but I designed it to operate without looking at the screen. If the app takes off, I'm considering forking/pivoting into an RSS reader that also doesn't use the screen. The app is already accepted in the App Store; I'm sorting out launch details.

Please accept my deepest apologies for the shitty job we (the developers) are doing at providing interfaces for the vision impaired.

Probably when we're all old, we'll have vision problems of our own :).


I am a completely blind developer and have been working on and off with code for about 20 years. When I started I was able to see well enough to work without the assistance of a magnifier or screen-reader, now I rely completely on VoiceOver and JAWS.

I too find frustration with some of the tools with which I work. Although they may slow me down, they seldom create complete barriers. Most of my work at this point in time is with PHP and JavaScript, which may help the situation; I am less familiar with the current state of affairs regarding the accessibility of developing with other languages.

All of the complaining I do about JIRA aside, I do find it to be a reasonably usable tool for what I need (page load times annoy me far more than accessibility issues). There are some tasks that I cannot complete (reordering backlog items), but I collaborate with team members, which can help us all to have better context about the rationale for changes.

GitLab I find quite poorly accessible, but thankfully it is just a UI on top of an otherwise excellent tool (git). I find that the same trick that works for evaluating GitHub PRs works with GitLab MRs: if you put .diff after the URL of a PR or MR, you can see the raw output of the diff of the branches being compared.

Debuggers are definitely my biggest current pain point. I tend to use MacGDBp for PHP. This is quite reasonably accessible. It allows me to step through code, to see the value of all variables, and to understand the file / line number being executed. It isn't possible to see the exact line of code, so I need to have my code editor open and to track along.

I haven't found a very accessible JavaScript debugger. For JavaScript and DOM debugging I still find myself using Firebug. I use lots of console.log() statements, and would rather be able to set breakpoints and step through code execution. That being said, other than "does this look right?", I find there is little that being blind prevents me from doing with JavaScript. As recently as last night I was squashing bugs in a React app that I am helping to build for one of my company's customers.

I'd be happy to learn more about any projects you take on to improve web application development tools and practices for persons with disabilities. Feel free to reach out on LinkedIn if you would like to talk.

https://www.linkedin.com/in/ezufelt


Legally blind developer here. I still have some vision in one eye and I make extensive use of it, primarily coding with screen magnification and some spoken text for select code (all using OS X). I've had good success in my career, but I will say I've at times had to work a lot harder to get the same results as fully sighted coworkers.

I'm mostly responding to encourage you to keep at it, and if you haven't tried Mac OS, maybe give it a whirl. Apple is pretty good about accessibility and their accessibility team is very good at accepting and acting upon feedback.


Guess it helps that Apple does employ at least one blind engineer. [1]

[1] http://mashable.com/2016/07/10/apple-innovation-blind-engine...


FWIW, I have it on good authority that Apple employs at least 4-5 blind developers on the accessibility team alone, possibly more elsewhere in the company.


That makes sense. Perhaps the OP should try to get a job at one of the big companies like Google, Apple, Facebook, etcetera. Being blind and having programming experience might be an asset for working on an accessibility team like that.


Would it be helpful for a news site or blog to call out software that won't take easy steps to improve accessibility?

My sight issues are not comparable to being blind, but as an example, I've asked Pandora for simple accessibility improvements for years and they never take action. I've even offered to write the code for them (less than a page).

Would they (and software tool vendors) feel the same way if this were highlighted on a high traffic web page?


There might be something here. I'm personally not a fan of shaming, but documenting where a piece of software falls short and making specific recommendations in a public forum might go a long way.


I have a tool I'm working on, built on top of Atom as an extension, that is specifically geared towards assisting the transcription of printed books to braille for The Clovernook Center for the Blind [1].

A super-rudimentary version is something I'll finish when I have the time in the coming months. I was hoping to get some interest from the blind community and gather ideas for further OSS work involving that general space (editors).


FYI, Atom isn't particularly screen reader friendly.


I'm not blind but I have very poor eyesight in my left eye which makes reading tiring so I started this experimental Morse-based system https://github.com/Hendekagon/MorseCoder for the Apple trackpad. It's not very successful. What I really want is a fully haptic dynamically-textured surface.


Not a blind dev, but I would love to help the community as much as possible, though I don't even know where to start :( It would be really helpful to have a centralized place which directs developers' efforts to valuable open source projects.

Another interesting idea: try using a braille display ourselves, so we as devs will be able to work in complete darkness without any light :)


I’m not blind either, but I too would be interested in contributing.

Send me an email; my address is in my profile.


One of the best (and most engaging) HN threads of the year from my perspective. So much better than another "$STARTUP raised an n-hundred-million-dollar Series Q" TechCrunch article.

Thanks @blinddev.


You're welcome. :)

I was feeling pretty down and wanted something positive to come out of it. Glad to see I'm not alone.


I am not blind but my daughter has albinism and low vision as a result. I have been looking to find ways to contribute and make it easier for her and others.

I would be happy to help.


I am not blind but might have the chance to offer you a dev job (related to blindness). Here's the product we are working on: http://horus.tech. Just send me an email at saverio at horus dot tech if you're interested.


I'm not sure what the protocol is for this kind of stuff (I'm not blind), but, your website being directed at blind people, wouldn't it be better to set English as the main language?


It should pick English or Italian depending on your browser language. If it does not, then it's a bug. I'll look into it.


Using Firefox 49.0.2, auto-detect of language does not work. I'm using JAWS and assumed it was a bad language tag before I turned auto-detect of language off.


I have my OS and browser set to English and am browsing from a Dutch IP address. I get the Italian page. Not sure which part of "Accept-Language: en-US,en" (which Firefox sends) is unclear.

Edit: oh, there is an English button in the top right. Only saw it just now (a flag might be more colorful and easier to spot).


I have a similar setup: OS and browser set to English, browsing from a Brazilian IP address. I got the Italian page as the default.

And I also took some time to find where to change to English.


Using Chrome, getting Italian.


It would only be better if they're targeting English-speaking blind people.



I am not blind, but would very much like to contribute. I have been developing tools and applications for a very long time now.

Do let me know how to contact you.


Dear blinddev,

I'm a sighted student with an upcoming six-week block of time to do an out-of-school project. I have previous experience developing accessible software and would love to work with you. If you're interested, shoot me an email at eliaslit17@gmail.com


> I'm looking at you JIRA, GitLab, Chrome Dev Tools, etc.

I'm not sure that using tools that try to provide a good visual experience is the right approach. Have you tried writing scrapers that provide an optimized textual representation?
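
One variant of that, at least for tools that expose an API: rather than scraping the HTML, pull the data directly and print it as plain text. A minimal sketch against JIRA's REST search endpoint (the host, credentials, and JQL below are placeholders; assumes Node 18+ for the built-in fetch):

    const base = 'https://jira.example.com';
    const auth = 'Basic ' + Buffer.from('user:password').toString('base64'); // placeholder credentials

    async function listIssues(jql) {
      const url = `${base}/rest/api/2/search?jql=${encodeURIComponent(jql)}`;
      const res = await fetch(url, { headers: { Authorization: auth } });
      const data = await res.json();
      for (const issue of data.issues) {
        // One issue per line: trivial for a screen reader to skim at high speed.
        console.log(`${issue.key}\t${issue.fields.status.name}\t${issue.fields.summary}`);
      }
    }

    listIssues('assignee = currentUser() AND resolution = Unresolved ORDER BY priority DESC');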


I disagree, I think we should all have the responsibility to make sure our sites and apps are accessible.


Yes and no.

Yes, we should all make our sites/apps accessible.

No, some sites/apps are great exactly because they offer a better visual representation that allows faster parsing of the information presented to the vast majority with adequate eyesight.

Just because someone develops a great way to visually take in information, that person should not be forced to develop an equally great textual representation.


I'd be interested in discussing this with you further offline. I'm not blind, but definitely interested in exploring ways to help. If you add some information to your profile about how to reach you I'll reach out.


I know you're asking for collaborators and not recommendations for tools, but since you mentioned Chrome DevTools I wanted to make you aware of kite.el [1], which I believe T V Raman had working with Emacspeak [2] at one point. kite.el is unmaintained, but it might make for a good starting point?

[1] http://github.com/jscheid/kite

[2] http://emacspeak.sourceforge.net/


I created a framework for the Amazon Echo. I've been curious if it could help the blind. My e-mail is John At John Wheeler Dot Org. I'd be willing to help you if you could use me.


I have an Echo and it is a life changer for a blind person. I've written a few trivial skills for it, but definitely would like to do more.


I am a 100% blind, (primarily) web application developer. My main focus areas at the moment are PHP, MySQL, and JavaScript/jQuery, with things like Python as a form of side-line. The main thing for me is finding the right alternative tools to use. I work on the Windows platform using the NVDA screen reader, which is itself written in Python, and while I use both WAMP and XAMPP on my dev machine, I use a specific programmer's code/text editor called edSharp, which was created by a blind dev himself. For things like DB management, I generally work with a web interface from adminer.org, which can handle multiple DB platforms.

On the program-l VI programmers' mailing list there are a bunch of guys who work with a wide range of dev tools, for all sorts of platforms, and there are probably a few guys there who might be interested in helping with this type of thing. For example, guys there work with things like Eclipse, Sodbeans, VS.NET, and multiple other tools, at times with workarounds, etc. It really just depends on your primary dev focus areas. Either way, check out http://www.freelists.org/list/program-l


https://news.ycombinator.com/item?id=12820985

FYI, I sent courtesy invitations to nine people who said in this discussion "shoot me an email." One email address provided here was invalid. One or two other people who said "email in profile" did not have an email in their profile. If you want an invitation, contact me (talithamichele at gmail etc).


There's a guy called Octavian Rasnita who's a blind perl developer who I've met before. We basically never agreed on anything but he always disagreed with me intelligently. You might be able to contact him by emailing user teddy on domain cpan.org. No idea if he's a good person to talk to or not tbh but I've always found him worth arguing with.

If you do contact him please blame me so he can shout at me, not you, if I made the wrong guess here.


I'd help, shoot me an email. (I'm not blind)


I'm part of the Blockly team (https://developers.google.com/blockly/), an open source project for visual drag-and-drop programming, usually targeting kids. Despite being a "visual programming editor" first, we are exploring blind accessible (i.e., screen reader ready) variants of our library.

See our first demo: https://blockly-demo.appspot.com/static/demos/accessible/ind...

Right now, it is effectively a different renderer for the same abstract syntax tree. We'd love to see people evaluate the direction we are currently going, and possibly apply the same accessible navigation to our existing renderer.

In terms of dev tools, Blockly blocks are usually constructed using Blockly (https://blockly-demo.appspot.com/static/demos/blockfactory/i...). That said, no one has considered what it would take to make our dev tools blind accessible. The fundamentals are there.

Granted, Blockly programming is far from being as powerful as other languages. It is aimed at novice programmers, whether for casual use or to teach the fundamentals of computational thinking. You can write an app in Blockly (http://appinventor.mit.edu/).

If anyone is interested, reach out to us: https://groups.google.com/forum/#!forum/blockly


My research is on dev tools, so I have a huge interest in this. Also, I interned at Microsoft Research and got to meet a few of their blind devs and see how they are building tools to support others.

There is also a fair amount of research out there on the topic (see Richard Ladner at UW).

Feel free to send me an email if you get anything going!


I'm making web apps and mobile apps and I'd like to learn how to make my apps more accessible. Is there a community where I can ask for assistance with accessibility testing? I'd be willing to pay for testing. Thank you kindly. Nanch.


Just in case you don't know this already: there is a LaTeX package for braille. You can write a .tex file in English and put \usepackage{braille} in the preamble of your tex files, and your output will be translated to braille automatically. The PDF can then be raised-printed (embossed) with the appropriate hardware. You could find it useful for documenting your software (tutorials, FAQs, manuals, etc.) in both formats, even if your collaborators don't know a single word of braille.

You will need to have the package texlive-fonts-extra installed.

You might also want to contact the maintainers of brltty, cl-brlapi, ebook-speaker, or brailleutils.


I’m a programming language developer, and I’m interested in developing languages and programming environments for the blind and visually impaired, or at least making existing languages more usable. Feel free to get in touch.


Great initiative! Are you considering setting up a blog / github page / anything to keep track and coordinate the effort?

I'm asking because though I'd love to help I know I won't be able to commit to it full-time. So it would be great to be able to follow up and get an idea of where the project is going, what areas it is tackling, etc.

Also, maybe a "Show HN" could help spreading to a wider audience whatever you set up.


I have created both, still a work in progress, but once there's something to show a Show HN might be in order.

https://github.com/blind-dev

https://blinddev.org


I wish you well and will help in any way that I can. ld



Thanks so much for posting this. I'm not blind or partially sighted, but I do work on one of the software tools that you mention.

I would love to learn more about how you would like development tools to support you in your work.

I know as an industry we have a long way to go, and I would love to work with you to get us there.

My email is in my profile, and I will also reach out to Talitha. Hoping we get a chance to chat.


Thanks, will reach out for sure.


If you develop frontend stuff, please get in touch at pavelkaroukin@gmail.com. The company I work at is developing a single-page app for higher ed, and we constantly struggle with the proper practices to make the UI accessible to people with bad sight. Who knows, maybe it will be a perfect opportunity for both you and my company.


Out of curiosity, what kind of developer are you? (e.g. languages you know, frameworks, platforms, etc.)


Hi blinddev,

You might want to read "Tools of a Blind Programmer": https://www.parhamdoustdar.com/2016/04/03/tools-of-blind-pro...

Hope this can help.


I am planning to work by blinding myself (covering my eyes) every day for half an hour, starting one of these days. Though I am not immediately planning to work on software for the blind, I might get some inspiration to do so in the future through this exercise.


Not blind, but interested!


Have you heard of the BATS group and mailing list?

http://bats.fyi/


No, but I have now. Very cool.


I am sighted, but I am a developer (primarily web) and I'd like to help. Is there room for people like me?


Yes, shoot me an email (in my profile) and I can invite you to the Google Group.


Sighted person here. I'm very interested in this question. Most developers I know are not considering accessibility as part of the intrinsic design of an app. Blind people are more keenly aware of this problem, unfortunately, because it affects them a little more directly. Accessibility to screen reading clients is considered a "good to have," nonessential, an optimization. And yet when you ask the same people if search engine optimization is considered a "good to have," many will laugh and say no, that it is a necessity, if for no other reason than their clients demand it.

Clients want sites that implement current SEO best practices. What sort of best practices are those? A Yoast SEO plugin, maybe. Developers often mention the URL structure of the site itself, say it's "clean." This might be appreciated by future admins of the site, but it's unrelated to the goal of making pages that can be scraped.

It surprises me that developers and SEOs overlook the difficulty of scraping the web. Keyword density does very little to help a page that cannot easily be serialized to a database. It's true that machines have come a long way. Google sees text loaded into the DOM dynamically, for example. But its algorithms remain deeply skeptical of (or maybe just confused by) pages I've made that make a lot of hot changes to the DOM.

And why wouldn't it be? I ask myself how would I cope with a succession of before and after states, identify conflicts, and merge all those objects into a cached image. Badly, sure. At this point, summarizing what the page "says" is no longer a deterministic function of the static page. Perhaps machine learning algorithms of the future will more and more resemble riveting court dramas where various mappings are proposed, compared to various procedural precedents, and rejected until a verdict is reached.

I wasn't very good at SEO. I found web scrapers completely fascinating, and I spent way more time reading white papers on Google Research and trying to build a homemade facsimile of Google. Come to think of it, I did very little actual work. But I took away a lot of useful lessons that have served me well as a developer.

I realized, for example, how many great websites there are that are utterly inaccessible to the visually impaired. With very few exceptions, these sites inhabit a sort of "gray web," unobservable to the vast majority of the world's eyeballs. The difficulty of crawling the web isn't simply related to the difficulty of summarizing a rich, interactive content experience; they are instances of the same problem. If I really wanted to know how my site's SEO stacks up against the competition, I would not hire an SEO to tell me; I would hire a blind person.


I'd like to help.


> I'm looking at you Chrome Dev Tools...

Puns aside, who on earth would make a blind person work on UI? I think it's better that you parted ways with them, even though I'm sorry you're having trouble finding a good job.

Best of luck.


I'm a blind programmer. I'm currently working on the Rust compiler [0] and a large library for 3D audio that is essentially desktop WebAudio [1]. I'm the kind of person who people often ask for help with their college classes because I went through everything without trouble and came out of college with a 3.9 GPA, and the only reason I'm not making significant amounts of money at the moment is that I have other health problems that I won't go into here (but I would trade with someone who is only blind in a heartbeat). I think I am qualified to say that this is a bad idea.

Firstly, just offhand, the following stacks should be fully accessible with current tools: Node.js, Rust, Python, truly cross-platform C++, Java, Scala, Ruby, PHP, Haskell, and Erlang. If you use any of these, you can work completely from a terminal, access servers via SSH through Cygwin or Bash for Windows, and do editing via an SFTP client (WinSCP works reasonably, at least with NVDA). Notepad++ also makes a perfectly adequate editor, again with NVDA; I'm not sure about JAWS if you're using that.

GitHub has a command line tool called hub that can be used to do some things, and is otherwise pretty accessible. Not perfect, but certainly usable enough that NVDA (one of the most popular screen readers) uses it now. Many other project management systems have command line tools as well. If you write alternatives to project management tools, you will have to convince your employer to use them. Replacing these makes you less employable. You need to work to make them accessible, perhaps by getting a job on an accessibility team.

The stacks you are locked out of--primarily anything Microsoft and anything iOS--can only be fixed with collaboration from the companies backing them. Writing a wrapper or alternative to msbuild that can let you do a UWP app without using the GUI is not feasible. I have looked into this. Doing this for Xcode is even worse, because Xcode is a complicated monster that doesn't bother to document anything--Microsoft doesn't document much, but at least gives you some.

I imagine this is not what you want to hear, but separating all the blind people off into a corner and requiring custom tools for everything will just put us all out of work. If you're successful, none of the mainstream stuff that cares even a little right now will continue to do so, and you'll end up working on blind-person-only teams at blind-person-only companies.

0: My most notable Rust PR is this monster: https://github.com/rust-lang/rust/pull/36151

1: https://github.com/camlorn/libaudioverse


If this is directed at me, I don't think I'm arguing to put blind developers in a corner. The tools that could be developed could either be new or improvements on existing tools. The latter sounds like your preference, but I don't think it is the only viable option. I can envision a whole suite of tools that, while targeting blind developers, could enhance their abilities or even just make what was otherwise inaccessible accessible.

A possible analogy might be crossing a busy intersection. Someone made pedestrian cross signs audible, allowing a pedestrian to know what the walk sign says at any given time. But this is an enhancement of a pre-existing technology. That blind pedestrian will still likely require a white cane, a blind person specific tool, in order to cross the street. I think there's room for both in software development.


I'm almost completely blind in one eye due to an eye condition and would be interested in being involved.




