Introducing choir.io (corte.si)
154 points by tomazmuraus on Aug 15, 2013 | 40 comments



This is an excellent project, congrats.

However, it is in no sense "new or unique" as the authors suggest. There is an extensive body (20+ years) of research literature on data sonification out there, so...

http://www.icad.org/knowledgebase

Note also the (very many) art-led sonification projects, turning everything from live IP traffic to gene-sequence or x-ray astronomy datasets into sound, carried out since the early 90s. Prix Ars Electronica may be a good place to look for these.

My summary of the field in general, FWIW, is this - it's trivial to turn a realtime data stream into sound. It's slightly harder to turn the stream into either a) music or b) non-dissonant sound streams, and it's very hard indeed to create a legible (ie useful, reversible) general purpose sonification framework, because auditory discrimination abilities vary so widely from individual to individual and are highly context-dependent.

Of course, because sound exists in time rather than space, there's no simple way to compare data back against itself, as there is when one looks at a visual graph. Listeners rely on shaky old human memory: did I hear that before? Was it lower, louder? And so on.

That said, I remain fascinated by the area and propose that a sonic markup language for the web would be interesting.

Sneaky plug: My current project (http://chirp.io) began by looking at 'ambient alerts' until we reached the point above, and decided to put machine-readable data into sound, instead of attaching sound to data for humans.

Good luck, and I very much look forward to hearing more!


I certainly don't want to give the impression that we're unaware of the long history of auditory display projects. In fact, it's my reading in these areas that resulted in this project. Keep an eye on my blog for some posts delving into our antecedents and the research around sound perception soon.

That said, what we're trying to do specifically - which is sonification as a service, and trying to adequately cover a very wide range of different use cases and sound sources at once - is probably new. I don't think that matters much, though, and "newness" is the least interesting aspect of the project.


Agreed - and a wide range of services and cases is what makes it difficult/interesting. Looking forward to more.


Very interesting.

Watching log files scroll by, I have noticed that once you have stared at them for long enough you start recognizing the patterns. There isn't enough time to read everything that scrolls by, but quite often you just know that something is out of place.

Maybe these soundscapes could provide something similar in a non-obtrusive way. Just by listening, your brain would be wired to expect certain sounds as a consequence of certain actions. If something goes wrong, you would just know it.

I think one challenge is how to put something like this into use. Setting up the triggers and configuring the sounds feels like too much trouble ("What is the correct sound for this event?"). It might be better to just take a ready-made set and learn the sounds.


Right, this is exactly the kind of thing we're thinking about now. We already have integration with a bunch of services (GitHub, BitBucket, etc. etc.). At the moment, you still have to pick sounds for these on signup, but very soon we'll have flexible, sane defaults. Once this happens we will have onboarding in a few minutes for all of our pre-rolled integrations.


So I’ve been listening to the demo in the background for a bit now. And I think it does convey info in a non-intrusive way, though I’d imagine it’ll take a long while to know exactly what’s going on just by listening.

It seems like the big trick when implementing an app on top of this is appropriately assigning the "level" of the event. Every time the Alarm or Horn goes off it’s fairly intrusive.
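
For what it's worth, here's roughly the kind of mapping I mean - a sketch only, and the event kinds and level names are purely illustrative, not choir.io's actual API:

  // Illustrative only - not choir.io's real payload shape.
  type SoundLevel = "ambient" | "notice" | "warning" | "alarm";

  function levelFor(kind: string): SoundLevel {
    // Keep the loud sounds rare: only genuinely actionable events get "alarm".
    switch (kind) {
      case "build_failed":
      case "error_spike":
        return "alarm";
      case "deploy":
        return "notice";
      default:
        return "ambient";
    }
  }

The point being that almost everything should land in "ambient", so the Alarm and Horn stay meaningful.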

Regardless, an awesome, uniqueº and useful service.

--

º In my experience.


This is great stuff, congrats on launching.

Instead of simply generating a fixed sound for each event, have you considered synthesizing a continuous multi-track score? Like a baseline piece of orchestra music being modulated by the events. Or something like Brian Eno's http://en.wikipedia.org/wiki/Generative_music

Also, perhaps consider streams of data other than discrete events: perhaps continuous metrics like CPU utilization, or stack traces from profiles, or percentiles of latency, or ...


Yes, absolutely. My initial idea was to synthesise all the audio ourselves, using a native desktop client and libpd. Then I realised that we could test 95% of the idea using web audio and pre-compiled sound snippets. And here we are.

Continuous data is one of the very next things we are implementing, partly because a sufficiently dense discrete set of events becomes a frequency, partly to cater for measurements like load. We plan to indicate magnitude with pitch and volume, but there are some complexities in the API and representation that we're working through first.
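
A rough sketch of that kind of mapping with Web Audio - the ranges here are placeholders, not what we'll actually ship:

  // Map a normalised metric (0..1, e.g. load) onto the pitch and volume of a one-shot sound.
  function playMetric(ctx: AudioContext, buffer: AudioBuffer, value: number) {
    const src = ctx.createBufferSource();
    src.buffer = buffer;
    src.playbackRate.value = 0.8 + value * 0.6;   // higher value -> higher pitch

    const gain = ctx.createGain();
    gain.gain.value = 0.2 + value * 0.8;          // higher value -> louder

    src.connect(gain);
    gain.connect(ctx.destination);
    src.start();
  }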


Ok, I'm going to say it -- I don't get it.

The problem I see from the GitHub demo and the discussion here is that you zone out of the "background noise" and focus on the important/out-of-bound/etc. sounds. Great, so why not remove the background sounds and just alert the user with an urgent notification? There's nothing new to this, though; it's just audible notification alerts.

If you are going to run the sounds in the background, your brain is going to tune out the ongoing "normal" sound anyway.


Because the bet is that your brain is much better at figuring out what is important/aberrant and what is normal than the best algorithm you could design.

E.g. if you know a co-worker is on holiday, then a dearth of check-in sounds will seem normal (ideally this feeling of normalcy will be subconscious). On a different day, a dearth of check-in sounds might alert you that your startup isn't making much coding progress (depending on context, not necessarily a bad thing!).


Book on the subject: Gregory Kramer - "Auditory Display: Sonification, Audification and Auditory Interfaces"


Thanks, ordered. There's a substantial body of research in this area, and we're just becoming conversant with it now.


A friend of mine does sonification research and crosses it with musical composition... very approachable NPR story on it: http://www.npr.org/2013/02/10/171639280/want-to-create-a-spa...


For something a little more recent, take a look at http://sonification.de/handbook/ and http://sonification.de in general


I was just thinking about this today - how nice it would be to be able to listen to the pulse of our analytics.

I had the pleasure of meeting the Mailbox app crew at Dropbox's offices a few months ago. They had a really cool light show on what looked like a table tennis net strung up with networked LEDs and pasted to the wall. When a user signed up, it would create a blue pattern across the net. When a message was sent, the screen flashed red. You can imagine the screen was a dancing symphony of visually encoded events -- it was really remarkable and quite beautiful to watch. Chaotic at first, but once you memorized the patterns you could glance at the screen and immediately feel the pulse of the application. After a few hours I think you'd be so in touch with the application that you could recognize errors without even having to check your logs / analytics / etc...

So @cortesi, definitely build in a hook for the Mixpanel API. It'd be great to get a sound every time a user signs up, signs in, or triggers certain events.

I can imagine all the SF startup folks walking around the Mission with boomboxes on their shoulders networked to pick up their audio feed from Choir.io, broadcasting their own encoded analytics melody to the world. Or PMs with headphones on at their spin class, keeping up with their engineers' progress on the new sprint. Ok yes, I'm mocking the movement now, but it's still pretty cool, congrats =)


I've added Mixpanel to our list of proposed integrations. We'll get round to all of them in time. That feeling of being in touch with your applications and servers that you experienced at Dropbox is _exactly_ what we're aiming at. I'll pass on the hipsters with boomboxes, though. ;)

We're collecting integration ideas over here:

http://choir.uservoice.com/forums/217059-general/category/70...

If there's anything else on your wishlist, just chuck it in there.


I really dig the idea and can see the coolness of hearing analytical data, but is it just me or is the GitHub real-time demo super annoying? The first couple of minutes were okay, but in this instance, where there's a constant flow of data playing sounds, it gets really old, really quick.

No doubt a super cool and out-of-the-box idea, but I personally would go crazy if I had to hear water-droplet sounds for any longer than an hour.


It's not just you - the GitHub demo is a tad annoying. The idea was that we might have people's attention for 15 seconds on average, and we wanted them to get a feel for the full spectrum of sounds in that time. So, it's tuned to be much more intrusive than a production stream might be.

Also, whether any particular sound is annoying or pleasant is complicated (we're just figuring out the parameters now) and subjective. So, we're working on letting users create, edit and share sound packs to see what smarter and more talented folks than us come up with.


How about applying real-time effects to the audio? Nothing heavy, just enough that sounds are rarely or never exactly alike. Preferably adjusting the parameters depending on something to do with the event (for example, make the "starred" sound less high-pass filtered the more projects the user doing the starring has)... and then let sound packs define and control said audio filters. Of course this would increase client requirements by a lot, so maybe it would need to be optional. And/or you could simply pre-render a bunch of variations and mix between two of them for each playback, for more variety at next to no cost.
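
Something like this for the per-event filtering, as a Web Audio sketch (the cutoff numbers and the project-count normalisation are made up):

  // Drive a high-pass filter's cutoff from an event parameter, so e.g. the
  // "starred" sound gets less filtered the more projects the starring user has.
  function playStar(ctx: AudioContext, buffer: AudioBuffer, projectCount: number) {
    const src = ctx.createBufferSource();
    src.buffer = buffer;

    const filter = ctx.createBiquadFilter();
    filter.type = "highpass";
    const t = Math.min(projectCount / 50, 1);    // normalise to 0..1
    filter.frequency.value = 2000 - t * 1800;    // 2 kHz down to 200 Hz

    src.connect(filter);
    filter.connect(ctx.destination);
    src.start();
  }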


When we began, the idea was to synthesise everything, and have many perceptual variables to play with. Then we realised that we could validate the idea using pre-rendered sound snippets. We'll turn back to something more sophisticated once we start adding interfaces for levels and continuous measurement, and I'm sure that will wash through to more flexibility in discrete sounds as well.


I think, as I said above in the thread, something you could really benefit from is some quality reverbs, both to enhance the spatial image (different sounds would be in different locations of this virtual space, and thus get different reverb tails), and because reverbs tend to push things into the background, wash them out a bit, and hopefully sort of mush/glue together sounds when they occur in short repetitive succession.

I just had another idea, and you just have to try this and see if it works or what it does: the exact millisecond when an event occurs is not that important. I think you can safely shift an event 20-50ms in time without losing important accuracy. You could use this leeway to space similar events apart slightly, so they don't give that glitchy "retrigger" sound effect as much. It'd take some code, but you could even buffer all events for say 200ms, and use that sliding window to space them out as evenly as possible. I don't know if it would improve things, because it's really not what happens in physical ambient sounds, but it seems to me like it would smooth things out a bit more. You could give it a try.
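
Something along these lines, maybe (an untested sketch in browser TypeScript; the 200ms window is just the number from above):

  // Buffer events briefly, then space them evenly across the window so
  // near-simultaneous events don't retrigger the same sample.
  const WINDOW_MS = 200;
  let pending: Array<() => void> = [];
  let flushTimer: number | null = null;

  function enqueue(play: () => void) {
    pending.push(play);
    if (flushTimer === null) {
      flushTimer = window.setTimeout(flush, WINDOW_MS);
    }
  }

  function flush() {
    const gap = WINDOW_MS / Math.max(pending.length, 1);
    pending.forEach((play, i) => window.setTimeout(play, i * gap));
    pending = [];
    flushTimer = null;
  }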


I think it's badass! Can I hear what it would sound like in a production environment? Maybe just record an hour or something like that so I can get a good feel for what it would sound like. GitHub got too annoying after a little bit. But yeah, I really really want to hear what it sounds like for real.


> How do we construct soundscapes that blend into the background like natural sounds do?

wetter reverbs - in particular, the late reflections are pretty strong with far-away background noises, maybe even stronger than the original sound itself (though I'm not sure if that makes physical sense, it's easy to do with a regular reverb effect, and it really muffles the sound into the background)
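
e.g. a basic wet/dry reverb send with Web Audio's ConvolverNode - just a sketch, and the impulse response and mix value are whatever suits the sound pack:

  // A higher "wet" value pushes the sound further into the background.
  function withReverb(ctx: AudioContext, src: AudioNode, impulse: AudioBuffer, wet: number) {
    const convolver = ctx.createConvolver();
    convolver.buffer = impulse;

    const wetGain = ctx.createGain();
    wetGain.gain.value = wet;        // e.g. 0.7 for a distant, washed-out feel
    const dryGain = ctx.createGain();
    dryGain.gain.value = 1 - wet;

    src.connect(dryGain);
    src.connect(convolver);
    convolver.connect(wetGain);
    dryGain.connect(ctx.destination);
    wetGain.connect(ctx.destination);
  }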

also something with the stereo image.

if funds allow, ask a professional sound mastering studio, maybe? there's people that might know just the tricks.

oh and if you want to place the sound in the room, bury it in the other ambient sounds, tell the users they really need somewhat decent speakers, not plastic desktop speakers and definitely not headphones (even if they're really good headphones).


btw I forgot to mention, this is awesome, I'm loving it, and it actually sounds good already.

By which I mean, I've heard/experienced a couple of other data sonification / generative art installations with a similar concept, and they didn't sound "right", in some sense. Maybe part of it is that the github events seem more "useful" or "natural" than whatever it was (I forgot) those other installations were um sonificating.

But an important part of it is, I think you already chose a couple of really nice and varying sounds. Ones that stand out in the spectrum, and sound good both on their own as well as when they're repeated in quick succession (though for that last situation, some kind of real time synthesis using oscillators would maybe provide a smoother sound)

anyway, well done!


Broken in Firefox 23 on OSX

Error log:

  Blocked loading mixed active content "http://api.choir.io/stream/f9c750f2bedb0c0f" @ https://choir.io/static/media/lib.967f1395.js:8671


Visiting the demo via http instead of https works. (Though of course they should fix that)


Fix will be inbound shortly. We've had a bit of an unexpected bump in traffic today, and we're waiting for our moment in the sun to fade before we revamp things.


Very interesting and highly creative. A few thoughts.

1) If a graphical plot turns data into something visual, an audio "plot" turns data into something audible. Your output is an audio file rather than an image or video file. The typical applications of this are to turn a boolean flag into a chime (e.g. text message received). Your important insight is that this can be extended to longer-form audio outputs.

2) When is audio more advantageous than image or video?

  - When you cannot look at a screen (driving, working out)
  - When there are too many screens (control room)
  - In a very dark environment where visibility is impeded
  - If you are blind or vision-impaired
This could find real application in cockpits/control rooms, to ensure that a pilot is perceiving data even if they aren't looking at a particular dial. It could also be useful for various fitness and health apps that don't need you to look at the screen all the time.

Perhaps the most interesting application would be in a car, which is where people spend a great deal of time and have their ears and brains (but not their eyes) free. Some ideas:

a) Could you generate different sounds based on the importance of a text message (doing something like Gmail's importance filtering), signaling that you don't really need to respond to this particular message right now while driving?

b) Could you have audio feedback for important things along the road? For example, the problem with the Trapster app (trapster.com) is that I need to look at the phone to see where the speedtraps are. You can imagine an integrated audio feed that could give information like this and also tell you your constantly updated ETA (via Google Maps API call). Or you could listen to the pulse of your company on the road to do something semi-useful, and drill down into notable events via voice.

c) The really interesting thing is if you could pair this with a set of defined voice control commands. As motivation: an audible plot can't be backtracked like a visual plot. With a visual plot your eyes can just scan back to the left. To scan back and re-hear the sound you just heard requires rewinding and replaying. But it could be interesting to set up a small set of voice commands that allow not just rewinding, but rewinding and zooming. So you hear an important "BEEP" and you want to say something like "STOP. ZOOM" and set up the heuristics such that this identifies the right BEEP and then gives an audio drill-down of exactly what that BEEP represented.

d) Done right, you might be able to turn a subset of webservices into a sort of voice-controlled data radio for the road. People spend thousands of hours in their cars so it's a real opportunity.


Cool ideas here. I imagine both Google and the military would be interested in building auditory feedback systems into their vehicles/control centers, if they aren't already doing it.

What I think would be a useful addition is transforming 'levels' (as opposed to events) into ambient, continuously playing audio. This is pretty much the "dynamic audio" of computer games.

For example, you could have strings playing according to CPU activity: softly and slowly (think double basses) when activity is low, but more loudly and urgently (cellos) when activity is high. That would create a sense of how busy the server is (if you enable the CPU activity 'channel').
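
A sketch of what I mean with Web Audio, crossfading two looping layers driven by a 0..1 activity level (the buffers and ranges are just illustrative):

  // Two continuously looping layers (calm vs urgent), game-audio style.
  function startCpuLayers(ctx: AudioContext, calm: AudioBuffer, urgent: AudioBuffer) {
    const layers = [calm, urgent].map((buf) => {
      const src = ctx.createBufferSource();
      src.buffer = buf;
      src.loop = true;
      const gain = ctx.createGain();
      src.connect(gain);
      gain.connect(ctx.destination);
      src.start();
      return gain;
    });
    // Call with each new CPU reading (0 = idle, 1 = saturated).
    return (level: number) => {
      layers[0].gain.value = 1 - level;   // double basses fade out as load rises
      layers[1].gain.value = level;       // cellos fade in
    };
  }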

edit: I see cortesi has already mentioned they're working on transforming continuous data now - good job.


Have a look at my co-conspirator's blog post about Choir:

http://alexdong.com/choir-dot-io-explained/

We definitely see Choir fitting in where you can't look at or interact with a screen. Cars and wearable computing are areas we're excited about. First, though, we want to experiment on the desktop, find out what makes a good audio interface, and solve our own burning needs regarding more mundane monitoring situations.


Interesting project (not sure if I would really like it as a service, though -- I think I would personally prefer a library).

Either way, it looks like your signup form has some peculiar ideas about what constitutes an email address; it keeps asking me to input an email address when I type in:

  choir.io@s.hypertekst.net


The sound in the demo doesn't seem to work for me, on Chromium 28.0.1500.71 running on Ubuntu 13.10.

This looks awesome - I've been wanting to set up something similar in our office for some time now, something that makes a sound every time a sale is made, so this could be pretty handy.


Awesome application of sonification[1]. Generating ambient sound based on data is hard; I hope they can nail it.

[1] http://en.wikipedia.org/wiki/Sonification


Watching the github realtime activity with sound was mesmerising. I spent at least fifteen minutes listening to it.

You mentioned there will be Windows and OSX standalone clients coming soon. Will there be an API for writing clients?


I just get a flat tone in Safari 6.1 on OS X 10.8.4. Looks interesting though!


I love this and know what I'll be doing all day tomorrow at work!


I really like it, however the demo makes me wanna pee a little bit ;)


Is it going to be open source?


THIS IS SO AWESOME!

https://choir.io/player/f9c750f2bedb0c0f

Been listening to this for a while now. Love it. Can't wait for a standalone client. Do you have a mailing list? I'd love to keep track of an ongoing feature list of sorts.


Please drop your email into the invites queue. We're working hard on Choir and will have a blog with new features up and running soon. When that happens, we'll drop a line to everyone in the queue.



