The JSMESS Sound Emergency (textfiles.com)
108 points by joepie91_ on July 17, 2014 | 15 comments



Everyone please help Jason Scott and Archive.org and JSMESS. Spend some time and help them out. #archiveteam on EFnet is a good place to start. They're a great organization, and JSMESS is kind of a linchpin for the future of the software preservation efforts there.


All the browser-based emulators I know of (admittedly not many) use the XAudioJS wrapper library for their sample-based audio generation needs:

https://github.com/grantgalitz/XAudioJS


> And Opera is dead – it’s essentially a reskinned Firefox.

Opera is not a reskinned Firefox. It uses Chromium as the base, not Gecko.


The whole of Chromium or just Blink?


Sorry, was tired. I fixed that and a couple other typos. Thanks for reading, everyone.


I've experimented with adding sound to my TRS-80 emulator. It seems like the only workable solution is to use the ScriptProcessorNode callback as the sole source of timing and run the emulator from there. This will still jitter if JavaScript is too slow, but if the emulator is too slow, there is nothing you can do except report it to the user.
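Roughly, the approach looks like this (runEmulatorFor() and snapshotVideo() here are hypothetical placeholders, not my actual emulator code):

    // Sketch: let the audio callback be the emulator's only clock.
    // runEmulatorFor(seconds, out) is a hypothetical function that runs the
    // emulation for that much time and writes the generated samples into out.
    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var node = ctx.createScriptProcessor(4096, 0, 1); // bufferSize, inputs, outputs
    var cachedVideoState = null;

    node.onaudioprocess = function (e) {
      var out = e.outputBuffer.getChannelData(0);
      runEmulatorFor(out.length / ctx.sampleRate, out);
      cachedVideoState = snapshotVideo();  // hypothetical; read later by the draw timer
    };

    node.connect(ctx.destination);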

The graphics are done on an ordinary timer which redraws based on the cached video state from the audio callback. JSMESS could probably do the same.

You'll always have a serious problem when there is more than one timer. If I could change the APIs, I'd add a requestAudioVideoFrame() timer callback where you would supply both the video (rendering) and the audio buffer for a given segment of time. That would also give the browser more control over the situation.


The real problem with Web Audio for emulators and other software mixing scenarios is just that the API is broken by design. There are a bunch of known issues with software mixing/audio rendering in the API, especially if you want low latencies, and there wasn't any opportunity for the community to raise them and get them fixed before Google punted their beta version out onto the web and effectively froze it indefinitely.

On the bright side, the W3C audio working group seems to want to solve this problem, but they don't seem to even have a hypothetical fix yet, so odds are it won't be fixed for a while unless one of the browser vendors decides to fix it themselves and then tries to push the fix through the committee.

At present there are two main ways to do playback of synthesized audio: one is to use a ScriptProcessorNode, the other is to sequentially schedule lots of small AudioBufferSourceNodes containing your audio.

At present, ScriptProcessorNode is unreliable for this scenario because it runs on the main thread and (IIRC) only buffers a single chunk at a time. This means that if a GC pause or other glitch causes your user JS to stop running, the audio will drop out or start looping.

Queuing up AudioBufferSourceNodes is a good solution in theory, but it has lots of implementation/spec issues at present. One issue is that unless your mixing rate matches the sampling rate of the Web Audio implementation (which is unspecified and can vary across machines and browsers; have fun with that), you get glitches at the boundaries between your buffers because of how the resampling is done. IIRC some implementations also have issues with the exact precision of scheduling, where you can get gaps in playback. You also have to deal with some race conditions here, where your user JS can be racing against the mixer thread, and that gets pretty hairy.

I think if you're willing to tolerate extreme latency (like mixing 200ms in advance) and can mix at the native sampling rate of the Web Audio implementation, you can more or less do the kind of software mixing JSMESS wants right now. Of course, I don't think anyone would tolerate 200ms of audio latency for something like an emulator. :-(
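To illustrate, a rough sketch of the queuing approach with that kind of headroom (mixChunk() is a made-up stand-in for the emulator's mixer, and the 200ms figure is just the example above):

    // Mix at ctx.sampleRate to sidestep the resampling glitches, and keep
    // roughly 200ms of audio scheduled ahead of the playback position.
    var ctx = new (window.AudioContext || window.webkitAudioContext)();
    var CHUNK = 4096;                       // samples per buffer
    var nextTime = ctx.currentTime + 0.05;  // small initial offset

    function pump() {
      while (nextTime - ctx.currentTime < 0.2) {   // keep ~200ms queued
        var buf = ctx.createBuffer(1, CHUNK, ctx.sampleRate);
        mixChunk(buf.getChannelData(0));           // hypothetical: emulator fills CHUNK samples
        var src = ctx.createBufferSource();
        src.buffer = buf;
        src.connect(ctx.destination);
        src.start(nextTime);                       // schedule back-to-back
        nextTime += CHUNK / ctx.sampleRate;
      }
    }
    setInterval(pump, 50);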

One future path for this would be to run your whole emulator on a JS worker, and then run the audio core on a separate worker. The audio core worker would be isolated from most GC pauses and it could periodically schedule the appropriate audio events.

At present, as far as I know, you can't trivially offload Web Audio to JS workers, though the committee said sometime last year that they planned to add threading support. IIRC AudioContext is only accessible on the UI thread, so you are vulnerable to GC pauses no matter how you actually feed samples to the mixer, because you have to run code on the main thread to get the samples there.
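In other words, even with workers the sample hand-off would still have to bounce through the main thread, something like this (mixAudio() and queueSamples() are hypothetical placeholders):

    // audio-worker.js: the audio core runs here, isolated from main-thread GC
    onmessage = function (e) {
      var samples = new Float32Array(e.data.length);
      mixAudio(samples);               // hypothetical emulator audio core
      postMessage({ samples: samples });
    };

    // main thread: still the only place allowed to touch the AudioContext
    var worker = new Worker('audio-worker.js');
    worker.onmessage = function (e) {
      queueSamples(e.data.samples);    // hypothetical: wrap in an AudioBuffer and schedule it
    };
    setInterval(function () { worker.postMessage({ length: 4096 }); }, 50);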


I'm not sure if offloading sound to a separate worker would be possible for something like JSMESS -- most emulators I've looked at are single-threaded save for rendering operations in certain safe cases, and sound generation is often inextricably linked with the CPU loop.

Maybe running the entire emulator on a worker would help (assuming it's GC-free)? But now you've got potential issues with input and video latency.


Nice explanation. I had some similar problems with my project found here: http://simulationcorner.net/Sidplayer/index.html

In the end I had to use the queuing of AudioBufferSourceNodes as explained. In Chrome I could use the loop feature and refill the looping buffer myself with a timer. Unfortunately this didn't work under Firefox.

A worker thread for the whole emulation might indeed help, but you have to copy the buffer via messages to the master, which also produces a lot of timing problems. Additionally, copying the whole screen buffer at 25 fps slows everything down.


Transferable buffers solve the copying issue, but I don't know if they are usable for your scenario yet.
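For example (assuming `samples` is a Float32Array produced in the worker):

    // Normal postMessage structured-clones the data on every frame:
    postMessage({ samples: samples });

    // With a transfer list, ownership of the underlying ArrayBuffer moves to
    // the receiver with no copy; `samples` becomes unusable on this side.
    postMessage({ samples: samples }, [samples.buffer]);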


Thanks, found a helpful link: http://stackoverflow.com/questions/16071211/using-transferab...

Is it a standard that you can share a buffer like this? Will it work in all browsers?


I don't know if it's fully standardized or works everywhere, but it's at the very least in the standardization process. I'm pretty sure at least two browsers implement it.


Internet Explorer doesn't have Web Audio yet, but Chrome and Safari (and Opera) should work. If anyone reading this is a dev from those browsers familiar with audio, JSMESS could really use some help here.


Both tests work, but there are issues with glitching and tempo. It also seems that pretty small audio buffers are being used, judging by the sound of the glitches.

Possible future workarounds for JS's lack of realtime guarantees for low-latency apps were discussed recently: https://news.ycombinator.com/item?id=7906674

But I think for now the only solution is "use bigger buffers" (or giving users an easy way to experiment with the latency/safety tradeoff knob).
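The knob is basically just the buffer size; the minimum added latency follows directly from it (assuming a 44.1kHz context):

    // Rough latency cost of "use bigger buffers": size / sampleRate.
    var sampleRate = 44100;
    [256, 1024, 4096, 8192].forEach(function (size) {
      console.log(size + ' samples ~ ' + (1000 * size / sampleRate).toFixed(1) + ' ms');
    });
    // 256 ~ 5.8 ms, 1024 ~ 23.2 ms, 4096 ~ 92.9 ms, 8192 ~ 185.8 ms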


I'm on latest OS X, latest Safari (non-beta), and I tried latest Firefox (non-beta).

The 'good' test works quite well on Safari (except when scrolling), but the 'tough' test doesn't run. Both tests work equally well on Firefox, but only 'OK'.



