Ask HN: Whatever happened to dedicated sound cards?
220 points by Dracophoenix on Sept 1, 2022 | 228 comments
During the '90s and the early '00s, dedicated soundcards were in-demand components in much the same way GPUs are today. From what I know, Creative won, on-board sound became good enough sometime between Windows XP and Windows 7, and the audio enthusiasts moved on to external DACs and $2000 headphones. Today Creative still sells soundcards, but none of them appear to be substantial improvements over previous models.

So what other reasons could have caused the decline in interest? Was there nothing that could be improved upon? Were there improvements on the software side that made hardware redundant and/or useless? Is there any other company besides Creative, however large or small, still holding the torch for innovating in this space?




The main reason for their death, in my opinion, was the DRM-driven changes (although MS claims it wasn't because of DRM) to the Windows driver rules.

When DVDs and HDMI were becoming popular and Windows Vista was launched, a lot of restrictions were put on drivers. I saw many people defending them, claiming it was for better stability, avoiding blue screens, and so on.

But a major thing the restrictions did was restrain several of the sound cards' features, most notably their 3D audio calculations, which were then just starting to take off. People were making 3D audio APIs that intentionally mirrored 3D graphics APIs, with the idea that you would have both a GPU and a 3D audio processor, and games where the audio was calculated with reflections, refractions and diffractions...

After that, the only use of sound cards became what the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.

Gone are the days of 3D audio chips, or having sound cards full of synthesizers that could create new audio on the fly.

Yamaha still manufactures sound card chips, and their current ones have far fewer features than the ones they made during the sound card era.

EDIT: I also forgot to point out that the same restrictions kinda killed analog video too. Before the restrictions, nothing prevented people from sending arbitrary data to analog monitors, so you could have monitors with non-standard resolutions, non-square pixels, unusual bit depths (for example, SGI made some monitors that happily accepted 48 bits of color), or no pixels at all (think Vectrex), and so on. All this died, and in a sense it also affected video development: some features that video cards were getting at the time were removed, and hardware design moved to a narrower path, more compatible with MS rules.

As for what the restrictions have to do with DRM: the point was to not allow people to intercept audio and video as perfect-quality analog signals, since that would be an easy way to get around the DRM built into HDMI.


This is nonsense. The main reason behind the demise of dedicated sound cards: motherboard sound chipsets got "good enough". The value add wasn't adding enough value any more because you can get decent sound quality just by using the default sound output provided by your motherboard.

3D sound and other processing got baked into middleware for games because it became trivial to do all of the processing in software - and the processing became more advanced than anything that the sound card vendors were offering (and they didn't move quickly enough anyway).

Pro audio vastly progressed past anything that is possible to provide in fixed silicon. For input, dedicated USB (and ethernet) audio interfaces progressed to the point where it would be ridiculous to provide such functionality on a general "sound card".

It's just evolution - there just isn't a compelling enough niche for a dedicated sound card any more.


This is the answer. The only people buying dedicated sound cards these days are those doing audio engineering or production work, needing access to dedicated inputs and interfaces. Motherboard sound chipsets cover nearly every other use case.


Correct. Same thing has happened with GPUs. The vast majority of general purpose computers sold today come with integrated graphics. Only those who have unusually heavy 3D graphics needs, like CAD or the latest games at full quality, still buy a discrete video card.


[flagged]


I know bashing cryptocurrencies is well established on HN, but can we please not mention it every time?

This gets annoying.


To add to this - I have a dedicated sound card on my desktop - it lives inside the tiny USB dongle of my gaming headset and makes it emulate surround sound a little bit better. My two tiny, tinny speakers are connected to the onboard audio output. Anything I watch, I watch on the TV, or via a bluetooth headset on the phone or tablet. Anything I listen to, I listen to on the phone via the aforementioned bluetooth headset, or the nice big non-mobile bluetooth speaker.

I USED to have two powerful and rather higher-quality speakers attached to a Creative card back in the day when I did all that with the PC, though.


> Gone are the days of 3D audio chips, or having sound cards full of synthesizers that could create new audio on the fly.

Modern CPUs can either do or emulate this, probably using less power than a sound card.

Very, very few people have their PCs connected to an AV receiver or multichannel speakers, but positional audio is still widely supported in Windows applications using XAudio2.

The reason sound cards went away is that the use cases went away:

1. People who want high quality recording shifted to FireWire and later high-speed USB external audio interfaces. No matter how hard you try, an external metal box with multiple inputs and outputs will always be better than a PCI/PCIe card inside a PC for recording. Rare use case in the recording world for sound cards.

2. Gamers who want 3d/positional audio either use headphones, find the 5.1 integrated outputs to be adequate, or like me, run a digital audio cable to a surround sound receiver. Rare use case in the gaming world for sound cards.

Dolby Atmos is awesome for positional audio in games, but there are multiple less expensive and more accessible methods for surround audio nowadays. Decent positional audio can be experienced using a laptop and headphones -- no sound card required.

https://www.pcgamingwiki.com/wiki/Glossary:Surround_sound

Back in the sound card days you had to squint at the back of the box and ask "is this Creative 3D? Aureal?" Nowadays you just plug 5.1 speakers into your PC's onboard audio, tell Windows you have 5.1, and it works (mostly).


> No matter how hard you try, an external metal box with multiple inputs and outputs will always be better than a PCI/PCIe card inside a PC for recording

USB can't offer as low latency as a piece of well-designed hardware plugged directly into your PCI bus, at least in my own limited experience. This comes into play when doing music keyboard recording.

E.g., I found it difficult to find a USB MIDI adapter that didn't introduce unacceptable latency (when trying to record new tracks synced in real time to existing ones). Edirol was recommended to me, but even after tweaking settings for hours it fell short. I wound up buying a second-hand Creative X-Fi Elite Pro PCIe card and love it.


The latency for USB3 is ~30 μs.

I don't think it's a USB protocol problem but rather a driver/manufacturer problem.


If I recall correctly, MIDI itself had a typical latency of several milliseconds on classic-era dedicated hardware.


Usually the software using the interface (Pro Tools / Ableton) has settings to tweak audio latency via the audio buffer size. I have not had issues with this or with MIDI, and I record a fair amount. MOTU makes a good, cheap USB-C audio interface.
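For a rough sense of how buffer size maps to latency (a back-of-the-envelope sketch only; real-world figures add driver, transport, and converter overhead on top of the buffer itself):

    # one buffer's worth of audio, expressed in milliseconds
    def buffer_latency_ms(buffer_samples, sample_rate_hz):
        return buffer_samples / sample_rate_hz * 1000.0

    print(buffer_latency_ms(64, 44_100))    # ~1.45 ms
    print(buffer_latency_ms(256, 48_000))   # ~5.33 ms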


> USB can't offer as low latency as a piece of well-designed hardware plugged directly into your PCI bus, at least in my own limited experience.

This may be true but I've never had latency issues with USB soundcards. Right now I have a Line6 Helix Floor unit that I use to play guitar with. I can route the audio through the Helix effects, into Logic and back to Helix for more post-processing and have no latency problems.

I have had other brands and models and none introduced perceivable latency.

I don't use MIDI but I doubt it requires less latency than live guitar playing.

I had a PCIe soundcard a few years ago that made it almost impossible to get rid of ground hum though.


I also have latency issues with USB soundcards and MIDI devices, specifically. I tried multiple vendors, and each one introduced ~100-200 milliseconds of delay. The old PCI Sound Blaster Audigy card I had 20 years ago, with the standard DIN MIDI interface, was orders of magnitude better, even running Windows XP.


For audio, is that via DirectSound (if that's still a thing), ASIO, or what?

Last time I did audio, installing ASIO4ALL was essential.


MIDI latency is a pain in the ass; audio latency is negligible with any USB soundcard I've used, as long as buffer sizes and latency compensation in DAWs are handled. Adjusting your MIDI timing to compensate for the jitter of MIDI on a PC is also important.

Or get an external MIDI clock like the E-RM Multiclock and never worry about MIDI latency issues again. Audio is completely fine; 30-50 μs of latency won't ever be perceptible to humans, but the latency of MIDI will.


This is emphatically not true for USB interfaces in general.

In particular, the first readily available set of benchmark results I was able to find[1] suggests the difference in audio (not MIDI) latency between the lowest-latency PCIe cards and the listed USB devices (RME Fireface UFX+ USB3 and RME Babyface Pro [USB2]) is under a millisecond.

While I couldn't find similar results for MIDI I/O, it seems unlikely that either of these USB devices' MIDI interfaces would introduce an order of magnitude more latency than their audio counterparts.

[1] https://gearspace.com/board/showpost.php?p=15796206&postcoun...


USB _is_ well-designed hardware plugged directly into your PCI bus.



Wait until you see how many attacks can be carried out using the Internet Protocol.


I'm not the one making the claim that it's well designed, either.


USB is purely polled by the host: no bus mastering, no DMA. A bunch of frequent interrupts (125 µs uSOF) that need to be handled by the CPU. USB 2.0 is so heavy that its mere presence in a computer (idly polling something plugged in) visibly slows down any <1GHz computer (halving IDE transfers, for example https://www.vogons.org/viewtopic.php?t=89651). Users didn't notice because 2.0 started showing up in 2002 together with >2GHz CPUs.


I don't know. I didn't think a mere audio interface plus soundfonts was an adequate replacement for a really good soundcard like the Yamaha SW1000XG: https://www.musicradar.com/news/blast-from-the-past-yamaha-s... .

Then there's Korg's Oasys PCI, which was so powerful that for a long time people kept using Windows 98 after Korg stopped making drivers: https://www.soundonsound.com/reviews/korg-oasys-pci


For playing older games that relied on that hardware, sure, soundfonts aren't a great replacement. But modern games moved away from needing a soundcard's audio engine and are able to do it completely on the CPU, and the only real benefit of the soundcard at that point is the latency/DAC/amp.


I have sub-5 ms latency with my RME sound card and could probably go lower. What will hurt after a while is the bandwidth at 192 kHz plus some other protocol on top (clock syncing, MIDI…). But we're speaking of more than 64 channels in AND out.

So USB is pretty good, and for most sound cards, USB2 is enough. Otherwise you can go Thunderbolt, which offers an experience on par with PCIe.

What a consumer sound card offers nowadays is a better DAC than your motherboard's, or better output for headsets.


64 channels of I/O at 192 kHz over USB2? That's insane. Isn't USB2's bandwidth 60 MB/sec?

OK, I just checked, and a 192 kHz 24-bit WAV file is only 0.56 MB per sec. Nice.
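A rough sanity check, assuming uncompressed 24-bit PCM and ignoring USB protocol overhead: that figure is per mono channel, and 64 channels of it lands near USB2's practical throughput, which is presumably why very high channel counts move to Thunderbolt or PCIe.

    # back-of-the-envelope PCM bandwidth (raw samples only, no protocol overhead)
    sample_rate = 192_000      # Hz
    bytes_per_sample = 3       # 24-bit
    channels = 64

    per_channel = sample_rate * bytes_per_sample   # 576,000 B/s
    total = per_channel * channels                 # for 64 channels one way

    print(per_channel / 1e6, total / 1e6)          # ~0.58 MB/s and ~36.9 MB/s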


Higher sample rate = less delay, too.


Not when talking USB. You're bound by the polling interval (1 ms full speed / 125 µs high speed).


Not for free though, more cpu.


> Very, very few people have their PCs connected to an AV receiver or multichannel speakers

...in part because there's no way to do that, and if you do it by using the headphone jack, in addition to low quality you're also going to get all the system sounds.


Maybe I am self-selecting, but I don't think I have seen a desktop computer or motherboard in the last 15 years without S/PDIF over TOSLINK or RCA. Hell, for that matter, a bunch of laptops, and even Apple until recently, included mini-TOSLINK optical out via the headphone jack.


Or just use HDMI; you don't even need a display device attached for audio to work.


I've got an ancient Yamaha 5.1 receiver with no HDMI. I'm sending it audio from a Raspberry Pi 4 behind my screen through a cheap USB 7.1 audio card using regular RCA connectors. The extra 2 channels are duplicates of the stereo input (using PulseAudio) and get sent to a stereo amp that goes to 4 ceiling speakers in the adjoining room. I've found that far more reliable than S/PDIF. For example, I can download all of the Dolby test files (including their latest Atmos stuff) and I get 5.1 audio from my old receiver. Using S/PDIF I don't.


I can tell that analog-only motherboard audio is very common, even if it doesn't make sense. This was the biggest filter when I was selecting my motherboard; it limited the available options to just a handful (among reasonably priced boards; the expensive top end of course has all the bells and whistles).


> S/PDIF over TOSLINK or RCA

I don't think most people know what S/PDIF, TOSLINK, or RCA even are.


The motherboard in my PC has an optical output, and it is connected to an external amp that is connected to 2 audio monitors and the sub.


HDMI or S/PDIF


This might be due to computers and headphones becoming portable. When I was a kid my PC had a Sound Blaster connected to a hacky 7.1 setup in my room, and Counter-Strike supported it.


This seems very off base from my recollection and the state of tech availability at the time. The "analog hole" was around for long after the release of Vista, with Vista maintaining support for direct multi-channel analog audio out as well as VGA/component video out at HD resolutions, but that was not a hugely mainstream thing because going analog meant by definition non-perfect results - decoding a digital stream, sending analog, then re-encoding on capture. They started laying the groundwork, but 2007 PCs and laptops didn't commonly have digital output, so playing a DVD over VGA, for instance, was extremely common still and allowed.

And beyond that, this is the first I'm seeing a claim that "doing 3d audio calculations" was restricted and that this had anything to do with intercepting pre-encoded multi-channel DVD/digital media streams. They seem completely separate from each other as far as technical pipelines go.

Sound cards as a general consumer product were dead long before Vista. The last hurrah I remember was the SB Live!/Turtle Beach Santa Cruz era, the 1998-2001 stuff; Vista didn't come out until 2007 (Longhorn was famously botched, etc...).

CPUs just got fast enough that all of that, including 3D calcs, could be done better on a common CPU by the mid-2000s. Do it on a sound card and you have to buy a new sound card to get improvements. Do it directly in the OS or in-game, and you benefit from improvements from the OS, library, or game devs immediately.


I had a decent gaming machine in 2007-2008, and I remember in particular that Battlefield 2 sounded a LOT better with a soundcard. The difference was night and day.

In particular, EAX (Environmental Audio Extensions), a feature of the X-Fi cards, was definitely EOL'd due to Vista's changes around the DirectSound3D APIs.

https://en.wikipedia.org/wiki/Environmental_Audio_Extensions


My recollection as well, and supported by the chart on this page showing a huge drop in sales from 2001-2003:

https://www.tomshardware.com/reviews/future-3d-graphics,2560...

(It’s not a great chart but shows a general trend.)


> CPUs just got fast enough that all of that, including 3D calcs, could be done better on a common CPU by the mid-2000s. Do it on a sound card and you have to buy a new sound card to get improvements. Do it directly in the OS or in-game, and you benefit from improvements from the OS, library, or game devs immediately.

A dedicated chip is often better than a general-purpose CPU (hello, GPUs?). Game audio took a huge step back with Vista and beyond. Since sound cards could no longer do what they used to because of the limited driver model, and developers/studios were more focused on graphical fidelity and physics calculations, nobody was going to waste precious CPU cycles on audio, at least not beyond the bare minimum.


It had nothing to do with DRM.

3D audio on the PC was deliberately killed by Creative.

They sued Aureal into bankruptcy, bought it in the court auction, and the day the sale closed they nuked the support website and took the drivers offline.

They used similar scummy tactics to decapitate other competitors. Then they considered their reverb-based spatial audio solution sufficient, and promptly sat on their heels doing zero innovation while collecting rent.

And then, as chip technology improved, a basic "Sound Blaster 64" chip became so cheap that motherboard manufacturers started bundling it in as a selling point (which made a ton of sense for non-gaming PC users, btw). Additionally, MS stepped in and provided some software spatial functionality within DirectX, as processors had improved to the point where dedicated hardware for it wasn't necessary.

Back then I worked in gamedev, and I briefly considered going into competition with Miles et al. with a 3D audio library after the Aureal fiasco, after I stumbled on some interesting papers doing Fresnel zone tracing variations as low-overhead spatial audio, but ultimately I wasn't serious about it vs other options at the time.


FWIW, nowadays those libraries are quite mature and free to use, with deep integration into game engines available.

E.g., Steam Audio: https://valvesoftware.github.io/steam-audio/


> Back then I worked in gamedev, and I briefly considered going into competition with Miles et al. with a 3D audio library after the Aureal fiasco, after I stumbled on some interesting papers doing Fresnel zone tracing variations as low-overhead spatial audio, but ultimately I wasn't serious about it vs other options at the time.

Which papers were they, if you recall?


> or having sound cards full of synthesizers that could create new audio on the fly.

To be fair, realtime synthesis just became obsolete for most purposes once CD quality digitized audio became cheap enough to store (and later, to stream). And for musicians, once CPUs became fast enough, SW synthesis with its limitless possibilities took over from HW synthesis.


Something like (fragment?) shaders for audio would be amazing though. Or maybe just an embedded low-power CPU (running realtime). I think there's still a lot of room for generative audio (or various degrees of "rendering audio"), or applying distortions like Doppler, various reverbs, or just generating things on the fly via synthesis. You can do things like make each effect unique and give it various custom parameters (material pairs, impact velocity, room conditions, etc.).

I think full 3D audio is a different problem though, because it requires, at the least, a version of the rendering problem (for waves). It's harder in some ways than light rendering (because phase/coherence sometimes matters, and the wave equation is harder to solve), but easier in others (you don't need as much detail as with light, since wavelengths are large), or just plain weird (nonlinear effects from rattling and such).
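As a toy illustration of the parameterized, on-the-fly "audio rendering" idea above (a sketch only; the parameter names are invented):

    import numpy as np

    def impact_sound(impact_velocity, brightness, sr=48_000, dur=0.25):
        # toy "rendered on the fly" impact: a few decaying partials plus a
        # noise burst; impact_velocity scales level and decay, brightness
        # shifts the partials up (both parameters are hypothetical)
        t = np.linspace(0, dur, int(sr * dur), endpoint=False)
        partials = sum(np.sin(2 * np.pi * brightness * (k + 1) * 200 * t) / (k + 1)
                       for k in range(4))
        noise = np.random.randn(t.size) * 0.3
        env = np.exp(-t * (20 + 40 * impact_velocity))
        return impact_velocity * env * (partials + noise)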


Generating sound on the GPU via shaders is definitely a thing. There are a bunch on Shadertoy that do just that: https://www.shadertoy.com/results?query=&sort=popular&filter...


There are still sound cards with programmable DSPs, quite often used to replicate high-end effects. They cost a lot - and every plug-in is specialized to a brand. Still quite useful, because the quality of those is very high, and they don't impact the recording process.

Or there are still some "generic" boxes (I mean ones you can program yourself) like the Symbolic Sound Kyma Capybara. They're quite niche though, like modular synths.


Sounds kinda like ray tracing and physically based rendering to me


Also, disk space became cheap enough that games could store audio files (such as MP3s) instead of having the sound card generate the audio on the fly. I remember Age of Empires' (released 1997) music was MIDI files, and the "instruments" would be changed by the game's code to make the music sound much better. EverQuest (1999) also started with MIDI files, but later expansions replaced the music with MP3 files.


Surely this is like the raytracing scenario for GPUs.

There are always slightly harder and better ways of doing something that accelerators are better at. Audio acceleration, I guess, peaked too early, or they just couldn't get the tech demos as impressive as graphics.

I remember reading about a demo, which I believe was from Matrox. They managed to get a 3D audio environment working over a pair of stereo headphones that was good enough that you could play Doom headless. Just close your eyes and you could tell where people were.

A lot of what I read here is that prerecorded samples are good enough, in the same way that raster lighting is good enough and ray tracing is a waste of time.


Diminishing returns. But IIRC the thing that killed 3D audio was patent wars, like what happened to force feedback.


I honestly think it is the same as the iGPU.

It is cheap enough that it is bundled with everything. But that doesn't make it better than a discrete GPU/Audio chip.

Intel GPUs are barely good enough to run an OS desktop yet they hold the majority of the market.


Huh? Force feedback and 3d audio both very much still exist.


Sure, that's what I meant by "once CD quality digitized audio became cheap enough to store". Both the fact that once games started to be shipped on CD, they could literally play audio tracks straight from the CD, and the fact that faster CPUs and better codecs made it feasible to ship compressed audio and decode it in realtime.


> The main reason for their death, in my opinion, was the DRM-driven changes (although MS claims it wasn't because of DRM) to the Windows driver rules.

Creative drivers were a double-digit % of all Windows BSODs. Microsoft gave Creative plenty of time to fix their drivers, Creative never did, so sound drivers got booted from the kernel.


The best for competitive sound now is the Sennheiser GSX, which is an external USB DAC/amp. It has a good 7.1-to-headphones mode that gets you about the best surround sound on headphones for games/movies you can get; it impacts the tone the least and has one of the best HRTFs I have heard in years. But it pales in comparison to the cards we had 20 years ago. I miss my Aureal A3D.


Ditto. I can't say how much of it is pure nostalgia, but I feel like Counter-Strike 1.x on my old Turtle Beach Montego II gave better positional sound than any game/hardware does nowadays.


The differences may be all in my head, but I've been very happy with a USB Dragonfly DAC and a pair of quality headphones, along with high/master quality input.


Is it better than Dolby Atmos you think?


In Overwatch it was, when I tried that a few years ago. While Atmos gave you some sense of vertical positioning, generally it wasn't correct and I struggled with positioning. Theoretically Atmos ought to be miles better; it's object-based like the sound cards of the early 2000s, but in practice they have got something wrong in the headphone implementation and positioning is hard to pick out. It's better than plain stereo, but the positioning is a lot better on the Sennheiser device.

Whether Dolby Atmos has improved since then, or other games have implemented it better, I don't know. I feel like we probably need an open-source middleware implementation for mapping object sound to headphones/surround speakers to really fix the situation.


> I saw many people defending them, claiming it was for better stability, avoiding blue screens, and so on.

If you've never seen a system BSODing from the sound drivers, I'm glad for you. I've seen enough sound card driver crashes to tell you that it WAS a problem. Along with network cards, video cards, TV-tuner cards, and almost anything that needed a driver.

> After that, the only use of sound cards became what the drivers still allowed you to do, which was mostly playing sampled audio, so sound cards became kinda pointless.

Discrete sound cards became pointless because by 2001 almost every consumer motherboard had an AC'97-compatible audio codec on board.

So if you didn't need a super-extra-fidelity 5.1235435 sound system AND didn't want to shell out an additional ~$100 (SB Live! in 1999) or $200-300 (SB Audigy 2 in various variants, 2003), you could just use the onboard one.

> having sound cards full of synthesizers that could create new audio on the fly.

NO THANKS: https://youtu.be/3AZI07_qts8?t=9

And this is a Creative card! I had my share of good synthesized music (because computers couldn't yet do proper digitized sound), but the tech needed to die, and it did.


The sound market was always a mess.

Games ended up congealing around the Sound Blaster standard, which put Creative in the centre of the sound universe. Everyone else was always just "SB compatible", which meant they were playing for the "$10 placeholder sound card" market in the Pentium days. By the time we were getting onboard audio (I think my first board with it was Socket A), it was all hidden behind DirectX, and the market became the "90 cent placeholder chip soldered onto the mobo" market, and then they're all undercut by Realtek.

Unfortunately, Creative is a mediocre steward of the premium-sound landscape. Their product matrix is complicated, support is all over the place, and the drivers are sketchy. I have an Audigy RX that I pulled out of circulation because it could crash two different boards (B550 and X570) and the general consensus was just "they're not that compatible."

I suppose that market technically didn't crush everyone else; there was still the pro-audio market, but that had entirely different needs than a typical home enthusiast. If you're building a studio PC to run specific studio software, you can put up with a narrow compatibility list and finickiness.

But it feels like there's a reasonable niche there for the "eager to throw money around" audiophile crowd. Cards full of high-markup capacitors and filters, so you can claim to offer cleaner power and a lower noise floor, seem tailor-made for unlocking those wallets. Where is that card? Although I suspect that audience just pipes stuff out via optical to an external DAC, because the inside of the PC must be full of RFI.


I think part of the issue is that once you get super picky about audio quality - whether from a producer or listener perspective - you also want a physical experience. You want a box with knobs and buttons on it, lots of I/O, wireless capabilities and whatever other features. The classic PC sound card wasn't that; it did make the PC play and record stuff, but it was positioned as a way for consumers to play games and for professionals to record demos (before taking it to a real recording studio). The professional digital recording systems were sold as whole systems, of which a PC could be one part, but always had a proprietary hardware element as well. [0]

For the masses, the high end today is mostly encompassed by a USB headphone DAC. Headphones get you high quality in a small form factor, and a headphone DAC doesn't need a lot of power or I/O. Once you go bigger, again, physical experience takes hold. People want their vinyl collections and so forth in their listening room, and thus where there's demand for digital, it's usually outside of the classic PC form factor too - it could be an iPhone and a Bluetooth speaker, or a dedicated receiver for the home theater setup. Going this route means it can (if built carefully) avoid crashes and updates interrupting the experience.

[0] e.g. early versions of Pro Tools https://www.pro-tools-expert.com/home-page/2018/3/27/a-brief...


The PS5 has an audio chip loosely based on the design of the PS3's Cell CPU. It's used to compute HRTF 3D audio. It's really cool, but it's basically the only modern example (not sure what Apple is doing for its 3D audio), and of course it's a console, so it's not a separate, user-replaceable sound card.

It's a shame, because I would love for more audio sources to support HRTF (head-related transfer function) and "ray traced" audio.


Games can have HRTF if the developers want to include it; it doesn't take a fancy sound card to make it happen. Counter-Strike: Global Offensive had an update a few years ago to implement HRTF; it's now labeled as the "3D Audio" option in the game. It works on pretty much any modern sound card.


+1. It may not be HRTF, but the game with by far the best positional audio at the moment appears to be Crytek's Hunt: Showdown. Sounds can be pinpointed with amazing accuracy. Oftentimes, one can shoot blindly through a wall just based on the noise an opponent makes, and score hits.

The game deliberately includes many sound sources to facilitate this, such as stepping on various surfaces, glass shards on the ground, and wildlife making noise based on player proximity.

This works amazingly well on regular, on-board PC sound chips, though headphones are pretty much mandatory.

(Disclaimer: not affiliated, just a fan).


Thank you. Are you able to adjust the positional audio to "better fit" your ears? The PS5 comes with a few presets, but unfortunately it feels like my ears are somewhere between two of the presets, so with one preset sound sources feel lower than they should, while with the next they feel higher than they should (compared to a reference sound that is at ear level).


That chip is responsible for much more than HRTFs too. It can handle a huge amount of 3D audio-related DSP effects and decoding which are all way more compute intensive than the HRTF, which is performed once at the very end of the signal chain for the headphones.


How would HRTF be "performed once at the very end of the signal chain"? Don't you have to transform every individual signal/position before mixing? On the other hand, I read somewhere that Atmos is encoded as an array of filters with positions, so decoding is merely a Fourier decomposition; I would love to learn more about that.


There are a few different models at play: surround sound like 5.1, 7.1, ambisonics, and the 7.1.4 Atmos static bed; and object-based, where mono point-source sounds are attached to a location. The former, traditional models can be interpreted as individual objects positioned at the speaker locations and folded down to stereo by passing through the HRTF that way. It's a mixed signal, so it really is at the end of the chain. Object-based sounds are more precisely located but have other downsides (e.g., they break our mixing concepts for things like compression and reverb), and each object would need to be upmixed to binaural stereo through the HRTF.

Higher order ambisonics strikes a pretty good balance in terms of spatial resolution while still being a mixed signal. You can then pair it with objects for specific highlights. Atmos is a 7.1.4 static bed plus dynamic objects, so similar idea. In either case, most of these 3D sound systems support very few dynamic objects. For example, Windows Sonic only supports 15 dynamic objects on Xbox: https://docs.microsoft.com/en-us/windows/win32/coreaudio/spa...
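For the object-based case, the fold-down to binaural is essentially a pair of convolutions per object with measured head-related impulse responses. A minimal sketch, assuming mono is the dry source and hrir_left/hrir_right are hypothetical arrays for the object's direction (e.g. pulled from a SOFA measurement set):

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # convolve the dry mono object with the left/right HRIRs and stack
        # the results into an (n_samples, 2) stereo array
        left = fftconvolve(mono, hrir_left, mode="full")
        right = fftconvolve(mono, hrir_right, mode="full")
        return np.stack([left, right], axis=-1)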


Thank you. Do you know if Sony has publicly released any more technical documentation about it? I know Sony put out that video with Cerny around the time of the PS5’s release, but I don’t know if there has been anything else.


Nothing public I’m aware of, unfortunately. I wish they talked about it publicly in more technical detail.


ok you are SUPER misinformed about the motivation for the driver architecture change.

drivers live in the kernel, and prior to Windows Vista, they had the same rights as the kernel. pre-Windows Vista, a driver could easily be malicious and exfiltrate anything it wanted to anywhere it wanted. the driver architecture change fixed this gaping security hole, while still allowing drivers to exist.

drivers needed to be rewritten to accommodate LARGE changes to how they needed to do their work, and the result of that is that drivers which interacted with hardware directly no longer could, they had to ask the kernel to do stuff, and the kernel could say “no.” imagine being a driver maintainer and needing to react to this change.

this change often required a complete rewrite of the driver. this is why drivers of the era were so feature-limited.

this architecture change allowed kernel-level DRM drivers to become a thing, but DRM would have happened with or without any changes to driver arch, i assure you.

everyone suddenly needing to rewrite their drivers is what caused drivers to appear limited in the new paradigm. it simply took time to reimplement everything that existed in the old driver model, and people wanted working drivers before everything was implemented in the new drivers.


> a driver could easily be malicious and exfiltrate anything it wanted to anywhere it wanted

This is a thing too, but the main problem was that kernel-level drivers not only could, but WOULD, crash the kernel (i.e., cause a BSOD) if something went wrong. And things went wrong very, very often.

People like to bitch about changes in NT6+, but I have seen waaay fewer BSODs (not related to botched hardware) after that.


thank you, i forgot to mention that.


So funny you mention the whole 3D sound thing because I recall at least two computers I bought in the 90s having demo "games" where you flew a bee or something around in order to hear how 3D it was


Very true for PCs but it’s starting to shift with both consoles and receivers with Atmos decoders. For example, the PS5 has a custom audio DSP chip with 3D sound capabilities for reverberation, spatialization and more.


I think the same applied to 3.5mm audio jacks being removed from smartphones and similar products.


Soundcard-like devices called "audio interfaces"[1] -- now usually USB breakout boxes -- are alive and well in the professional audio segment, targeted at musicians, recording studios, video editing shops, and similar applications.

They're not necessary for consumer apps. Consumer audio applications got "good enough" with mass-produced, built-in motherboard "soundcard on a chip" parts that basically replicated the function of the old soundcards at a much lower price point.

If you want to, say, connect 16 microphones at once and record to 16 separate tracks, or you plan to apply a bunch of digital effects and therefore want a much higher sample rate than what your consumer audio chip can do, you can buy an audio interface.

[1] https://www.sweetwater.com/shop/studio-recording/audio-inter...


Also required for fancy work-from-home microphone setups.

If you want the Shure SM7B, you need an audio interface (and probably also a Cloudlifter or DynaMite to bump up the gain).

Lots of streamers, podcasters, and YouTube people use them.


Actually, just an analog mic preamp (and for the SM7B or other very low output mics: one with super low noise or a cloudlifter in between) or compact mixer will get you up to line level, and then you're good to go with a motherboard sound interface if it has a line input jack.

A better ADC (like in virtually any outboard audio interface) certainly doesn't hurt and would definitely be advisable for musicians, but it's far from necessary for a youtuber / home office use case.


I like the way you say "just" :)

My experience is that a high-end device like the MixPre II is fine with a dynamic mic. A Focusrite or Tascam US needs a Cloudlifter to get enough (clean) gain. Random stuff like the Blue Yeti is hopeless, with crackling noise.

So maybe you don't need a great ADC but you have to choose your audio interface very carefully (for its preamp.)


Yes, the preamp quality is more crucial than the ADC quality when it comes to using mics with very low output, unless you use a Cloudlifter, which shims away this issue. You're way more likely to be dissatisfied due to preamp noise than due to a cheap ADC.

Typical interfaces have a fine ADC and some kind of pre, but the pre might be noisier than desired when paired with an unusually low-output microphone, so if you already have an ADC via motherboard line input then you "just" need a pre. I don't mean to lump all dynamic mics into this problematic low-output category (SM58 is much higher output than SM7B, both are dynamic) but we could certainly say that condenser mics should never be in this category when a few inches from someone's face.


It sounds like you know more about this stuff than I do - I’ve spent hours searching online but it’s hard to find good info.

Why are mics like the SM7B better than mics like the MV7 or the Elgato one that just work over USB? I don't understand how they actually work or why all the extra equipment (audio interface, gain booster, cables) is worth it - what's the tradeoff being made when you use USB?


All mics sound different. Whether or not the mic includes a USB interface doesn't inherently make things worse. It just happens to be that the best sounding mics rarely come with an integrated USB interface. No need to use the best if you're happy with something else.

It's like asking what makes a VM better for a website when I could just use a SaaS site builder. Use whatever works; depending on your goals you might end up with the exact same finished product.


The SM7B is just trendy because a lot of famous podcasters use one. For just recording dialog, or doing video calls something simpler would be just fine.

Getting super high end with microphones is only really applicable in the music industry IMO.

Fun fact: Michael Jackson recorded the vocals for Thriller on an SM7B.


Another use is rendering virtual instruments with low latency, so you can play with a keyboard and render sounds live. Built in sound chips usually have unacceptable levels of latency, even with custom drivers.


It's been a while since I broke out & tuned JACK latency, but I think 2 periods of ~3 ms worked fine on most devices I'd tried, even without basics like a preemptible kernel, on boring old ancient Intel chipsets. I'd expect most apps have no trouble getting under 16 ms on basically any x86 hardware.

I'd be shocked to find that the majority of these aftermarket devices do any better at all. Many are USB, which, even if you do have a fancy isochronous device, I'd still expect to be significantly slower than in-chipset or PCIe.

I could be totally off! But I don't think there are really specific chipset capabilities that make a big difference here (other than isochronous transfers, which only help to counter the downgrade of using USB). Windows has ASIO for low latency, but is that a hardware capability thing? I think it's just drivers, and that modern tech like PipeWire gets many of the benefits for free by more closely mapping hardware resources to where apps can use them. I thought ASIO was mostly a product segmentation thing. It'd be neat to take a software virtual adapter like ASIO4ALL & see what kind of buffering really is required there, and see what latency that brings consumer gear down to.

I do also remember Android fighting to get their latency down to reasonable levels, which, counter to my point, suggests latency in general is somewhat hard. There are far fewer system resources there, and not missing a tick is harder to ensure, but IIRC the bigger issue was just that modems & the regular audio subsystems had really funky audio driver paths that had been slapped together for a really long time, & some modernization was desperately needed. This was like... 7+ years ago maybe?


> Windows has ASIO for low latency, but is that a hardware capability thing

Yeah, there are replacement ASIO drivers for onboard sound, such as ASIO4ALL or FlexASIO. But I can agree with TremendousJudge that I've never gotten reasonably low latency with them, no matter my buffer size.

CoreAudio on Mac works just fine, however.


I can only speak for my own experience, but the cheapest Focusrite outperformed my builtin soundcard with ASIO4ALL drivers both in latency and quality. On the builtin, getting the latency lower than 12ms resulted in noticeable glitches.


I came here to say this - USB sound cards are the kind of thing you can lease or rent out from music shops these days, which is pretty cool if you just need to DJ your cousin's wedding or something for a day on your laptop.


Another use case is decent vinyl rips.


Digitally mastered audio laboriously printed to analog vinyl so it can be shipped to you and then ripped into digital format (and probably listened to on EarPods). Delightful.


I get the point you're making, but you're assuming a great deal.

Vinyl records came out in the late 1940's. The first digital mastering was 1979 -- and it was quite a while before that became the default. So there's a huge window of exclusively analog mastering.

And in both cases the likelihood that a random consumer would have access to the original (digital) master is negligible.


Plus the decades of shellac before that…


So you're saying that if a recording is digitally mastered, that negates any reason to print it to vinyl? If so, I would be interested in hearing more (not very familiar with this stuff).

Besides that, I would love to have access to some music, but it was only released on vinyl and it's too expensive for me. So if someone with the vinyl could do a decent rip and upload it somewhere, I'd love that. That's the point I'm trying to make.


People who spin music on discs professionally could A: travel with their expensive vinyls in a case to whatever locations there may be, including when they get gigs in places like festivals in the jungle where the heat will melt those expensive things or B: Be happy this is a solution :)

Back in the 90's in places like Goa the solution was using stuff like Sony's DAT format, or even minidiscs.


Some people prefer spinning plain vinyl rather than using a DVS + control plate.


Pure digital audio and huge hard drives killed them. Back when we were chasing better and better synthesis - FM, wavetable... - playing back CD-quality digital audio wasn't possible: you were lucky to have 23kHz mono, or nothing at all, your hard drive was tiny, and MP3 wasn't a thing yet... Every sound card upgrade was literally music to your ears.

Now every computer has a little chip that plays back at least CD quality audio from an infinite pool of storage and RAM. Nobody wants to hear MIDI in their games anymore. I'm not even sure what a better sound card could even do for me - reduce line noise, drive high impedance headphones or something. Boring!


I had a girlfriend who was a musician and back in the late 90s, home recording was rare (but starting to become a thing). We found that the cheapest sound card from a music store had lower noise and better signal-to-noise ratio than even the best (or most expensive) sound card available at computer stores.

For games today, I use the audio interface that came with the computer. For dealing with my synthesizers, I use a Scarlett by Focusrite [0].

[0] - https://focusrite.com/en/scarlett


Yeah, back in that era there were products from Ensoniq and such that weren't popular with PC gamers, but were totally solid hardware for music.


I worked at a CompUSA in high school, and I remember a specific white-box sound card that was frequently sought after by musicians. IIRC it was $15-20.

Sometimes they would ask to open the box to see the chips on the board.

I was all about 3D acceleration and didn’t have the money or interest to do anything with audio so I never learned much more about it.


I guess for a few years there might have been a market for hardware MP3 decoders. I faintly remember back when playing MP3s did take a sizeable chunk of CPU time. But after a Moore cycle or two the cost was negligible, and today Spotify probably spends much more CPU on drawing its interface than decoding the audio stream…


Not really on PCs, IMHO. With well-optimized software, you could decode in realtime on an i486 at 100 MHz; a Pentium 75 could run Winamp and mIRC simultaneously, IIRC. If you didn't have enough CPU to decode MP3s, it probably made more sense to upgrade that rather than buy an MP3 decoder card (if they existed). MPEG-1/2 (video) decoder cards were more necessary.


Thanks for the flashback. I too remember when playing an MP3 took a good chunk of processor utilization, maybe 40%, and doing too many other things at the same time would cause it to stutter. I also remember when 1080p and 4k were taxing respectively.


And anything I can think of that I'd want a sound card to do these days is being handled by GPUs - like that NVidia AI noise cancellation, where your kids can scream 5 feet away and no one can hear them on Zoom.


The only computer I've ever had with what I consider an acceptable level of line noise was a MacBook Pro.

Even a $15 USB-to-dual-3.5mm adapter sounded significantly better on other machines.

I experienced this annoying line noise on ASUS, MSI, and Dell motherboards.


Audio production, I would claim, is the only niche the Macintosh has continually dominated since the first Steve Jobs era.

Even desktop publishing took a turn towards Windows in the mid-to-late '90s; whether the Mac stayed "dominant" is a fuzzy question, but it was clearly losing market share.

But you won't find a professional music studio without a Mac; this has been true since the late '80s. This is not to say it can't be done with other equipment, just that as a matter of practice it isn't.


I've heard that having your analog audio lines in the same box as a bunch of other high frequency power circuits is usually a recipe for line noise, which is why external DACs can typically provide a better experience.

I have a Sound Blaster something-or-other in my PC but it's not connected to anything these days. I use a tiny Apple headphone dongle that was ~$10, has 0 noise and sounds great.


My friend had a great sound card while I didn't. I remember being jealous of how good midis from vgmusic.com sounded on his machine compared to mine. Night and day.


I used to call the Sierra 800 number and press a key to speak to a representative, just because the hold music consisted of selections of their in-game music played on the Roland MT-32. It sounded amazing compared to my Sound Blaster.


Hi HN, long-time reader - this is the first post I felt super compelled to respond to.

This is a massive bugaboo in the audio industry in my opinion.

I have always been a PCI soundcard user - and still am to this day - but industry trends are stopping this. I think a big part of this is due to laptops/iPads and the like becoming more popular devices, and also usability: companies optimize for successful adoption into a user's system rather than for technical specifications.

I started my DAW with a Terratec soundcard with midi + stereo audio ins and outs roughly 20 years ago.

Fast-forward 7 years: I bought an early USB interface, the NI Audio Kontrol 1, to use with a laptop. I could run everything on it - take it out and about - cool!!

Fast-forward another few years, and I got more serious about audio and bought a Lynx PCIe AES card (now without MIDI) to use with an Apogee Rosetta 800 (8 in / 8 out). Now we're getting there. But not an all-in-one solution.

In 2022, surprisingly, the only (?) companies doing full PCIe audio solutions are Lynx and RME. In a fresh session in FL Studio or Ableton, with a sample buffer size of 64 (the lowest), I enjoy latency of 0.72m/s. This can't be beaten by USB. However, that's not a deal breaker for most people, sadly.

It greatly saddens me that audio in general is a second-class citizen with regard to tech advancement. It still blows my mind that the Atari STE, with MIDI built onto its board, still beats a brand-new, fully specced, blazing machine for tightness in the MIDI department. We need more development of realtime OSes in the MIDI world.


0.72m/s? I hope you mean 0.72 milliseconds, which sounds about right for 44.1khz and 64 samples.

Anyway, have you actually measured the _true_ latency from when your computer thinks it's sending out a signal and when it comes out of the speakers, and comparing it between a (good) USB interface and your preferred PCIe solution? After all, I have an old Focusrite Scarlett 2i2 gen1 and I too can technically crank the buffer size down to 64 in FL Studio and post about it on the internet...


I can't get 0.72ms from my laptop, for example. And I haven't tested it in any official way - I cannot / have not ever gotten this type of performance on any other solution before this.

I've not tried Thunderbolt before - indeed, it may be a good stand-in for PCI, as the comments below mention.


I did my math again, and a latency of 0.72 ms (0.00072 s) with a buffer size of 64 samples would mean your sample rate is around 88-96 kHz, not 44.1 kHz. Are you sure it's not about 1.45 ms (64 samples at 44.1 kHz), or that the buffer isn't smaller than 64?


Sorry :) Seeing the last comment on this thread reminded me that the lowest buffer size I have available is actually 32 samples! Apologies for the confusion.


I moved over to Thunderbolt plus audio-over-Ethernet. This is where the growth is in the high end (Dante, etc.). I think I have 60 channels of I/O and am thinking of adding another 40.

USB is good enough for a lot of people, though I am not a fan, so that covers the average prosumer.

MIDI is a totally different subject, but I can run MIDI clock from audio, which is as tight as it gets these days. See the USAMO from Expert Sleepers.


The USAMO doesn't help with something like finger drumming/keys - I wish there was a reversed version of it (I believe it was talked about at some point): converting output into audio, decoded in the PC, for tighter timing.


Thunderbolt is the way, the truth, and the light - with M1. I have a PreSonus Quantum 2626 connected to my M1 Pro Mac over Thunderbolt and can work at a 32-sample buffer size in Ableton.


The onboard sound chips became good enough, and for those for whom they weren't good enough the noise reduction bonus of an external DAC was worth it anyway. Computers are generally bad for analog signals within a few inches of the case.

I think another factor is MP3 players and phone audio; people stopped using their computer as the (interface to) media source when other things took that function over for them.


There was a point of "good enough", and then not long after that (2005-ish?) essentially all motherboards started including onboard audio. I would imagine the market for separate audio cards shrank below sustainable at that point. Most everything else in the case (GPUs, RAM) is headed there too; it's just a matter of incremental development on the existing trajectory.


There are lots of complex reasons people are postulating here, but I think it's pretty simple. The CPU can do good enough audio rendering without hardware acceleration. The CPU can't do good enough graphics rendering without hardware acceleration. So video accelerators stayed, and sound accelerators died.

Add-on sound devices still exist, but they are simple because they don't include extensive hardware acceleration anything like what a GPU has. In fact, if you want hardware acceleration for audio processing algorithms today, like really fancy 3D sound propagation or something, GPUs would actually be great at that, and they support digital audio output too.


Yes, I think this is about right. I see a lot of threads focusing on APIs and ignoring another thing that happened right around the late 1990s to early 2000s: MMX introduced SIMD to the PC platform. Suddenly real-time DSP algorithms were feasible for playback and synthesis on the host CPU instead of requiring a peripheral with an embedded coprocessor. This allowed soundcards to be refactored as "just" hardware IO channels, with other signal processing effects happening in the application.

At roughly the same time, there were more peripheral buses like USB and Firewire being introduced, which meant that an add-on peripheral did not need to be an internal ISA/PCI card in order to have sufficient bandwidth for rich audio streams. These external devices could also be built with lower noise/interference compared to the boards inside a computer.

And of course, silicon integration always increased so that the bundled onboard IO chip became good enough for many users. So, add-on peripherals had to move up market or into niche settings. That is a bit like how the iGPU in Intel CPUs got rid of the market for basic VGA/XGA/etc. graphics cards for office machines.
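As a toy illustration of the kind of work that moved onto the host CPU once SIMD made it cheap (a sketch only; it assumes float32 buffers that have already been decoded):

    import numpy as np

    def mix(sources, gains):
        # sum equal-length float32 buffers with per-source gain, then clip;
        # vectorized array math like this is exactly what MMX/SSE made cheap
        out = np.zeros_like(sources[0])
        for buf, gain in zip(sources, gains):
            out += gain * buf
        return np.clip(out, -1.0, 1.0)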


That's definitely missing some reasons. Back in the day, onboard audio was also just terrible. Like, it hissed at all volumes, and had similar just-plain-bad DAC, bad amp, and bad shielding problems. Which soundcards mostly all fixed.

These days, basic "audio correctness" is readily available from onboard audio, though. Motherboards have gotten much, much better at noise-isolating the audio area, and DACs & amps have generally improved.


This, coupled with noise. For those who don't care too much about noise, a cheap onboard DAC is perfectly fine.

For those that care about noise, you don't want the analog audio anywhere inside the case since it's a horribly noisy place. So you get an external DAC.


A couple of reasons:

- During the MS-DOS era, there wasn't really a standard API for sound, so using a cheap, off-brand sound chip (including anything that might be integrated) often meant compatibility problems. Even though it might not necessarily have offered the highest quality sound, Creative's Sound Blaster line was the gold standard for compatibility during this time. Standardized sound APIs have largely eliminated this issue.

- Throughout the '90s, music for games (and a number of other applications) was distributed as MIDI (or MIDI-like) instructions to be generated by a synthesizer, and the quality of the music was very much dependent on the synthesizer used. The Roland Sound Canvas series was the gold standard at the time (in part due to its quality, and in part because that's what the composers themselves used), but it was very expensive and out of reach to the mass market. Software synthesizers were either too slow, or the quality sucked. That gave an opportunity for sound card manufacturers like Creative to offer higher-quality hardware synthesizers on their sound cards than what cheap/integrated cards could do. These days, most audio is PCM, and hardware is perfectly capable of high-quality software sound synthesis, so hardware synthesis has become a non-issue and modern consumer sound hardware doesn't even have hardware synthesis capabilities anymore.

- During the '00s, sound cards began to offer accelerated environmental and positional audio (e.g., Aureal3D, Creative EAX), which games quickly adopted to improve the sense of immersion. However, changes in the Windows audio architecture introduced with Windows Vista broke this functionality without a replacement. Advances in CPU hardware have since allowed this type of processing to be done on the CPU (e.g., XAudio 3D, OpenAL Soft) with acceptable performance.

In the current era, we do have dedicated soundcards, although not in the form of PCIe add-in boards. External DACs (either dedicated USB, or integrated into a display or AV receiver) are popular, as are the DACs used by wireless/USB headphones. Also, there has been some work done to utilize the computational capability of GPUs for real-time audio ray tracing.


On-Board was good enough and cheap enough. It was as simple as that. A lot of the Audio processing moved to the CPU. Dedicated Sound Processor Effect requires Gaming support.

There was also Aureal. Both Creative and Aureal had their own specific API to try and create a similar moat like Glide from 3DFx but failed. And then Realtek took over.

Creative could have competed with onboard Audio as well. But they were too worry about losing their Sound Blaster Revenue, so they somehow diverged into other things like GPU ( 3DLabs ), MP3 players, Speakers, etc etc. And every single one of them failed.

If you are looking for modern audio engineering, you could look at the PS5. But powerful DSP isn't exactly rocket science anymore. A lot of the improvement has to do with software.

Creative used to be the pride of Singapore. It is sad the company was badly managed and never made the leap to the next stage.


AC'97 (1997) was the first blow - this was Intel's improvement on the de facto SB16 interface (and not compatible with it), and it arrived around the time audio started being integrated into motherboards.

This is also around the time it started to be common for pre-built systems to integrate functionality into the motherboard, such as VGA, audio, USB, and in some cases even AGP video all as part of a chipset.

The peak of PC audio probably matches the peak of the "HTPC" wave that happened in the first half of the 2000's - PCs designed to be put under your TV and replace your stereo.

But also, laptops started getting cheaper and more popular as the late 90's turned into the 2000's and beyond - where integration of components was even more valued. Then smartphones started to take over in the 2010's.

The culture is different now. These days, young people don't have stereos anymore; they might at best have a TV soundbar, some really good wireless speakers, or a couple of Bluetooth speakers, and the phone is the centerpiece of the personal audio experience now.

Hi-Fi that's not dedicated to making your car rattle or being blasted at 500W-per-channel volume over a bar/club PA speaker is dead.

Desktop PCs are for businesses which need only good enough audio for business purposes, and gamers who probably want to spend money on a GPU over audio.


Intel came out with AC'97 as a "good enough" onboard solution for audio, with standard drivers and all mainstream capabilities. No MIDI port, no fancy spatial audio, just good-enough stereo out and mic/line-in.

It forced the dedicated soundcard vendors to justify the add-on price by pushing features like multichannel, Surround sound codecs, hardware controls etc, but none of those features were of mainstream interest.

Total sales volume for dedicated soundcards dropped, economies of scale dropped, prices had to increase, pushing the products even further into a niche...


This is correct, but there's one other part: most of the cards used to have built-in MIDI synthesizers, but those became more or less obsolete when storage got past a certain point. Games on CD-ROMs could just ship Red Book-quality audio, and that's infinitely more flexible than canned standard MIDI sounds. CPUs got fast enough that mixing multiple audio channels and even running effects on them wasn't taxing enough to warrant any sort of hardware acceleration, and so the AC'97 standard of just a plain stereo DAC is really all anyone needed at that point.


Yeah, that's part of the "good enough" onboard audio.

But I doubt that gaming was the key driver at that point; it surely was the demand for work PCs to play the new .wav sound effects of Win95, which led to mediocre onboard audio with software MIDI synths. I vividly remember VIA onboard audio everywhere, with gamers still putting Sound Blasters in their PCs (fighting the BIOS to free up IRQs and DMA channels) for the better quality, and games still being developed for the "Sound Blaster 16".

Good times :)


> no fancy spatial audio

That's disappointing. Where can we go if we want spatial audio?

It feels like hardware spatial audio would be a huge boon to games, instead of calculating and modeling sound directionality on the CPU.


Spatial Audio came back later (late 90s) in a standardized way with Microsoft DirectSound.

Before that, most spatial audio was either some vendor-specific soundcard feature a game had to explicitly support, or some psychoacoustic post-processing that added little benefit and was mostly there for the wow-effect in bundled demo apps...

I vaguely remember CreativeLabs pushing a custom 3D Sound API with "EAX" to justify their dedicated HW, with dedicated logo and everything, and several games supporting it.

It worked well for a while but gaming was a much smaller market back then so it probably wasn't sustainable to target a niche of an already small market...

It's the typical story of an industry where serving the mainstream needs funded development of the exceptional, and then someone came along and undercut it by serving only the mainstream needs.


> That's disappointing. Where can we go if we want spatial audio?

You could build a Dolby Atmos PC setup for those handful of games which support it (history repeating).

Or you get yourself a PS5...


USB killed it. Keep your signal digital until you hit the speakers (or a short audio cable). No interference, 44.1kHz from end-to-end (or more).

If you don't like the DAC in the headphones, you can also find a high-quality USB DAC and use the audio cable from there.


> you can also find a high-quality USB DAC

For anyone curious, check out Schiit Audio. I have the Magnius/Modius paired with Sennheiser HD660s headphones and I couldn't be happier. JDS Labs Atom is also a good choice.


> No interference

This has not been my experience at all. For my 5" powered studio monitors, the _only_ way to get an interference-free signal from my desktop computer was with an optical cable to an external DAC.


As others replied, you may have grounding problems, that is, either the lack of grounding or too much of it (ground loop). An effective way to solve the problem is to isolate the signal by putting an audio transformer in between the outputs and the amplifier (or amplified speakers), one for each output. I've done this for desktops and laptops and it brought the noises to zero. Just make sure the transformers are of decent quality and suitable for audio.


That's probably a grounding problem, and the optical cable likely fixed it only incidentally: optical cables don't carry electrical signals, and hence can't be part of ground loops.


Can't go wrong with optical cables for this reason - they're foolproof against interference. That's why I'll always use them when possible.


You're giving me flashbacks to the stupid amount of time it took for me to identify a coil whine issue on my motherboard that gets worse when the CPU is in power saving mode.

I 'worked around' the issue for the longest time by leaving COVID-related folding@home work running, which would force my CPU into turbo mode.

Eventually I found out that if I disabled C-states, it mostly went away.


Hot glue on the coil should shut it up


Can you please tell me what optical cable you used? I've struggled with this issue for years, and if an optical cable somewhere in the chain can fix my problem I'd love to buy one.


There's basically only one kind of optical cable used in consumer audio (S/PDIF over TOSLINK, or however you want to refer to it). The trick is arranging for both ends to use it.


GP probably has a sound card with optical output and speakers with optical input. This is usually called TOSLINK or S/PDIF (the latter may also refer to a coax link, which is wired and so won't help with ground loops).

The fiber is standard and you don't need any fancy Monster fiber costing $100 a foot.


You may have a ground loop and that needs to be fixed


USB

If you're serious about audio you just plug a cable into your breakout box and have your interfaces, converters, and preamps there. Your sound hardware can be anything from pure I/O to an elaborate instrument under computer control. You can do audio synthesis and mixing on the CPU, GPU (not so different from a DSP) or external hardware.

Soundcards are only 'gone' in the sense that PCI cards are less important because many people use laptops and the audio built into motherboards is more than Good Enough for everyday purposes.


Probably going to sound pedantic, but a breakout box is different from an external audio interface or convertor. Breakout box typically only converts from one socket to another socket, like this one: https://www.tascam.eu/en/bo-16dxout

External audio interface (sometimes wrongly called a sound card): https://www.rme-audio.de/fireface-ufx.html

Convertor: https://www.rme-audio.de/m-32-m-16-ad.html

Actually a sound card: https://www.rme-audio.de/hdspe-aio-pro.html which can utilise a breakout cable or external convertor


At least in terms of gaming I think multi-core CPUs killed them. A big argument for sound cards used to be that they'd give you higher quality audio and use less CPU. I can remember benchmarks from the mid 2000s showing less CPU usage with a dedicated sound card vs onboard sound. But by the time you get to the early 2010s anyone building a mid to high end gaming PC was using a quad core CPU, and with 4+ cores it's really hard to care if onboard sound is using a few percent extra CPU.

And while 3d audio is/was a cool concept, most people don't really have a sound system that will really take advantage of it. Even most "serious gamers" that I know use headphones or stereo speakers... now that I think about it I'm pretty sure I'm the only person in my friend group with a 5.1 speaker setup on my PC.


Creative still makes very high quality discrete soundcards; they just suffered from a combination of onboard audio being enough for the average user and wireless headphones not needing the PC's DAC. Also, my father's generation was super into audio, and even middle-class people had lots of disposable income to invest in audiophile gear; today that income bracket is very much gone, so the markets are much smaller for gadgetheads to spend money on stuff like that.


USB-based external DACs took over. They're very good nowadays. Head over to audiosciencereview.com to see lots of DAC reviews, technical info and discussions.


Better than very good. I've just bought a Topping D90SE for my studio. It kills professional studio DACs from ten years ago at a fraction of the cost.


They aren't relevant anymore for the same reason that USB cards, serial cards, network cards and SATA cards aren't relevant anymore. They still have their niche, and in commercial settings you'll still see HBAs (with SATA but mainly SAS), 10G+ network cards, and audio interfaces, but for most personal computers the level of 'better than good enough' was already reached a while ago.

There is no point to using an add-in card if the facility is now on the main board and can do the task to the user's wishes.

The same thing can be said about many previously modular components where more and more is now simply a function of the main board itself. Take all the legacy I/O which used to be various chips, often on various add-in boards. They were all condensed into one single Super IO chip that can do all of it, but at a fraction of the size, cost and energy usage.

A lot of peripherals used to be implemented in separate chips and sometimes even discrete logic. If we were to try to do that today we'd either have to cut 90% of the currently available standard features or make the mainboard ten times as big to be able to implement it the old school way.


Those who need better audio still look for dedicated sound cards, *but* external ones.

For example if you produce music, you probably are good with an external USB audio interface like a Focusrite Scarlett.


This is exactly correct. For most folks, whatever sound the integrated card makes is enough. Until you are producing music, that is - and then, the external has easy connections for the equipment.


Thanks for the recommendation. I've been needing better audio quality from my machine than it can produce, but it's actually pretty hard to figure out how to accomplish that if you don't already know.

Been through three different external audio devices, and they've all sucked. I'll try yours.


I have MOTU, Universal Audio and Audient interfaces, which are all good. But they are generally aimed at musicians/producers etc. I would also recommend RME if you use a Mac, or just the ASIO drivers on Windows.

The drivers are really important, and often audio interfaces aimed at professionals will use a driver that a DAW can utilise with control over latency. macOS users tend not to need to worry about this so much because, as long as the drivers are good, it just works; in DAWs on Windows I use ASIO because MME, WASAPI and WDM have had poor behaviour and/or latency in the past. But for general music listening those other driver types should be OK if the manufacturer supports them well - and this is where the problem might be.

I am guessing it might be the driver that sucks in your experience?

Have you tried S/PDIF output to an external convertor?
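If you want a quick look at how much the host API matters on a given machine, here's a small sketch using the python-sounddevice library (an assumption on my part: it and PortAudio are installed; the numbers are PortAudio's reported defaults, not measured round-trip latency). It lists each output device with its host API (MME, WASAPI, ASIO, etc. on Windows) and the default low/high output latency it reports:

  import sounddevice as sd  # pip install sounddevice (wraps PortAudio)

  hostapis = sd.query_hostapis()
  for dev in sd.query_devices():
      if dev["max_output_channels"] == 0:
          continue  # skip input-only devices
      api = hostapis[dev["hostapi"]]["name"]  # e.g. MME, WASAPI, ASIO, CoreAudio
      print(f"{api:12s} {dev['name'][:40]:40s} "
            f"low={dev['default_low_output_latency'] * 1000:6.1f} ms  "
            f"high={dev['default_high_output_latency'] * 1000:6.1f} ms")

On a typical Windows box the ASIO and WASAPI entries report far lower defaults than MME, which lines up with the experience above.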


There are many other reputable audio interface makers, not only the Focusrite Scarlett line: MOTU, Arturia, SSL, M-Audio, Native Instruments, Behringer, Steinberg, Presonus…

I think in practice all of them are good enough but if you’re into numbers, dB, Hz, and measurements, check out Julian Krause audio interface reviews on YouTube.

https://youtube.com/playlist?list=PLv875tu-z7M4EyBeuofJ1Tehq...


I have an external soundcard but don't use it as the DAC in my laptop is now good enough that I don't hear the difference. Either the laptops have better chips or my ears got worse


Even then people are buying an external audio interface as much or more for latency improvements as opposed to sound improvement.


And for connectivity. Even for the small band I was in, doing "home" recording we had ~10 microphones and 2-3 direct line-ins set up recording simultaneously through an external interface, and out to 5 headphones or two different sets of speakers.


Yes, the latency improvement is very true


Two big things that I think contribute to this:

1) Most people are happy with good enough. To most people's ears, speaker quality makes a bigger difference than audio output, and people already settle there. Furthermore, when iTunes was a big deal it turned out people got accustomed to low bit rates and mediocre equipment and thought it sounded better than the good stuff, because it's how they expected their music to sound.

2) With most computing moving to laptops and then to mobile, people generally don't have a choice about the audio processing technology inside their computer.


Beats probably helped with the first one as well. Around 2006 they started teaching people that muffled sound with exaggerated bass is what music should sound like.


Former MT-32 and CM-64 owner, here!

For what it's worth, if you have a small discretionary budget, I would recommend a "top of the low-end" DAC to anyone who listens to a lot of music. I "did my own research" and concluded that for me, the Topping D10s USB DAC was the correct amount of gadget. It has RCA outputs and supports 384kHz audio, which needs to be enabled in your sound settings.

When I got it set up, it was as if my previously disappointing desk stereo speakers and preamp combo took a great sigh of relief and the sound opened up. Everything sounds more defined, I can hear where the instruments are positioned in the soundstage, and I am now one of those people who appreciates Bandcamp allowing FLAC downloads. For me, this was worth CAD$139.

https://www.amazon.ca/gp/product/B08CVBKHFX/ <- currently "unavailable" but likely easy to locate via search


I appreciate the offer, but I'm not looking for any products. Rather, I'm interested in the historical events that led to the present circumstances.

Also, I'm certainly a fan of your romplers. Now if only I had the discretionary budget to find a couple of those in working order.


Onboard DACs are good enough for HiFi, so there's less need for specialized sound cards today, except for making music, when one needs ultra-low latency (built-in audio is getting a lot better at this too) or multi-track recording. I'll shortly be buying a Tascam US-16x08; it doesn't offer much more in terms of sound fidelity than my older Steinberg CI1, but it can record 16 channels at the same time, which is handy when miking a complete set of instruments (drumset mics + overheads, keyboards, guitars & bass amps, voice, etc... you never have enough inputs), so I can easily keep track during rehearsals and have more freedom during recordings.


I contend that built-in is not very good for HiFi; my Behringer audio interface is audibly better on my headphones because its DAC is better.


Have you seen the DACs they're integrating into the higher end motherboards these days? They're using the same ESS DACs you'll find on high end audiophile grade equipment.


I used to love MIDI files and was always excited to run my whole collection through any new sound card. Of course, the best option was to run MIDI out to my Yamaha keyboard and play everything through that.

It's still fun today, 25-30 years later, to crack open a '90s MIDI in a DAW and route the channel outs through virtual instruments to see how they sound.


They're still around; I use one because the sound chip on my motherboard somehow failed. It lets me use a subwoofer and two sets of speakers I got at Goodwill for like 20 bucks. The speakers I use were $200 speakers back in the '90s, so despite being old they're still good and still work fine. Sound Blaster is still the main soundcard line; granted, mine's old, from 2013, but it works fine on a modern PC. I just checked the site and they made a 2022 version, so there's obviously STILL demand for them, probably because it's hard to get an inexpensive subwoofer unless you buy second-hand, and it's actually cheaper to buy a separate sound card for 45 bucks and used speakers than to buy a modern Bluetooth or USB setup. I saw other, more expensive options but went with the least expensive because I needed sound, and believe me, it's better than the soundbar I used before for music. If you're an audiophile who wants to use your computer as a stereo with older speaker setups, they're still useful. Most prefer headphones now, but personally I actually like using my setup, even if I was kinda forced into it by a sound chip failure.


I think all the innovation moved to the creator/professional stuff. You can buy amazing sound gear that interfaces with the computer, controls stuff in Ableton, etc. For me at least, I've chosen to go with studio monitor type speakers from the professional audio world rather than home hi fi stuff. I suspect others are similar, a small 4 channel mixer with dac and other cool interface stuff is common.


I've tested USB audio adaptors, ranging from cheapies from Amazon, to a PreSonus recording interface. In fact, without resorting to really fancy measurement gear, an audio adaptor is close to the only thing good enough to test an audio adaptor. I've also tested the audio output of phones, built-in PC audio jack, etc.

I use PC audio to test analog audio circuits that I make, so I've made it my business to know the quality of my measurements.

I've also checked out the specs on the chips used in those devices. The delta sigma conversion technique is one of the wonders of the modern world.

The fact is that the audio quality coming out of those devices is stunning, and probably doesn't need to be better. I can see where a recording studio might want to spend more on "overkill" to make the artifacts of their digital interface a non-issue, but for the rest of us, we're living in a golden age of audio.
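For anyone curious what that kind of testing can look like, here's a rough sketch (assuming python-sounddevice is installed and a physical loopback cable runs from the device's output back to its input; a real measurement rig does far more than this): play a sine tone, record it back, and estimate how much of the recorded energy is anything other than the test tone.

  import numpy as np
  import sounddevice as sd  # pip install sounddevice

  fs, f0, dur = 48000, 1000.0, 2.0
  t = np.arange(int(fs * dur)) / fs
  tone = 0.5 * np.sin(2 * np.pi * f0 * t)

  # Play the tone and simultaneously record it through the loopback cable.
  rec = sd.playrec(tone.astype(np.float32), samplerate=fs, channels=1)
  sd.wait()
  x = rec[:, 0][fs // 2:]               # drop the first 0.5 s (latency/settling)

  win = np.hanning(len(x))
  spec = np.abs(np.fft.rfft(x * win)) ** 2
  k0 = int(round(f0 * len(x) / fs))     # FFT bin of the fundamental
  sig = spec[k0 - 3:k0 + 4].sum()       # fundamental (plus a little leakage)
  junk = spec.sum() - sig               # everything else: harmonics + noise
  print(f"rough THD+N estimate: {10 * np.log10(junk / sig):.1f} dB")

Even fairly cheap USB adaptors typically land well below audibility on a test like this, which is the point above: the delta-sigma converters in these parts are remarkably good.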


> on-board sound became good enough

You answered it yourself there. Sound hardware started being integrated into the motherboard and/or southbridge/PCH.

(Although a minor quibble... on-board sound *started existing*. Back in the days of sound cards the only on-board sound would've been a PC Speaker which... well, it can do beeps of various frequencies, but that's about it.)

Old sound cards also had various synthesis and MIDI stuff, instead of playing sampled audio, which is great in theory but... then your audio sounds different on every different piece of hardware. Also, these days CPUs are fast enough to do a lot of the synthesis in software (and have extra cores so you're not stealing cycles from something else). That way, even if you really wanted synthesis, not only do you not need extra hardware, it also sounds the same "everywhere".
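To give a sense of how cheap software synthesis is now (a toy sketch in NumPy, nowhere near a real General MIDI softsynth, just the idea): rendering a note with a couple of sine oscillators and an envelope is a handful of array operations, and rendering a whole track this way is a rounding error for a modern CPU.

  import numpy as np

  def render_note(freq, dur, fs=44100):
      # Toy 2-operator FM voice with a simple exponential-decay envelope.
      t = np.arange(int(fs * dur)) / fs
      env = np.exp(-3.0 * t)                            # percussive decay
      modulator = np.sin(2 * np.pi * freq * 2.0 * t)    # 2:1 modulator ratio
      carrier = np.sin(2 * np.pi * freq * t + 1.5 * env * modulator)
      return 0.3 * env * carrier

  # Render a short arpeggio and mix the voices into one buffer.
  fs = 44100
  notes = [(220.0, 0.0), (261.63, 0.25), (329.63, 0.5), (440.0, 0.75)]
  out = np.zeros(int(fs * 1.5))
  for freq, start in notes:
      voice = render_note(freq, 0.7, fs)
      i = int(start * fs)
      out[i:i + len(voice)] += voice

And because it's all math on the CPU, it sounds bit-identical on every machine, which was never true of the hardware synths.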


I wanna add the death of the synthesizer. There was no way to ship a game with a soundtrack without a hardware synthesizer, due to storage and CPU power. Hardware synthesizers could differ in quality. Listen to what this musician did with under 50 kb using the SNES hardware synthesizer: https://youtu.be/gkCcvoJ09gU

In the 90s, CD audio could handle some soundtrack work but without looping, and it would obviously block disc reads.

However, you now have significant storage and hardware-accelerated compressed-audio decoders, so soundtracks are shipped as compressed waveforms.

All decoders will play them the same way. There is no differentiation.


As an aside, the hardware/driver support for sound cards is terrible on modern Windows.

There's this trusty Creative Audigy Rx that I nabbed from a closing down sale at an electronics retailer. Poetically, both are facing the same fate.

I built a Ryzen 7 machine and installed Windows 10. Whilst installing the CD drivers for the soundcard Windows 10 BSOD'd and rebooted. Not to be deterred, I tried the latest downloadable drivers (marked as Windows 8 compatible) but it is all WDM so surely OK? Not so. Another BSOD.

There was no help to be found online so I very reluctantly gave up. Now it lives in the retail packaging somewhere in my house, the bright and elaborate box promising an audible experience that exists only in my mind.


Might want to try a modded driver: https://danielkawakami.blogspot.com/2020/08/sb-audigy-series...

I gave up on official drivers years ago.


Years ago I purchased a Creative Sound Blaster ZxR.

Windows _never_ worked well with it. After a reboot, the sound card might or might not be detected due to some bug; I think it was related to Plug and Play, but I don't recall.

Until they completely dropped support for it, so then I just flat couldn't use it. And Linux support for it never really came around (maybe now? no idea).

So my policy now is fuck Creative. I paid good money for a top-of-the-line sound card that never worked well, and rather than fixing it they dropped support for the very next version of Windows.

I just use my motherboard audio because I don't have to deal with that shit and if I do ever purchase another separate soundcard it sure as shit won't be Creative.


Creative, that is a name I haven't heard in a while. I'm old enough to remember limos pulling up to the (to be bought by Creative) EMU office in Scotts Valley by the likes of Trent Reznor to play with the latest sampler. So much gear envy back then. It's so cool to live in 'my' future where I can get a used Pigments soft synth for $75 and Ableton lite for $15 on a niche message board of people from around the world and buy from someone anywhere in the world, pay digitally, and in less than an hour have as much music production as $50,000+ would have bought me in the 90s. It's so crazy to live in what to me is the future.


There are still a lot of dedicated audio interfaces for people who are involved in making music. However, most of those people are using Macs because of how much better CoreAudio is compared to ASIO or anything else, and subsequently because most software is developed and tested first and foremost on Macs thanks to network effects. And most Mac users have MacBooks, which means that the majority of commonly used audio interfaces are external ones.

As for people who just listen to music, IMO, built-in audio interfaces or just digital Bluetooth headphones have been good enough for a very long-time in digital-audio conversion.


They're alive and well: https://www.behringer.com/product.html?modelCode=P0BK1

This is the unit I have and I'm quite happy with it. It's on the "low end" of "good". It has a surprisingly low noise floor. It lacks sophisticated routing and switching. Latency is really good at 192 kHz, good enough for live mixing, as low as you have the CPU oomph to handle.

They only go up in price from here, up to thousands of dollars.


The included audio interfaces got to a point where they're good enough for the conventional user. But that aside, having an external interface, like we do now, provides better and more modular options: better latency, quality, DAC converters, additional inputs/outputs, etc. Focusrite, M-Audio, Presonus, MOTU, Behringer, Universal Audio - the list goes on.


There are a lot of mentions how onboard audio killed sound cards and that there are usb audio adapters.

This is right until you want something a little bit out of the ordinary: a USB DAC with 5.1 output that supports hardware decoding of DSD and the like. Most of the options have a price tag in the thousands of dollars. The next best thing is to use an A/V receiver via HDMI.


The answer is simple: Who benefits from a soundcard?

For your average consumer, built-in audio is plenty good. Not much reason to get any extra hardware at all.

For audio-enthusiast consumers who want better than what onboard offers for some reason, an internal card is only useful for desktop computers, whereas external DACs are usable on mobile devices and laptops - and that's what most people use.

For audio producers, an internal soundcard can't physically fit the I/O you need. 1/4" jacks, XLR, that sort of thing, so older professional cards all had breakout boxes. If you're going to have a breakout box anyway, might as well have the whole device be self-contained and plugged in through Firewire (or, more recently, USB or TB). Depending on your setup, you might even have these things rack-mounted.


In the early '90s, if you wanted audio you had to buy a card. Of course, you compared and you wanted the best for your bucks, so instead of the cheapest $30 card you could get a Sound Blaster for $50 with full duplex and a MIDI port; it sounded like a great deal. Never used the MIDI, rarely the mic.

We bought cards with capabilities we never needed, so when motherboards came with integrated audio that allowed us to plug in the headphones, we forgot about audio cards. It's now a product for pros.

I remember buying extra cards for ethernet, for interfacing SATA drives, for USB ports... All of them got integrated in the motherboard. The only card that seems to hold strong is the graphics.


There's still a lot of interest in the retro computing community - keropi and marmes (http://pcmidi.eu) have built a few cards, including what many view as an ultimate retro sound card, the Orpheus, which uses some OG components such as the YMF289B OPL3 and has wavetable daughterboard expandability. While it's new, it's true to the character and spirit of the 1990's cards, with the benefit of modern manufacturing and better quality components than Creative were using at the time. Looking forward to the Orpheus II! https://www.vogons.org/viewtopic.php?f=62&t=88957


Improvements in architecture, bus speeds and external ports mean there's now no need for audio to be handled by an internal card; Thunderbolt or USB is more than adequate. This has moved the focus to "external soundcards", more commonly referred to as audio interfaces.


tl;dr Higher integration and faster CPU's killed them.

In the early days of PCs, nearly every peripheral was provided by an IO expansion card, save for the keyboard. My 386 had a 16-bit multi-IO ISA board that provided the essential ports: ATA, floppy, serial, parallel. You purchased a VGA card and then a sound card. You had at least two or three ISA cards, because your motherboard was taken up by all the CPU, FPU, RAM, and essential control-logic chips. My second 486, a DX2 66MHz, had the ATA, serial, parallel and floppy ports on-board, which amazed me as it eliminated a whole ISA board. (Now everything fits on a single silicon die...)

Early on-board audio was usually a sound card soldered to the motherboard. Then Intel developed AC'97, which integrated standard audio into the south bridge. Coincidentally, that made Microsoft's life easier, as all the PCs they were running on would use this standard, meaning all they had to do was provide AC'97 drivers and everyone with an Intel machine had sound. No more competing third-party audio APIs from Creative et al.; it was all Wintel. PC builders could now provide multimedia PCs at cheaper prices with audio by default. Also, USB happened, which allowed people to plug in things without opening cases and fiddling with circuit boards, which is alien to many.

And as CPU's became faster, the need for dedicated DSP processors to handle audio processing or synthesis is eliminated. You can now run a whole DAW complete with synth, sample playback, effects and mixing in real-time on a cheap general purpose off the shelf computer with no special hardware.


It went from a $30 PCI card with a $10 BOM to a <$1 BOM chip directly on the motherboard.


As stated by others, external sound "cards" (boxes would be a better term) are still prevalent in pro-audio applications, and may have evolved from internal cards not only for the convenience of plug-and-play, but to allow accessing cables more easily than diving behind a PC tower.

There are still "cards" being made for pro-audio users, that embed DSPs for computing plugin algorithms [1]. Not quite the same application, but an interesting parallel.

[1] https://www.avid.com/products/pro-tools-hdx


The last pci soundcard I had was the Razer Barracuda AC-1 [1] which I really wish I had kept now. Paired with the headset it was made for, the Barracuda, it was at the time a pretty amazing piece of hardware for the price. I think software is just eating lots of things...

1. https://www.phoronix.com/review/590

2. https://www.modders-inc.com/razer-barracuda-hp-1-8-channel-g...


I have an Asus STX II dedicated sound card and it is a very substantial improvement over built-in sound cards, to the point I'd never consider having a PC without one. Most people have just never tried anything better and thus live under the illusion that the built-in sound cards are "good enough", rather than the lowest common denominator.

Lack of marketing, maybe? A myth that you need to fork over thousands for a complicated audiophile setup to get very incremental improvements over the baseline? Probably a combination of these is to blame.


Desktop decline. People got used to the sound quality of laptops, smartphones and tablets. There are also AirPods-type headphones for people who want to enjoy their music via laptop, smartphone and tablet.


I think your comment hits the mark. People now listen to their music off their phones (headphone or bluetooth speakers), or on the go in their car. Nobody sits down to a desktop to listen to music.

Don't forget about bluetooth speakers, etc where the source is your phone or laptop.


Like others have said, it's just that people who want that kind of thing don't do cards any more; they do external "audio interfaces".

I have a Scarlett attached to the underside of my desk with a pair of Sennheiser HD600 headphones and a Monoprice Stage Right condenser mic on a desk stand.

I'm guessing the inside-the-PC card space has a bit too much of an EM noise problem for people who care about quality higher than integrated sound can provide (which is pretty good these days anyway), and external devices have room for the inputs people actually want.


Does the Scarlett drive the hd600s nicely for you? (I'm assuming something like the 4i4). I'm in the process of upgrading and was looking at a Scarlett audio interface for creating mixtapes with an external DJ mixer.

I was going to use a different headphone amp with the hd600s and a modmic for general listening and calls though. Sounds like I might be able to drop that 2nd amp (I was thinking about the soundblaster x4)

I guess I'll get the Scarlett first and listen.


I have a first generation 2i2, I believe, and it is a noticeable improvement from trying to plug HD600 into a random audio jack which were obviously not able to actually drive them properly in general.

You should be fine dropping the additional headphone amp. Beyond the Scarlett or a similar audio interface, a separate amp will perhaps sound different depending on the amp, but it won't clearly be making up for a deficiency in the interface's output power.


> and the audio enthusiasts moved on to external DACs

Well, the internal ones still exist [0]. However, with higher bus speeds, external interfaces are more practical: you can connect more devices to them, and you can move them - a lot of music today is done on laptops.

[0] E.g.: https://www.esi-audio.com/products/maya44ex/


With the rise of wireless headphones, the "sound card" is actually within the headphones now, so the sound card in your machine is irrelevant.


The sound quality of those headphones is most of the time terrible.


True, but it's good enough for most purposes. Thus those who value convenience more than great sound quality use them, whereas those looking for sound quality already use external DACs, because built-in sound cards aren't the greatest either.


> and the audio enthusiasts moved on to external DACs and $2000 headphones

While good studio-grade headphones are available for less than $200 (the Beyerdynamic DT990 Pro, for example), I don't see what a soundcard offers that an external DAC doesn't. If anything, most soundcards I find lack some much-needed features compared to most DACs, which probably is the answer to your question.


Multiple things:

Desktop PCs have become a niche, more people have laptops than Desktop PCs, so that already reduces the market for dedicated internal sound cards significantly.

Desktop PC motherboards now come with integrated DACs that are "good enough", and if you care enough that they aren't, you'd have a hard time arguing for an internal solution over an external one anyway.


Because of reasons, I wanted an optical in. So I got a Sound Blaster card. It worked fine, but the software was kinda... bad. For something that was not engaged most of the time, it would sometimes take a lot of CPU according to Task Manager. Not really sure what the issue was, but I ended up disabling the card when I wasn't using it.


The bottleneck with sound is human ears + end device (headphones, speakers).

Ten years ago, even many budget sound cards outperformed the capabilities of an average human ear and average headphones.

There’s not much room left for growth.

As opposed to much more complex and dimensional video data.


Sound cards were a thing because DOS/Windows was a thing.

In the Mac/Amiga/non-DOS world sound was good enough very early. As soon as we said "we need sound," it was there, as long as you weren't on DOS.

The Amiga Video Toaster card was a thing, too. Your Android/iOS/PalmOS device surpassed that long ago.


Human vision is way more information heavy than human hearing. Just how our species is physically.

When Realtek chips became good enough for audio consumers in the '00s, the market for dedicated sound cards no longer existed there. Of course, they still exist for audio producers and professionals.


I have some "audiophile DACs" and they all use either USB or S/PDIF interfaces to the computer. So there's no need for a sound "card" -- it's now a USB or S/PDIF device that plugs into the computer's port.


I was lucky enough to live the fascinating journey from PC speaker to AdLib, SoundBlaster, SoundBlaster AWE64, Roland LAPC-1 (what a f#^n piece of hardware!), Yamaha SW60 and Gravis Ultrasound. Glory days!


I can only dream of a day when dedicated video cards become just as redundant.


> I can only dream of a day when dedicated video cards become just as redundant.

They have been for years as long as you don't need high end 3D or compute capabilities. Intel's iGPUs are plenty for most normal desktop use cases and AMD's are good enough to run most mainstream games as long as you're willing to turn the detail down a bit. Not to mention what Apple and the ARM world in general have done.

Someone who's not looking to play the latest games with the settings cranked can easily go without.

The top end will always want as much horsepower as they can get, so the dedicated video card is never going away unless we figure out some new sort of computational load that takes over the accelerator market in the same sort of way that GPGPU killed standalone audio and physics acceleration.


At this point there are no dedicated video cards, there are throughput-oriented multicore floating point coprocessors that sometimes include video interfaces.


Right, it's difficult to imagine external GPUs becoming obsolete without a radical new computing technology to take their place.

For games it's easy to find useful ways to use 10x the existing GPU power, not even counting an increase in display resolution.

For massively parallel computing, the sky is the limit.


They are for typical use cases, been using intel gfx for ten years or so. Games? No.


They still exist, but they're mostly external. For example I'm using my Parasound preamplifier as a sound "card" right now (it has a super nice Wolfson DAC).


They're commodity items that eventually get folded into the main CPU to reduce costs for manufacturing. This is the majority of the market.


Cost saving? CPUs and new algorithms can do sound with low-single-digit CPU usage, which doesn't justify a CPU-offload card.


I don't know what you are talking about. Audio interfaces never went away; they became external due to USB and FireWire, and Creative stopped making them. Pro audio companies primarily make and sell them now, as they did back then. RME and MOTU are some examples.


AC'97, i.e. onboard audio chips becoming good enough for 99% of people.


One thing not really mentioned in these threads is... software. Every soundcard used to compete by trying to offer better 3D or better effects processing than the others. Game developers had a half-dozen sound SDKs they could write their game audio for (EAX, A3D... there were probably at least half a dozen). Cards competed on features. In the pro segments, literal megabytes of MIDI sample space & DSP coprocessors attracted audiences. Each of these tended to have one of any number of software toolkits... so you needed games or apps to all get onboard too. Many SDKs had software fallbacks, so most users wouldn't even care that they didn't have A3D support or whatever.

The actual advantages of doing things in hardware quickly faded as CPUs became ever more powerful. But just as much, the various APIs didn't really ascend, or their technologies (head-related transfer functions) got swallowed/mainstreamed. Namely DirectSound3D (1996), which grew various hardware offloads (EAX) and eventually became DirectX Audio. The pressure to compete deflated under this mainstreamification... few people wanted to go the extra 100 miles to support bespoke fancy hardware capabilities 0.01% of gamers might have, when they got 90% of the way there using the common denominator.

I don't actually know what folks like gamedevs do now. Both Xbox & PlayStation say they tick a lot of really good 3D audio boxes; they support an array of common 3D audio standards that I don't really know that much about. I'd love to know more about how gamedevs' feature sets & capabilities have evolved over time... most coverage is alas ultra consumer-facing & abstract; getting some familiarity with what is technically possible now vs. then would be fascinating.

I do think Creative has continued doing some refinement on their dedicated cards, but, like, in general, I think the other answer to the question is just that we are damned near perfection. We have really smart folk telling us we're making things worse by having too high a sample rate (>=192kHz, see Monty's https://people.xiph.org/~xiphmont/demo/neil-young.html & his newer excellent audio-myth videos), there's so little noise left to chase out to drive SNR higher, and THD is tiny. Many laptops & gaming motherboards have really, really good audio outputs, which cost the person making the system maybe $4 more for nearly flawless output, and even cheap stuff has gotten quite fine (but boy can engineers still cut corners & create trashfire designs, especially on low-end systems, but it's gotten harder!).

What's kind of exciting in the past half decade is that Intel realized that on-card processing needed effective commodification if it was to survive. Each chip/driver maker making their own bespoke solutions like back in the 90s was going nowhere, & the only effective pushback against forever doing more & more on the CPU was to make the sound hardware more usable, easier to implement consistently & well. To that end, they did the amazing work of making SOF (Sound Open Firmware), which is an implementable reference firmware anyone can use & ship for sound devices. It's a community effort now, to make an orchestrator/controller that implements common driver interfaces & figures out how to use a slate of DSPs effectively to do the job, which under the hood is what soundcards have been. Now everyone can work together to use these DSPs effectively & well, whatever chips with whatever DSPs on them you happen to have. AMD is one other noted user; I forget who else.

https://thesofproject.github.io/latest/introduction/index.ht...


People just use USB sound devices now. Things like Motu and Scarlett.


What does a sound card actually do, besides ADC and DAC?


Aureal had hardware 3D audio which has yet to be matched today.


Onboard sound cards got good


What DSP does the OP-1 use?


If you look at the market for sound cards in 2019, you will notice a rather strange trend. Most people are not buying sound cards, which is why they are in decline, yet at the same time there is a crowd that swears sound cards are still relevant in 2019.

True, you can get some of the best sound cards available on the market if you are willing to spend the money, but one thing most people don't appreciate is that, aside from a handful of niches, sound cards are not being bought by many people. That convinced us to explore whether sound cards are still relevant or not. There could be a lot of reasons why they are being phased out, but is the situation bad enough that sound cards will soon be considered relics of the past? That is what we are going to explore here.

Onboard audio is getting better. Let's be honest: part of the reason sound cards were created in the first place is that onboard audio had a lot of distortion issues, mainly because the components were placed close to each other. For the longest time this was a huge issue that resulted in sub-par audio, and many companies like Creative banked on it and created a long range of sound cards, from cheaper options to ones that cost a lot of money. However, as time progressed, onboard audio only improved, so much so that companies like Asus started shielding the audio components on a separate layer. This technique reduced the distortion by a great deal, and onboard audio started improving a lot, too.

Wireless headphones are taking over. Back in the old days, wireless headphones (or any wireless peripheral) were simply not good enough to match the quality and fidelity of their wired counterparts. Things have changed drastically, to the point that most gamers and general users now prefer wireless. Wireless headphones are convenient, introduce minimal input delay, have great battery life, and, most importantly, come with built-in audio processing.

External DAC/amp combos are now becoming the choice. Another reason sales of sound cards are declining is that people are now going for options like external DAC/amp combos far more than for sound cards. Sure, these combos are expensive, but the performance they deliver is better than some might expect. For starters, a Schiit Magni and Modi combo is going to be good enough to beat pretty much every sound card available on the market. People might still want to invest in a sound card, but when you are getting better overall quality from a DAC/amp combo, you do not really see the reason.

Sound cards are not as versatile. This ties into the previous point. Simply put, sound cards were never versatile to begin with; back then, the needs just weren't as widespread. Internal sound cards can only be used once they are plugged into a PCI Express slot on your motherboard. The DAC and amp combos widely available on the market, by contrast, are plug and play, driver-less, and work on pretty much any device that has the required ports.

Needless to say, sound cards are simply not as versatile, and that gives a lot of people a reason to stop using them and opt for something that actually serves them properly.

Hope this helps.



