Spotify bitrategate: 320kbps premium quality not there yet (spotifyclassical.com)
89 points by kraymer on July 18, 2011 | hide | past | favorite | 79 comments



Solution: Every time a premium member requests a track that doesn't exist in 320, send them the FLAC; the client converts it on the fly while playing it and sends the result back to Spotify. Do this 3x for every track and verify that the MD5 is the same on all three.

Then, the most popular songs will be available in 320, and the ones that aren't would never have been accessed in the first place.
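A sketch of that scheme (all helper names are hypothetical, and note it only works if every client runs a byte-identical encoder, since MD5 compares exact bytes, not audio):

```python
import hashlib

def crowd_transcode(flac_bytes, transcode, copies=3):
    """Accept a client-side transcode only when `copies` independent runs
    produce identical output. `transcode` stands in for a deterministic
    FLAC -> 320 kbps encoder (hypothetical, for illustration)."""
    results = [transcode(flac_bytes) for _ in range(copies)]
    digests = {hashlib.md5(r).hexdigest() for r in results}
    if len(digests) != 1:
        raise ValueError("clients disagree; reject the upload")
    return results[0]

# Toy stand-in encoder, just to exercise the verification logic:
encoded = crowd_transcode(b"flac data", lambda b: b[::-1])
print(encoded)  # b'atad calf'
```

In practice the digests would have to come from different clients, not three runs on one machine, or a malicious client could trivially pass its own check.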


This solves the CPU problem, but I reckon bandwidth is the real issue. This solution actually increases bandwidth usage, and there would need to be a lot more checks than just 3 MD5s to prevent abuse.


Maybe, but this could easily be refined, e.g. by using fingerprinting. Perhaps even a fingerprint of the FLAC data (assuming that fingerprinting is much cheaper than lossy compression).

However, it would be odd if CPU power were the real bottleneck here. I read the article, but can we actually be sure that licensing is not at fault? Or simply the lack of proper management?

Given how much more revenue iTunes sales would bring in from the average Premium user, it's possible that they (the RIAA and others) want to keep the incentive to buy iTunes tracks (or the physical album) alive. Not that Spotify would ever admit this, since it would effectively change their status to 'music preview/demo/shareware' provider.

The quality issue plus the 'disappearing tracks' issue make me think it's all about preserving the incentive to buy. At least, that's the effect it had on me.


Thanks for all the comments.

Personally I don't think licensing is the problem. Why would the labels let Spotify stream the new Paul Oakenfold and Beyonce album as exclusive pre-releases and, at the same time, not allow them to offer HQ streaming for those albums? What's the point? To force the audiophile users to buy 320 kbps mp3s or CDs? Many of the premium users don't even know it's not 320 kbps.


Personally I don't think licensing is the problem. Why would the labels let Spotify stream the new Paul Oakenfold and Beyonce album as exclusive pre-releases and, at the same time, not allow them to offer HQ streaming for those albums? What's the point?

The very same reason they pay some television and radio stations to broadcast particular material: to get people to buy the albums.

It's a combination of things that make Spotify subpar for many music enthusiasts. The lack of availability of lossless streams, incompleteness of the catalog in 320kbps, missing tracks on many albums, uncertainty about future accessibility of music, etc.

When Spotify was introduced in The Netherlands, I absolutely loved it, and was convinced that I'd never need to spend much more than 10 Euros per month on music (imagine what a saving that is when you buy at least 4 albums per month). However, given the reasons listed above, I now mostly use Spotify for music discovery, and still buy albums. It only helped me make more 'accurate' purchases. As a side effect, I ended my Premium subscription, because the 2.5 hours/week, 5 plays per track is plenty for evaluation.


Although it doesn't make much sense, I suspect that licensing is more of a concern than bandwidth or CPU time.


In controlled listening tests, most people have trouble distinguishing 128-160 kbit VBR MP3 from the uncompressed original. 320 kbit is just a waste of bandwidth, but people assume more is better.


In a previous life I was a sound engineer. Under controlled conditions, with my own listening equipment and lossless source files with which I am familiar, I can identify 64kbps vs 128kbps (p = .01), 128kbps vs 192kbps (p = .01), 192kbps vs 256kbps (p = .03), and probably 256kbps vs 320kbps (p = .07), n = 30, LAME=3.something for all tests. If you are in Austin, you can come over and watch me do this in person.

I have no doubt that the general population may be (statistically) unable to distinguish 128kbps vs 256kbps, but that says nothing about a minority of individuals, many of whom are large music purchasers.
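For anyone who wants to run their own trials: p-values like those quoted above are just one-sided binomial tail probabilities under the null hypothesis of guessing (p = 0.5). A minimal calculator (pure stdlib; `math.comb` needs Python 3.8+):

```python
from math import comb

def abx_p_value(trials, correct):
    """P(at least `correct` answers right out of `trials`) when guessing."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# e.g. 21 correct answers in 30 trials:
print(round(abx_p_value(30, 21), 3))  # 0.021
```

With n = 30 you need roughly 21+ correct answers before chance drops below the conventional 0.05 threshold, which is why short ABX sessions are so unconvincing.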


Fellow audio engineer here. Mp3 has this noticeable frequency dropoff at 16kHz, which makes it detectable regardless of the bitrate. Did you test mp4 (or whatever), which does not have that?

I seem to remember scientific studies that basically claimed that 192 kBit mp4 was indistinguishable from uncompressed sound.

That said, I still prefer uncompressed audio for mindful listening. For casual listening, I frankly don't care.


I did a similar test with AAC, which as I understand it (I'm not a compression engineer) doesn't suffer from the same 16kHz problem.

64kbps vs 128kbps (p = .01)

128kbps vs 192kbps (p = .02)

192kbps vs 256kbps (p = .05)

256kbps vs 320kbps (p = .16)

This test taught me that (for my ears), AAC 256kbps is a good all-around codec for my music. You should do your own test (you might hear differently than me, apparently I hear differently than everyone in the Gizmodo and Maximum PC "studies"). But I would be surprised if it was simply a coincidence that Apple chose to standardize on 256kbps AAC, exactly the point where I have serious trouble distinguishing bitrates.


Would you like to explain how you tested this?


The current lame encoder uses a variable cutoff frequency depending on the quality setting. At the recommended "transparent" settings (-V2 or -V3) it uses a polyphase filter with transition band of 18671 Hz - 19205 Hz.


The key phrase here is "my own listening equipment". I have a much easier time discerning encoding quality on my Mackie reference monitors.


Yes, this point should be emphasized.

You should be able to discern most encoding effects on £1k/each speakers and a £500 sound card, if not then you have wasted your money.

OTOH as anyone who has tried to write music cheaply knows, distortions much larger than those typically caused by lossy compression quickly vanish on "normal" listening gear.

Of course, the people who care about encoding artifacts are much much more likely to have an expensive signal chain.


There are a small handful of "golden ears" testers that can ABX samples at much higher bitrates than average. You might be one of them. Most people that think they can do this fail to do so in a real test though.


Not the point, they shouldn't be telling people it's 320 kbits when it's not.


Perhaps not the central point but certainly relevant if people are up in arms about a non-feature.


And yet it is trivial to teach every one of them to identify 128kbit (at least) from uncompressed in just a few short minutes. And it's the kind of thing you pretty much can't un-hear. Those who want better quality have a legitimate gripe, though for business purposes related to your statement it's best to make a higher bitrate stream as a configuration option and not the default.

I think the author is on the wrong track discussing encoding times for 320kbit - it's much more likely that Spotify is interested in keeping down their bandwidth costs.

A major streaming provider that I'm familiar with actually delivered streams that were 15%-20% under the quoted bitrate on many popular tracks for a few years, but only during periods of peak bandwidth consumption. It saved a significant amount of money and afaik was never detected (they no longer do so).


That said, Spotify's baseline is -q5 Ogg Vorbis (true VBR at an average of around 160kbps), not 128k MP3. Huge difference. VBR mode is a huge win for a codec, and Vorbis in general is less "obvious" than MP3. In particular, drums aren't a dead giveaway that the file is compressed, unlike any MP3 <256.


That depends on what you're listening to. For all my progressive/power metal, 128 just sounds terrible and empty. I'm not very well versed in the terminology, so I don't know how better to describe it. On the contrary, 320 sounds full.

My two cents.


People make a lot of those kinds of qualitative statements about sound quality, but when they actually do a rigorous A/B test they usually can't tell the difference.

I've done quite a bit of A/B testing on metal at ~128 kbits and it's very difficult to spot differences on most tracks. Modern lossy audio encoders are very, very good.


Using decent in-ear headphones (I like the Etymotic HF2), listening to Justice, I can definitely without a doubt tell the difference between 128 kbit and +256 kbit. Or more specifically, 128 kbit and lower for specific music makes me feel nauseous.

I'm guessing this type of music simply doesn't compress as well as say... Red Hot Chili Peppers.

I'm not performing a rigorous A/B test, and I can believe that I would fail said A/B test given other conditions (other speakers, other music, etc). I would love for this to be true for all conditions and save all that storage space. Unfortunately, in my personal real-life conditions, better quality does make for a better listening experience.


Are you doing the encoding and performing A/B tests yourself? There are all kinds of things that can hurt audio fidelity a lot worse than bitrate. Some MP3's are poorly encoded by some crappy shareware application. Some are transcoded from an already-lossy source. Some productions will compress better than others (supposedly, some producers actually mix and master with inevitable compression in mind).


My original source was pirated music at 128kbits. Since then, I bought it and have the 320kbit version, which immediately sounded incredibly better. Occasionally I'll hear it at 128kbits or 192kbits on Pandora and such, and the difference is very noticeable to me.

My data is purely anecdotal but I feel strongly about it and would be willing to put money where my mouth is if someone wants to call me on it.


Most tracks is not enough. I don't want to lose time fiddling with optimal quality/size ratio per song. 256kbps AAC, clickety click, fits anything and makes a nice, small rip for zero overhead.


Maybe it's a technology difference, but that's the general feel I have across most tracks I've listened to. I'm not A/B testing specifically, but I was complaining about 128 kbit tracks and asking why they sounded empty, even before I knew about bit rates. Systematic Chaos, on 320 for example, sounds amazing.

I'll do some specific testing before I say anything else. :)


Audio perception is highly subject to the placebo effect. If you're curious, the community at http://www.hydrogenaudio.org/forums/ has worked very hard to make this a more rigorous scientific process.

It's pretty remarkable how good modern lossy encoders are really. I consider it one of the more impressive feats of software engineering of the last decade.


To my [untrained] ears, the difference between lossless and lossy compression on Ravel's Bolero is remarkably stark. The delta between 128/160 and 320 is not as clear as above, but still noticeable.

In sum, I suppose this would depend on the nature of the music being listened to.


Do you notice this difference on a track when you don't know the bitrate? If they know that something is 128, many people will think it sounds worse.


Harmonics probably get cut off early.

At 128kbps (even VBR) I have numerous songs which end up simply distorted and blocky. 160kbps MP3 is a minimum for those. There's simply too much information to pack into certain passages.


Most mp3 encoders do lowpass filter at those bitrates, but only at frequencies outside the hearing range of most people.
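That lowpass is actually measurable: a brick-wall cutoff shows up as a cliff in the spectrum, which is roughly how people spot lossy transcodes masquerading as higher bitrates. A crude sketch (illustrative only; the threshold and single-frame analysis are my simplifications, real detectors average many frames):

```python
import numpy as np

def estimate_cutoff(samples, rate, drop_db=30.0):
    """Estimate the highest frequency whose level is within `drop_db`
    of the median spectrum level -- a naive lowpass-cutoff detector."""
    windowed = samples * np.hanning(len(samples))
    db = 20 * np.log10(np.abs(np.fft.rfft(windowed)) + 1e-12)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    above = np.nonzero(db > np.median(db) - drop_db)[0]
    return freqs[above[-1]] if len(above) else 0.0

# Synthetic check: white noise brick-walled at 16 kHz, mimicking the
# lowpass a typical 128 kbps MP3 encoder applies.
rate = 44100
noise = np.random.default_rng(0).standard_normal(1 << 16)
spec = np.fft.rfft(noise)
spec[np.fft.rfftfreq(len(noise), d=1.0 / rate) > 16000] = 0
filtered = np.fft.irfft(spec)
print(round(estimate_cutoff(filtered, rate)))  # prints ~16000 for this input
```

On real music the passband isn't flat like noise, so a robust version would compare long-term average spectra rather than a single FFT frame.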


> only at frequencies outside the hearing range of most people.

That is the idea, but the reality is that below a certain bitrate, songs begin sounding weak/metallic. That bitrate depends on the listener, the equipment, the song, and the codec.

I cannot give you any nice, objective numbers here since sound quality is heavily subjective. But you cannot discount subjective experience here simply because a study found that x% of the general population cannot discern the difference between 128kbps MP3s and 320kbps MP3s. My own experience is that many songs suffer with 128kbps MP3s, particularly classical. I've used at least 192kbps MP3s since I started storing my music collection on a computer.

Also consider the fact that if bitrate was irrelevant, why are the content providers tending toward higher bitrates? We can safely assume they'd prefer to act in their own interests and keep bandwidth as low as possible.


Did you do a well-controlled blind test? That's, I guess, the relevant question. I don't think you can trust your ears if you know what you are listening to.

I do actually recommend doing just that. It doesn’t matter who can and cannot hear what, what matters is whether you can hear the difference in a blind test. I did just that before I started buying compressed music. (I didn’t try 128kbps MP3s. I consequently don’t know whether I can hear the difference. I tried 256kbps AAC files – those were the ones I was planning on buying – and I most certainly couldn’t hear the difference.)

MP3 certainly is limited, it even has some problems that are inherent to it, not even a higher bitrate can fix those. Short, sharp sounds (think castanets), for example, are a problem.

Because of the way human hearing works, loud sounds mask quiet sounds. MP3 (and other lossy compression algorithms) use this mask to hide noise (the noise that results from compressing the audio). In order to be able to do that the algorithm has to figure out where the mask is and there is, of course, a time dimension to that mask. MP3 can’t have arbitrarily short masks, it is consequently possible that the noise that’s supposed to be hidden under a loud sound spills over to sections where everything is actually quiet. This happens when there is a short loud sound followed by silence. You know, castanets.

No high bitrate can solve that problem (it can only reduce the overall noise that has to be hidden) but newer compression algorithms (like, for example, AAC) are more flexible with their masks and don’t necessarily have the same problem.
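The arithmetic behind that limit is simple (per the MP3 format, a granule is 576 samples and the transient "short blocks" are 192 samples):

```python
rate = 44100          # Hz, CD sample rate
long_window = 576     # samples per granule (MP3 long block)
short_window = 192    # samples per MP3 short block, used on transients

print(f"long block:  {1000 * long_window / rate:.1f} ms")   # 13.1 ms
print(f"short block: {1000 * short_window / rate:.1f} ms")  # 4.4 ms
# Quantization noise is smeared over the whole block, so even with
# short blocks a castanet click can be preceded by ~4 ms of "pre-echo".
```

Since the block grid is fixed by the format, no bitrate setting can shrink that smearing window, which is exactly the point made above.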


Go to http://www.hydrogenaudio.org/forums/ and you can find all the nice objective numbers you want. The reality is that they now do listening tests at <96kbits because the encoders are too good above that threshold. Subjective is no more useful here than it is any other quantifiable, scientific application.

Providers push higher bitrates because customers think they're better and demand them.


There's also the so-called "MP3 effect", where people start to think that compressed crap is better than the lossless option.


I would be interested in what the source for this statement was.

According to recent research done specifically on high school aged students listening preferences conducted at Harman International the opposite is true: http://seanolive.blogspot.com/2010/06/some-new-evidence-that...

"When all 12 trials were tabulated across all listeners, the high school students preferred the lossless CD format over the MP3 version in 67% of the trials (slide 16). The CD format was preferred in 145 of 216 trials (p<0.001)."


From Spotify's perspective, there is a low incentive to offer unpopular songs as 320kbps streams.

If only a small number of peers in the network have the high-bitrate stream (paying users with HQ enabled on desktop clients and an interest in unpopular song X), the benefits of the peer-to-peer networking are less likely to pay off.

However, that does not explain why they don't offer HQ for some of their more prestigious releases, though I wouldn't be surprised if the labels hand them 'shitty' 192kbps mp3s every now and then.

Interesting read about some of their p2p architecture (PDF): http://www.csc.kth.se/~gkreitz/spotify-p2p10/spotify-p2p10.p...

EDIT:

After looking at the spreadsheet, I’d wager that there is a correlation between lower popularity of tracks and being 160 kbps only. The only track out of the last 40 is Michael Jackson's This is It, and a quick look up in the Spotify client gives most of them a very low 'popularity' measure.

I mean, really? http://open.spotify.com/track/7kBDTeWty0z1MXjcH9twph


Spotify uses Vorbis for streaming and Vorbis' 320kbit is not the same as MP3's 320kbit.

In fact, MP3 has quality problems (sample/frequency resolution limit per block) that cannot be fixed at any bitrate. Moreover, re-encoding one lossless format to another (edit: not what article suggests) would further degrade quality. You'd get desired bandwidth, but not the quality.

It's a shame that bandwidth became synonymous with quality and MP3's upper limit is taken as "highest quality". 320kbit (and lossless!) WAVE sounds like a phone line! OTOH it's quite possible that Vorbis at lower bitrate has higher quality than MP3's maximum.


I believe you are referring to lossy formats, as re-encoding one lossless format to another will not degrade quality. Re-encoding the lossless format to 320kbit Vorbis is what he's requesting which will ensure minimum quality degradation while still reducing the file size.


I think the ever-ongoing war between audiophiles and "nobody can tell 128/192/256/whatever kbps lossy files from CD" supporters is irreverent here. I am not suggesting Spotify to give us a better quality that maybe doesn't mean too much for some other users, I am asking them why they didn't deliver the goods they promised more than two years ago.


You probably mean "irrelevant".


Not to start the "320kbps is better" discussion, but with my Unlimited account I've only heard one or two albums which sounded very compressed (spotify:album:07hc4SjPjogLqwBc7dUCiD for example: Alan Parsons). Most of the time the quality is just very good. I think the standard 160kbps is great imho.


That really depends on what you're listening to the music on.

If you want to use Spotify at home and are listening on a nice stereo, then compression artifacts are very obvious in almost everything on Spotify.

For that reason, if I'm not listening to local music I tend to listen to KEXP's uncompressed stream. It's a mere 1.4Mbps.

Which then hits on why Spotify are most likely not serving 320. "It's the bandwidth, stupid".

It's all about the bandwidth. How few people are going to hear the difference, and how much would it cost to implement? The bandwidth costs are definitely non-trivial for their subscriber base, so implementing 320 is going to hit their costs hard.
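The arithmetic behind those figures is straightforward (the listening hours below are made up for illustration; only the bitrates come from the thread):

```python
# CD-quality PCM, as in the KEXP stream mentioned above:
cd_bps = 44100 * 16 * 2           # sample rate * bit depth * channels
print(cd_bps)                     # 1,411,200 bps ~= 1.4 Mbps

# Moving a subscriber from 160 to 320 kbps doubles per-stream bandwidth.
hours = 2                         # hypothetical listening hours per day
extra_bits = (320_000 - 160_000) * hours * 3600
print(extra_bits / 8 / 1e6)       # ~144 MB of extra transfer per listener-day
```

Multiply that per-listener figure by millions of subscribers and the case that bandwidth, not CPU, is the constraint becomes easy to see.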

For the article linked, notice the blog title, Spotify Classical.

Classical music really does show up compression artifacts in a way that Hip Hop, Pop and Rock (and Prog Rock) simply don't. Strings and low bass live in the upper and lower audio ranges, precisely where compression is most aggressive and therefore most noticeable. This isn't going to greatly affect the Beyonces of this world, but it does affect a delicate string recital.

To be honest, if I were Spotify, I'd probably just rip the classical in 320 as that is a specialist crowd who probably can hear the difference and would kick up a stink. And then keep the vast majority in 160 as the vast majority aren't going to notice and wouldn't kick up a stink. If I wanted to be more intelligent, I'd write something to try and detect artifacts, and if a 160 file exhibited above a certain threshold, then I'd make that a contender to be at 320... thus trying to find a sweet spot between quality and bandwidth costs.
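That heuristic could be sketched like this (all names and the 0.4 threshold are made up; `artifact_score` stands in for whatever detector you'd write):

```python
def target_bitrate(genre, artifact_score, threshold=0.4):
    """Hypothetical policy from the comment above: classical always gets
    320 kbps; everything else is upgraded only when the 160 kbps encode
    shows an artifact score above the threshold."""
    if genre == "classical":
        return 320
    return 320 if artifact_score > threshold else 160

print(target_bitrate("classical", 0.0))  # 320
print(target_bitrate("pop", 0.1))        # 160
```

The hard part, of course, is the detector itself, not the policy around it.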


To give a little context: I listen to everything from classical music to noise and use good equipment.

You make it sound like classical music is the only kind of music where quality is important. This is not true. Quality is also important in modern music. Listen to Amon Tobin or Aphex Twin for example.

Besides, Ogg is not the same as MP3. It's not a "kill all low and high" format. Listening tests showed that Ogg is fine for strings. Short attack times are the problem areas at lower bitrates.


FYI Spotify is p2p.


Agreed.

Ogg q5 (approximately 160kbps) is not 160kbps mp3. I can tell 160kbps mp3 from CD very easily, but am unable (headphones, work PC) to tell 160kbps Ogg files from the original CD tracks.

Very unscientific I know, but still...

That said, if they say Ogg q9 it should be Ogg q9.


If you can reliably A/B test properly encoded 160kbit mp3s you have exceptional hearing and you should lend your services to the lame tuning team ASAP. I'll warn you though, these kinds of claims very very rarely hold up to a rigorous test now.


> If you can reliably A/B test properly encoded 160kbit mp3s you have exceptional hearing and you should lend your services to the lame tuning team ASAP.

there's a difference between being able to reliably distinguish them for any kind of track, and occasionally noticing compression artifacts in certain parts of certain tracks.

the first ability would indicate exceptional hearing and material for the Lame tuning team, while the latter probably means you're useless to that tuning team, but you will still get less enjoyment from lower bitrate streams, some of the time.

I'll have to check those hydrogenaudio forums to see what "properly encoded" exactly means, btw. Because if, as you said above, encoders are just too good above 96kbps [for people to tell the difference], those must be some pretty sweet magic presets, cause I haven't been able to get that quality at such bitrates when encoding mp3s or ogg myself. It's not that I don't believe it btw, from what I've seen the hydrogenaudio people know what they're talking about. It's surprising though, that apparently some commandline switches can make such a world of difference.

However, what is relevant to this discussion is not how a 160kbps mp3 sounds when "properly encoded", but how a 160kbps ogg sounds when encoded the way Spotify does it [which may or may not be optimal]. And to determine whether people are making a big deal out of something they ultimately can or cannot hear, they don't need to be able to reliably A/B test everything; it's enough if they can notice the lower bitrate some of the time, for some tracks. Even a little bit is already enough to get that itchy feeling you are not quite getting what you paid for, and that alone can put a big damper on your enjoyment of the entire stream (even the parts that sound just fine!).


Properly encoding an mp3 is just a matter of using a recent version of lame and using one of the -Vn presets. It's dead easy but a lot of people don't seem to know this somehow. Ogg is a little trickier since some of the best tuning tweaks haven't been folded into the mainline code base yet.

There will probably always be "killer samples" that sound bad at any lossy bitrate. If you're really worried about this you're better off just going lossless instead of wasting tons of bits on across-the-board 320 kbit encoding.
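For reference, "properly encoded" in practice is just the following (assuming a recent LAME build; the average bitrates are approximate):

```shell
# VBR presets run -V0 (~245 kbps) down to -V9; smaller n = higher quality.
# -V2 (~190 kbps average) is the usual "transparent for nearly everyone" choice.
lame -V2 input.wav output.mp3
```

No exotic switches needed; the preset tunings encapsulate years of listening-test feedback.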


Well, I can also hear the artifacts of a 160kbps MP3. But I can't when listening to a 192kbps MP3 or a 160kbps Ogg.

I'm not sure Spotify changed it recently but to me it seems clear not all tracks are in 320kbps:

"To be precise, you can stream music at a higher bitrate of up to 320kbps on your computer (not all tracks are currently available in high bitrate)."


Have you done a proper A/B test or is that just your impression from listening? A lot of people go to http://www.hydrogenaudio.org/forums/ claiming they can ABX mp3 at 160kbit. Usually they can't.


Just my impression from listening. Most of the time you can hear it with cymbals. But I agree it's very hard to hear and I just heard it a couple of times. So I think 160kbps is enough to enjoy music.

But (this is going to get a little off-topic) I also think you are not only using your ears to listen to music. People can't hear frequencies below, let's say, 30Hz. But you can feel them. So yeah, maybe vinyl does feel better than an Ogg or MP3 file.


Vinyl is a completely different medium with very different characteristics. Subjectively people might prefer it but it's not because it's higher fidelity in any real sense.


The lower range of hearing is MUCH lower than 30Hz. The low E on a standard 4-string bass is only about 41Hz, and most people can hear at least an octave below that. 10-12Hz is a more accurate cutoff.


12Hz is the lowest humans can hear (under perfect conditions), but as we get older those frequencies become very hard to hear. That's why I thought 30Hz would be a nice figure. But maybe I should have used 20Hz, since it's the lowest frequency on a CD.


lol; touché... but I did say 'very unscientific'.

So are you saying that a properly encoded mp3 can match a properly encoded ogg at a given bit-rate?

Cause in my (again very unscientific) experience mp3 is a lesser codec. Or am I running into a lot of improperly encoded mp3s?


Ogg can often achieve transparency at lower bitrates than mp3, but we're talking <128 kbits here. Above that, and certainly at 160+ very few people can tell the difference, and even then only with careful, focused A/B scrutiny.


"Give us the snakeoil you advertised no matter if it is actually 'better'"


It's not like high bit rate is the only feature of Spotify Premium. It's definitely a good feature but I ordered premium without even knowing about the high bit rate option.


I'm impressed by Spotify. 20 hours of the same sort of unlimited free music as what.cd and waffles.fm. 50 million Spotify users in the first year seems like a not-that-unrealistic prediction for them to make. Perhaps Spotify will motivate the record companies to launch and relentlessly promote their own Hulu for music. I don't think the record companies can react that quickly, though. Maybe a few Swedish entrepreneurs just quietly took over (or became the prime influencers of) the US record industry.

@Daniel Ek: I'll sign up for the $9.95/month plan when you have 95% of your music available at ~192 kb/s OGG. I just want the slightly more bandwidth that ensures I can't tell it apart from CD and it isn't much more bandwidth.

Actually, what I really want is one of those menus where you got to choose your own encoding like allofmp3.com had.

Edit: Actually, these ads are really annoying. I guess I have to sign up. Damn you, compelling product.


Well, I guess it can really only be due to bandwidth reasons. It is a lesser-known fact that Spotify (in the same way as Skype) relies on P2P behind the scenes.

Often the music you listen to is streamed from other users, so bandwidth load is a very crucial factor. Looks like Spotify can't afford a heavier load on their servers, and they're gradually increasing the requirements.

But marketing their whole library as 320kbps is really disappointing. As a premium subscriber, I was under the impression that that's what I was getting.


It should definitely tell you something that he had to check file sizes to determine if the song was 320 kbps.

Spotify knows most users just don't care - so they don't really care either.


So... you are suggesting I write this entire report based on "from what I heard, about 70% of the 100 tracks I auditioned are probably not at 320 kbps"?...


Well, it would have been interesting. There is, of course, quite a bit of controversy over what bitrates a normal person can distinguish. You make it sound like Spotify is being unreasonable, but I think they're being completely reasonable - if people can't hear the difference, why should they devote a large amount of their operational capacity towards making sure all their library is 320kbps?

I'm guessing Spotify would need to receive the 320kbps version from the label for it to be legal/work with their licensing. Can you imagine working with record labels? These are the guys that brought you the RIAA...working with them must be like pulling teeth. And doing all that work for almost no appreciation from your users (who can't tell the difference?).

If the listening experience is the same, your perception of the end product is the same - does it really matter? I mean, bad on Spotify for false advertising and everything, but I wouldn't chalk up their behavior as being a big deal.

I understand that for someone with good taste in music, like yourself, this matters. But for 90% of Spotify's users, I'll bet it doesn't.


Yes I understand; actually I know quite a bit about how dinosaurs like the RIAA/IFPI operate :P

But as I quoted in my article, Spotify staff admitted that they got music in lossless from labels, and have the rights to convert and stream them in HQ.

To be frank, I know how this will end: most users won't care and the story will sink into nowhere. In my own interest I shouldn't even do this: @Spotify has tweeted about my blog about eight times since last year, and I assume that won't happen again after this post. The US launch tripled the traffic to my site; a post like this will only bore the new visitors away. I did this because I believe it's the right thing to do, and I still have faith in Spotify. That's all.


I really don't understand how someone can complain about Spotify. It's a great service with very good sound quality at a decent price.


Yeah, if you're looking for the best quality, maybe streaming services are not your cup of tea.


Or MP3s, for that matter


Spotify streams OGG. While MP3 at 160kbps "sucks", OGG at 160kbps is pretty decent.


MP3 (at least LAME MP3) is actually very good at 160kbits.


Hence, I think, GP's ironic quotes regarding its quality, putting the emphasis on the decentness of OGG.


It is indeed a decent service, and I guess it was worth the money I've paid for it, but there certainly are a bunch of flaws. A lot of my personal music library is missing from it, for example (and yes, you can add local files, but I pretty much only use/used Spotify at work). To be honest, I didn't even remember to enable the HQ option when I had a paid subscription. I also use it on gnu/linux, where the windows client under wine is very sluggish for me (but only at work, not at home), and the native client still has a ways to go.

Anyway, I seem to have gotten quite sidetracked, the point I was trying to make is that at least OP is giving them direct feedback and thus an opportunity to improve. While you think it's a great service, surely you can find some ways to improve it? I could, for example, live with a larger cache on the android app so that I can enter a store under ground without the music stopping when the cell service does...


There's always room for improvement in any service, but it's not really fair to complain that a service costing $10 a month is not suitable for critical classical listening on a stereo that cost $10,000.

The best step IMHO is to push more online music retailers to sell lossless files for critical listening, or maybe Spotify could sell them, presumably for more than $10 a month.


Eh, my stereo equipment didn't cost anywhere near ten grand, but I wager it can still show the difference between 160 and 320 kilobits of compressed music. From what I gathered from the op, though, it's not so much about the practical difference than the fact that he feels he's paying for something he's not actually getting - and that's something that is fair to complain about.


What are you missing in the native linux client?


Well, for example, an ncurses interface would be nice. That way I wouldn't need to use NX or other RDP software if I wanted to use spotify from my jukebox computer.


If you have Premium, you can create your own ncurses interface with libspotify: http://developer.spotify.com/en/libspotify/overview/

Depending on the functionality you need, you can also use DBus with the native client. We implement parts of the Mpris2 specification. If all you need to do is change tracks and play/pause on your jukebox machine that should work with a few simple scripts.
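For instance, the Mpris2 route needs no SDK key at all. Assuming the client registers under the conventional org.mpris.MediaPlayer2.spotify bus name, something like this should toggle playback:

```shell
# Toggle play/pause via the MPRIS2 D-Bus interface (swap PlayPause
# for Next or Previous to change tracks):
dbus-send --print-reply --dest=org.mpris.MediaPlayer2.spotify \
  /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause
```

Bind a couple of these to hotkeys or a cron-style script and a headless jukebox needs no GUI at all.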


Some of the albums I've been listening to (with HQ enabled) have raised red flags: I've recognized compression artifacts. But I dismissed this as deliberate (a weird-sounding microphone, an effect, or whatever), since the HQ box was checked in Spotify settings.

Time to investigate..


what about rdio? Does anyone know?



