New Ultra Fast Lossless Audio Codec (HALAC) (hydrogenaud.io)
233 points by HakanAbbas on Jan 2, 2024 | 181 comments



I love how some announcements of developments still occur in good old-fashioned forums; it really warms my heart to see that. Kudos to the author for the hard work!


Honestly this is one of the best technical audio forums. I always really appreciated how they take their rules seriously, like

> TOS 8. All members that put forth a statement concerning subjective sound quality, must - to the best of their ability - provide objective support for their claims. Acceptable means of support are double blind listening tests (ABX or ABC/HR) demonstrating that the member can discern a difference perceptually, together with a test sample to allow others to reproduce their findings. Graphs, non-blind listening tests, waveform difference comparisons, and so on, are not acceptable means of providing support.


Thank you so much.


There's no source? It's very hard to take this seriously without source code...

He explicitly says it's not SIMD, which is nice because it rules out a way of cheesing it, but still...


Yeah, I was about to click and read when it occurred to me that this is only of possibly theoretical interest, and I'll hear about it later if it comes to matter.

There are some areas that I follow as they develop, but codecs and data compression are ones where I'll just use the tech when it's ready. Still awaiting widespread AV1 support/adoption.

The area most needing improvements IMO is with Bluetooth, especially Apple's support of codecs (where they're dropping support that worked in older macOS versions).


Pretty much any low level tech is at most a theoretical curiosity to me. I'll use it when it works in every browser and OS and average people recognize the file extension. Unusual tech seems to attract unusual bugs!

Still really, really impressive to beat an established standard as an individual; that doesn't happen much.


Thank you for your good thoughts


At this stage, what is taken seriously is quite personal. If source code is a requirement for taking something seriously, then MS Windows, MS Office, Adobe, Autodesk, Oracle, WinRAR and thousands of other wonderful programs should not be taken seriously.

Compilers perform some SIMD optimizations automatically; others can be done manually. These speeds can be obtained without using SIMD, so there may be more to gain with manual SIMD. I think that is what should be appreciated.


> MS Windows, MS Office, Adobe, Autodesk, Oracle, WinRAR and thousands of other wonderful programs should not be taken seriously.

Apples and oranges. I don't need word processing software to be open source to understand how it works. A purportedly novel compression algorithm is a different story...

I can be totally honest with you: FLAC being open source is more valuable to me than any performance benefit you could ever possibly offer over it. It only becomes interesting if I can actually read the code and see what you did.

I am genuinely interested in what you've done here, and I sincerely hope you publish it.


Hmm. Of course, we don't need to know how Oracle is fast and secure, why Autodesk is a monopoly in the industry, why WinRAR is still used a lot despite being paid software, or how Adobe's artificial-intelligence-powered filters work.

I am developing HALAC and HALIC as a hobby, and I don't expect everyone to use them. I'm happy when I can get good results and feel bad when I can't. I say this as someone who has been working on data compression for 9 years.


I think the results are very interesting: just to reiterate, I would love to see what you've done here, and I hope you publish it.

Obviously, it's your right to decide... but especially if you think of it as a hobby, why not release the source? It would make your work much more valuable for a lot more people.


Frankly, if I publish the source now, I won't take care of it anymore, because there will be no excitement. I say this because I know myself very well.

When I bring my work to a certain stage, I would like to hand it over to a team that can take ownership of it. However, I want to see how much I can improve my work alone.


Autodesk can have a monopoly because, well, web browsers don't have to open AutoCAD drawings.

The amount of media playback and serving software out there is innumerable. If most of it doesn't handle some obscure format, that format is screwed.

Getting a new format everywhere is a difficult battle; the adoption barriers are high. Even if the thing is completely royalty free, and comes with a great, open source reference implementation.

Something that is closed, and has no backing of some corporate consortium or ITU type body or whatever, is basically fucked.


You say it's not worth spending time on such work, discovering new things and pondering. I understand.


No reasoning process rooted in reading comprehension can come to the conclusion that I wrote such a thing.


Y'all should really have a look at his lossless image format. It has a spectacular compression:speed ratio. https://encode.su/threads/4025-HALIC-(High-Availability-Loss...

What interests me the most is the memory usage.


Yes, HALIC uses very little memory, because it works on small, independent blocks. Of course, this negatively affects the compression ratio. It is a fact that I compromise on compression ratio to ensure lower memory consumption.


[flagged]


I don’t think it’s valid to exclude open source contributions.

Not everyone supports their political representatives.


Hakan: ignore the open source stuff for now. I speak as someone from within the industry. You may choose not to listen to me and regret it later. If those saying this had done even half the work you have, be sure they wouldn't be talking like this.

It seems there has been a serious effort here, and your work clearly crushes the other codecs. This will cause serious discomfort in some circles. I don't know much about HALIC, but it is said to be very serious work as well.

There's no point in rushing for open source. First of all, improve your work as much as you can. If serious offers come later, you can evaluate them. Don't give up control.




Cool toy and a nice piece for the CV perhaps, but it is difficult to take it seriously if you refuse to offer source code or an implementable specification.

I would give you the benefit of the doubt that it might just be code shyness or perfectionism about something in its early stages, but it looks like the last codec you developed (“HALIC”) is still only available as Windows binaries after a year.

I struggle to see an upside to withholding source code in a world awash with performant open source media codecs.


Maybe it’s just me, but every lossless codec that’s:

1. Not FLAC

2. Not as open-source as FLAC

comes across as a patent play.

FLAC is excellent and widely supported (and where it’s not supported some new at-least-open-enough codec will also not be supported). I have yet to see a compelling argument for lossless audio encoders that are not FLAC.


FLAC’s compression algorithm was pretty much garbage when it came out, and is much worse now compared to the state of the art. Even mp3 + gzip residuals would probably compress better.

FLAC doesn’t support more modern sampling formats (e.g. floating point for mastering), or complex multi channel compression for surround sound formats.

There just isn’t something better (and free) to replace it yet.


> There just isn’t something better (and free) to replace it yet.

Apple's ALAC (Apple Lossless Audio Codec) format is an open-source and patent-free alternative. I believe both ALAC and FLAC support up to 8 channels of audio, which allows them to support 5.1 and 7.1 surround. https://en.wikipedia.org/wiki/Apple_Lossless_Audio_Codec#His...

These are distribution formats, so I'd be surprised if there were demand for floating-point audio support. And in contexts where floating point audio is used, audio size is not really a problem.


When FLAC compresses stereo audio, it does a diff of the left and right channels and compresses that. This often results in a 2x additional compression ratio because the left and right channels are tightly correlated.
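
For anyone curious, the trick is tiny. A minimal sketch of left/side decorrelation (illustrative only, not FLAC's actual code):

    #include <assert.h>
    #include <stdint.h>

    /* Sketch of left/side stereo decorrelation. int32_t holds the side
       channel, which needs one extra bit of range beyond 16-bit input. */
    static void encode_left_side(const int16_t *l, const int16_t *r,
                                 int32_t *left, int32_t *side, int n) {
        for (int i = 0; i < n; i++) {
            left[i] = l[i];
            side[i] = (int32_t)l[i] - r[i];   /* near zero when L and R are correlated */
        }
    }

    static void decode_left_side(const int32_t *left, const int32_t *side,
                                 int16_t *l, int16_t *r, int n) {
        for (int i = 0; i < n; i++) {
            l[i] = (int16_t)left[i];
            r[i] = (int16_t)(left[i] - side[i]);  /* exact reconstruction */
        }
    }

    int main(void) {
        int16_t l[3] = {100, 200, -300}, r[3] = {99, 203, -301}, l2[3], r2[3];
        int32_t left[3], side[3];
        encode_left_side(l, r, left, side, 3);
        decode_left_side(left, side, l2, r2, 3);
        for (int i = 0; i < 3; i++) assert(l[i] == l2[i] && r[i] == r2[i]);
        return 0;
    }

When the two channels are nearly identical, the side signal hovers around zero, which the residual coder handles very cheaply, and since the transform is exactly invertible, nothing is lost.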

Unless things have changed substantially and I missed it, FLAC does not do similar tricks for other multichannel audio modes. Meaning that for surround sound, each channel is independently compressed and it is unable to exploit signal correlation between channels.

Proprietary formats like Dolby on the other hand do support rather intelligent handling of multichannel modes.

FLAC is not solely a distribution format. Indeed, as a distribution format it sucks in a number of ways. It is chiefly used as an archival format, and would in fact be ideal as a mastering format if these deficiencies could be addressed.


In what ways does flac suck for distribution? All the music I download from Bandcamp is in that format, it works great for me.


It could be much smaller, maybe 2-3x better compression. Better support for surround sound / multichannel audio. If an AAC stream were used for the lossy predictive stage, then existing hardware acceleration could be used for energy efficient playback.


How would 2-3x better compression be achievable?

I don't use or desire multichannel audio but that and the hardware acceleration are interesting points.


FLAC uses 1970’s era compression technology for both compression stages (lossy and residual) in order to conservatively avoid patents in the implementation. Just replace the lossy component with AAC, which is now out of patent protection, and replace Rice coding for the residual with the much better (but was still patented in the 90’s) arithmetic coding. Those two changes should get 2-4x performance improvement, as well as hardware accelerated encoding and playback as a free bonus.
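
For context, the Rice coder being criticized here is only a few lines. A sketch of the idea (not FLAC's exact bit conventions; the bit writer below is a toy stand-in that just prints bits):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy bit writer: prints bits, enough to show the code shape. */
    static void write_bit(int bit)            { putchar(bit ? '1' : '0'); }
    static void write_bits(uint32_t v, int n) { for (int i = n - 1; i >= 0; i--) write_bit((v >> i) & 1); }

    /* Rice-encode one prediction residual with parameter k. */
    static void rice_encode(int32_t residual, unsigned k) {
        /* zigzag map signed -> unsigned: 0,-1,1,-2,2,... -> 0,1,2,3,4,... */
        uint32_t u = (residual >= 0) ? ((uint32_t)residual << 1)
                                     : (((uint32_t)(-residual) << 1) - 1);
        uint32_t q = u >> k;                     /* quotient, coded in unary */
        for (uint32_t i = 0; i < q; i++) write_bit(0);
        write_bit(1);                            /* unary terminator */
        write_bits(u & ((1u << k) - 1), (int)k); /* k-bit remainder */
    }

    int main(void) {
        rice_encode(-3, 2);   /* small residuals produce short codes */
        putchar('\n');
        return 0;
    }

An adaptive range/arithmetic coder can model the residual distribution more tightly than a single fixed k per partition, which is where the claimed gains would come from.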

Multichannel audio support is nice because it is often used in distribution of media files sourced from DVD/BluRay. It would be good to have a high quality, free codec for that use.


Thanks. I would love to see a PoC of this making my music files that much smaller.


> FLAC’s compression algorithm was pretty much garbage when it came out, and is much worse now compared to the state of the art. Even mp3 + gzip residuals would probably compress better.

MP3 is a lossy format so I would practically guarantee that you’d end up with a smaller file but that’s not the purpose of FLAC. Lossless encoding makes a file smaller than WAV while still being the same data.

> e.g. floating point for mastering

I’m 0% sold on floating point for mastering. 32-bit, yes, but anyone who’s played a video game can tell you about those flickering textures, and those are caused not by bad floating point calculations but by good floating point calculations (the bad part is putting textures “on top” of each other at the same coordinates). Floating point math is “fast” but not accurate. Why would anyone want that for audio? (Not trying to bash here, I’m genuinely puzzled and would love some knowledgeable insight.)


> MP3 is a lossy format so I would practically guarantee that you’d end up with a smaller file but that’s not the purpose of FLAC. Lossless encoding makes a file smaller than WAV while still being the same data.

You misunderstood what you are replying to. FLAC works by running a lossy compression pass, and then LZ encoding the residual. The better the lossy pass, the less entropy in the residual and the smaller it compresses. FLAC’s lossy compressor pass was shit when it came out, and hasn’t gotten any better.

Flickering textures are caused by truncation and wouldn't be any better with integer math. The same issues apply (and are solved the same way, with explicit biases; flickering shouldn't be a thing in any quality game engine).

Floating point math is largely desired for mastering because compression (an overloaded technical term! Compression here means something totally different than above) results in samples having vastly different dynamic ranges. If rescaled onto the same basis, one would necessarily lose a lot of precision to truncation in intermediate calculations. Using floating point with sufficient precision makes this a non-concern.
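
A toy illustration of that truncation point (made-up gain values, nothing to do with any real mastering chain):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int16_t sample = 12345;

        /* integer pipeline: attenuate by 1/1000, then amplify by 1000 */
        int16_t quiet_i = (int16_t)(sample / 1000);    /* 12: fraction truncated  */
        int16_t loud_i  = (int16_t)(quiet_i * 1000);   /* 12000, not 12345        */

        /* float pipeline: same gains, precision carried through */
        float quiet_f = sample / 1000.0f;              /* 12.345                  */
        float loud_f  = quiet_f * 1000.0f;             /* ~12345.0                */

        printf("int: %d -> %d, float: %d -> %.1f\n", sample, loud_i, sample, loud_f);
        return 0;
    }

The integer pipeline silently loses the low bits at the quiet intermediate stage; the float pipeline carries them through.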


> FLAC works by running a lossy compression pass, and then LZ encoding the residual.

Since when does FLAC run a lossy pass? You can recover the original soundwave from a FLAC file, you can't do the same with an MP3.

I'm pretty sure FLAC does not run a lossy compression pass.

Flickering textures in game engines are likely due to z-fighting, unless you're referring to some other type of flickering.

If you're looking to preserve as much detail as possible from your masters, then floating point makes sense. But it's really overkill.


> The FLAC encoding algorithm consists of multiple stages. In the first stage, the input audio is split into blocks. If the audio contains multiple channels, each channel is encoded separately as a subblock. The encoder then tries to find a good mathematical approximation of the block, either by fitting a simple polynomial, or through general linear predictive coding. A description of the approximation, which is only a few bytes in length, is then written. Finally, the difference between the approximation and the input, called residual, is encoded using Rice coding.

Linear predictor is a form of lossy encoding.


LPC is lossy, but FLAC keeps enough information to reproduce the original data. Therefore it's lossless even though LPC is part of the compression.
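
Concretely, the two stages look something like this: a sketch of an order-2 fixed predictor (FLAC's fixed predictors go up to order 4), not FLAC's actual code. The prediction alone discards information, but keeping the residual makes the round trip exact.

    #include <assert.h>
    #include <stdint.h>

    static void encode_order2(const int32_t *x, int32_t *residual, int n) {
        residual[0] = x[0];                          /* warm-up samples stored verbatim */
        residual[1] = x[1];
        for (int i = 2; i < n; i++) {
            int32_t pred = 2 * x[i - 1] - x[i - 2];  /* linear extrapolation */
            residual[i] = x[i] - pred;               /* small if the signal is smooth */
        }
    }

    static void decode_order2(int32_t *x, const int32_t *residual, int n) {
        x[0] = residual[0];
        x[1] = residual[1];
        for (int i = 2; i < n; i++) {
            int32_t pred = 2 * x[i - 1] - x[i - 2];
            x[i] = pred + residual[i];               /* exact original sample */
        }
    }

    int main(void) {
        int32_t x[5] = {10, 12, 15, 19, 24}, res[5], y[5];
        encode_order2(x, res, 5);
        decode_order2(y, res, 5);
        for (int i = 0; i < 5; i++) assert(x[i] == y[i]);
        return 0;
    }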


Yes exactly. What you’re saying lines up with what I’ve learned through experience.

> If you're looking to preserve as much detail as possible from your masters, then floating point makes sense.

I’ve been searching for hours and gotten nothing more than the classic floats vs ints handwaving. Can you explain what you know about why using floats preserves detail?


Do you actually have experience writing a FLAC encoder/decoder? I do. Go read the format specification. There is a lossy compression pass, then it uses a general compressor on the residual after you subtract out the lossy signal. The two combined allow you to reconstruct the original signal losslessly.


what do you suggest instead?


I suggest that people who care enough about these things (not me, I’m just informed about it), come together and make a new lossless encoder format that has feature parity with the proprietary/“professional” codecs.


What codec are you suggesting is better, and how much better is it? Unless encoders have wildly improved, Apple's ALAC is not better than FLAC. APE and WavPack seem to do a bit better, but not by much.


Support for >8 channels led me to use WavPack instead of FLAC.


What's the use case?


You are right about this. But there are things I should add to HALIC and HALAC. When I complete them and see that they will really be used by someone, they will of course be open source.


One of the cool things about open source is that other people can do that for you! I've released a few bits of (rarely-used) software to open-source and been pleasantly surprised when people contribute. It helps to have a visible todo list so that new contributors know what to aim for.

By the way, there will always be things to add! That feeling should not stop you from putting the source out there - you will still own it (you can license the code any way you like!) and you can choose what contributions make it in to your source.

From the encode.su thread and now the HA thread, you've clearly gotten people excited, and I think that by itself means that people will be eager to try these out. Lossless codecs have a fairly low barrier for entry: you can use them without worrying about data loss by verifying that the decoder returns the original data, then just toss the originals and keep the matching decoder. So, it should be easy to get people started using the technology.

Open-sourcing your projects could lead to some really interesting applications: for example, delivering lossless images on the internet is a very common need, and a WASM build of your decoder could serve as a very convenient way to serve HALIC images to web browsers directly. Some sites are already using formats like BPG in this way.


> One of the cool things about open source is that other people can do that for you!

This is a very valid point, but we should all recognise that some people⁰ explicitly don't want that for various reasons, at least not until they've got the project to a certain point in their own plans. Even some who have released other projects already prefer to keep their new toy more to themselves and only want more open discourse once they are satisfied their core itch is sufficiently scratched. Open source is usually a great answer/solution, but it is not always the best one for some people/projects.

Even once open, “open source not open contribution”¹ seems to be becoming more popular as a stated position² for projects, sometimes for much the same reasons, sometimes for (future) licensing control, sometimes both.

--

[0] I'm talking about individual people specifically here, not groups, especially not commercial entities: the reasons for staying closed initially/forever can be very different away from an individual's passion project.

[1] “you are free to do what you want, but I/we want to keep my/our primary fork fully ours”.

[2] It has been the de facto position for many projects since long before this phrase was coined.


> I/we want to keep my/our primary fork fully ours

The "primary" fork is the one that the community decides it to be, not what the author "wants". Does it really matter what the "primary fork" is for those working on something to "scratch their own itch"?


Hence I said my/our primary fork, not the primary fork.

If I were in the position of releasing something⁰: the community, should one or more coalesce around a work, can do/say what it likes, but my primary fork is what I say it is¹. It might be theirs, it might not. I might consider myself part of that community, or not.

It should be noted that the possibility of “the community” or some other individual/team/etc taking a “we are the captain now” position (rather than “this is great, look what we've done with it too”, which I would consider much more healthy and friendly) is what puts some people off opening their toy projects, either at all or just until they have them at a point they are happy with or happy letting go of.

> Does it really matter what is the "primary fork" for those working on something to "scratch their own itch"?

It may do further down the line, if something bigger than just the scratching comes from the idea, or if the creator is particularly concerned about acknowledgement of their position as the originator².

--

[0] I'm not, ATM. I have many ideas/plans, some of which I've mused over for many years, but I'm lacking in time/organisation/etc!

[1] That sounds a lot more combative than I intend, but trying to reword just makes it too long-winded/vague/weird/other

[2] I wouldn't be, but I imagine others would. Feelings on such matters vary widely, and rightly so.


I don’t get it. What the community does has no bearing on your fork, so why do you care? You can open source it and just not accept patches. Community development will end up happening somewhere else, but who cares?


> I don’t get it.

Don't worry. You don't have to.

If you want a more specific response than that, what part of the post do you not get?


Whatever position you are trying to argue seems so antithetical to Free Software that I'd say those sharing this view are completely missing the point of openness and would be better off keeping all their work closed instead.

> other individual/team/etc taking a “we are the captain now” position rather than “this is great, look what we've done with it too”

The scenario is that someone opens up a project but says "I am not going to take any external contribution". Then someone else finds it interesting, forks it, that fork starts receiving attention, and the original developer thinks they are entitled to control the direction of the fork? Is this really about "scratching your own itch" or is this some thinly-veiled control issue?

I'm sorry, after you open it up you can't have it both ways. Either it is open and other people are free to do whatever they want with it, or you say "it's mine!" and people will have to respect whatever conditions you impose to access/use/modify it.

> if the creator is particularly concerned about acknowledgement of their position as the originator.

That is what copyright is for, and the patent system is for those who worry about being rewarded for their initial idea and creation.

If one is keeping their work to themselves out of fear of losing "recognition", they should look into the guarantees and rights given by their legal systems, because "feelings on this matter" are not going to save them from anything.


> Is this really about "scratching your own itch" or is this some thinly-veiled control issue?

I wasn't attempting to veil it at all. It is a control issue for some.

Sometimes someone is happy to share their project, but wants to keep some hold on the core direction.

> > other individual/team/etc taking a “we are the captain now” position rather than “this is great, look what we've done with it too”

> The scenario is that someone opens up a project but says "I am not going to take any external contribution". Then someone else finds it interesting, forks it, that fork starts receiving attention, and the original developer thinks they are entitled to control the direction of the fork?

You are missing a step. I said that if someone has this concern then they might not open the project at all, until they feel ready to let go a bit. At that point “open source but not open contribution” and control over forks are not issues at all because the source isn't open and forking isn't possible.

> That is what copyright is for and the patent system are for

I don't know about you, but playing in those minefields is not at all attractive to me, and I expect many feel the same. If I had those concerns and legal redress were the solution, I would now have two problems, and the new one is a particularly complex beast; it would be much easier to just not open up.


> I wasn't attempting to veil it at all. It is a control issue.

Then do not hide it behind the "people just want to scratch their own itch". It is a bad rationalization for a much deeper issue and the way to overcome this is by bringing awareness to it, not by finding excuses.

> wants to keep some hold on the core direction.

You are really losing me here. The point from the beginning is that the idea of "direction" is relative to a certain frame of reference. There is no "core" direction when things are open. The very idea of "fork" should be a hint that it is okay to have people taking a project in different directions.

> it would be much easier to just not open up.

Agreed. But like I said: you can't have it both ways. If you want to "keep control" and prevent others from taking things in a different direction, then keep it closed, but be honest with yourself and others and don't say things like "it's not ready to be open yet" or "I want to share it with others but I worry about losing recognition".


> Then do not hide it behind the "people just want to scratch their own itch"

You seem to be latching on to individual sentences in individual posts rather than understanding the thread from my initial post downwards. Start from the top and see if that changes.

Right from the beginning I was talking about people not releasing source for this reason, not releasing with expectations of control – while quoting more of the preceding thread might have made that sentence look less like the attempt to hide that you see, it would bulk out the thread unnecessarily IMO (and I'm already being too wordy), given that the context is already readily available nearby (as the thread is hardly a long one).

> > it would be much easier to just not open up.

> Agreed. But like I said: you can't have it both ways. If you want to "keep control" and prevent …

No, but the other end of the equation often wants the source irrespective of the project creator not being ready to let go of fuller control just yet (for whatever reason, including wanting to get to a certain point their way to stamp their intended direction on it). And they will nag, and the author will either spend time replying to re-explain their (possibly already well documented) position or get a reputation for not listening which might harm them later.


I don't want to keep this conversation going in circles, but to me it seems like you are trying to explain a behavior (some people do not want to release source before conditions X, Y and Z are met) and I am arguing that the behavior itself is antithetical to FOSS.

From the top of the thread: "it is difficult to take it seriously if you refuse to offer source code or an implementable specification." If OP has reservations about building it in the open, I'd rather hear "I am not going to open it because I want to keep full control over it" than some vague "I will open it after I complete some other stuff".

You mention the concern about "getting a reputation for not listening". To me, this has already happened. The moment I saw "when I realize it can be used by someone, it will of course be open source", I started doubting his ability to collaborate, put him in the "does not understand how FOSS works" box, and completely lost interest in the project.


> Frankly, if I publish the source now, I won't take care of it anymore, because there will be no excitement. I say this because I know myself very well.

> When I bring my work to a certain stage, I would like to hand it over to a team that can take ownership of it. However, I want to see how much I can improve my work alone.


Sorry, this is exactly why it seems that you don't understand FOSS.

1) Publishing the code does not mean that it is done. Software development is a continuous effort.

2) There is no "handing it over to a team that can take ownership of it". When (if?) you release your code, these are the possible outcomes:

- The worst case scenario: someone will find an issue in your design and point to a better alternative, and you will be left alone with your project.

- The best case scenario: your work brings some fundamental breakthrough and you will have to spend a good amount of time with people trying to figure it out or asking for assistance on how to make changes or improvements for their use case.

- The most likely scenario: your work will get its 15 minutes of fame, people will take a look at it, maybe star it on GitHub, and then completely leave it up to you to keep working on the project until it satisfies their needs.

Like "everythingctl" said, you will see that few people will take you seriously until you actually show source code or a reproducible specification. But you will also see that this is a "necessary but not sufficient condition" for being taken seriously. And while I completely understand the fear of putting yourself out there and the possibility of having your work scrutinized and criticized for things you know need improvement, I think this mentality is incompatible with the ethos of Open Source development, and I wish more people would help you overcome this fear rather than try to excuse or defend it.


> When I complete them and see that they will really be used by someone, they will of course be open source

There is a chicken and egg problem with this strategy: Few people will want to, or even be able to, use this unless it’s open source and freely licensed.

The alternatives are mature, open or mostly open, and widely implemented. Minor improvements in speed aren’t enough to get everyone to put up with any difficulties in getting it to work. The only way to get adoption would be to make it completely open and as easy as possible to integrate everywhere.

It’s a cool project, but the reality is that keeping it closed until it’s “completed” will prevent adoption.


Hakan: if you are going to go open source just do it now. You have nothing to gain and much to lose by keeping it closed.


Maybe he is just waiting for the right investor who has a purpose for the codec, so he can recoup his time investment.

Making it open source now would just ruin that leverage.

I am with you, OP.


Looking at history, it seems trying to build a business model around a codec doesn't tend to work out very well. It's not clear what the investor would be investing in. It's a better horse.


When I bring my work to a certain stage, I would like to hand it over to a team that can take ownership of it. However, I want to see how much I can improve my work alone.


Being open source doesn't mean you have to accept contributions from other people.


When you do decide to open the codec, you should talk to xiph.org about patent defensibility. If you want it open but don't build a large enough moat (multichannel, other bitrates, bit depth, echo and phase control, etc.), then a licensing org will offensively patent or extend your creation.


Thanks for the information about licensing and patents. HALAC can work with any bit rate. However, support for more than 2 channels, and for 24/32-bit audio in addition to 16-bit, can still be added.


I understand the forward compatibility concern, but have you considered putting an attention-grabbing alert in the encoder clearly stating that future official releases won't be able to decompress the output? Also, your concern may be overblown; there have been more than 100 PAQ versions with mutually incompatible formats, but such issues didn't happen too often. (Not to say that ZPAQ was pointless, of course.)


You may be trying to kill all criticism; this is not possible. Not everyone will like you and not everyone will like your code. Fortunately, people IRL who have personal differences tend to be a bit more tactful than the software crowd can be online, but something like this is bound to get overwhelming amounts of love.

No great project started out great and the best open source projects got to their state because of the open sourcing.

Consider that the problems you might be spending a lot of time solving might be someone else's trivial issue, so unless this is an enjoyable academic exercise for you (which I fully support), why suffer?


I'm not trying to kill criticism. I'm just trying to do what I love as an (academic) hobby.

Or maybe it would be better for me to take up hobbies like fishing or swimming instead.


Don't let perfect be the enemy of good. If Linus didn't open source Linux until it was "complete", it wouldn't be anywhere near as popular as it is.


Thank you for your valuable thoughts.


You could open it now and crowd-source the missing pieces. I really see nothing to lose by making this oss-first.


Sounds like some words in Filipino:

Halic = kiss; Halac = raspy voice


You got that backwards buddy. Nobody will use them so long as they remain closed source like this.


Maybe they want to sell it to a streaming service or something.


This. It's almost ragebait posting this: "I'm better but I won't show you."


For what looks at first glance to be a potentially impactful development, this post and the "encode.su" one linked from it are extremely sparse on details.

Where is the source code? A detailed description of what the codec actually does? References to relevant publications?

All I see are two mystery Windows binaries, hosted on a forum I've never heard about. The fact that "encode.su" uses the world's most notorious domain extension doesn't inspire confidence, to put it mildly.


> The fact that "encode.su" uses the world's most notorious domain extension doesn't inspire confidence, to put it mildly.

Encode.su (formerly encode.ru) is indeed the best-known forum dedicated to data compression in general. So much so that many if not most notable data compression projects posted to HN were first announced on that forum (for example, Zstandard [1]).

[1] https://encode.su/threads/2119-Zstandard


> The fact that "encode.su" uses the world's most notorious domain extension doesn't inspire confidence, to put it mildly.

The leaders in data compression and information theory are all from the former Soviet Union, so that's no cause for concern.

I think Andrey Markov started it all.


Hydrogenaudio is well known in this area and many new prototype codecs are announced there first. Also, the lack of source control and the Windows-only binaries are very much congruent with the style of development there. See it as your first encounter with a new world, because small it is not!

And later, you will learn to understand the depth of the contribution that the ffmpeg project provides :)


So people just download .exe files that they see in those forum posts and run them on their machines?

New world indeed...


That's an old world, for me. It's how the Windows software ecosystem worked and works to this day.


My goal is just to try to do something practical in data compression. I have neither knowledge of nor experience with the malicious software mentioned. You can be comfortable about this. https://www.linkedin.com/in/hakan-abbas-178b5852/


I was referring to the practice of downloading executables from such forums in general, not to your post specifically. I have no reason to suspect that this particular post contains any malware. But in most parts of the software world, Open Source has been the standard for experimental publications for a long, long time, and seeing a forum depart so strongly from that standard is certainly surprising, and does of course have security implications when this practice is used at scale.


You can always use a sandbox if that's the main concern. A bigger issue is that, as with other established forums, many people don't know much about data compression and contribute to the noise.


There are open source developers out there who literally write installation instructions like these in their READMEs:

  curl http://example.com/script.sh | bash


[flagged]


Hey, could you please edit out swipes and/or name-calling from your HN posts? This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

Your comment would be just fine without the (first clause of the) second sentence.


Yeah, I too was using the web in the late 90s. I remember countless programming and "hacking" forums where .exes were shared, and I downloaded and ran anything and everything that had a download link.

It was a fun time, and I do harbor some nostalgia for it. But I can't imagine going back.


it was okay back then though, because the cracked copy of NOD32 on your system would totally protect you =)


Corporate? That's not a sound way to describe the totality of the world of programmers that lean towards free/open and beyond.

Sure, closed-source communities exist and have for a long age, but many folk have grown beyond the ethos or tradition of closed releases for reasons like, IDK, competitive individualism for clout or potential code quality shame that AFAICT drive such corners of the software world, especially at the level of freeware, not just for business. It's certainly a new world for younger people who skipped the era when that was more prevalent.

If people don't like corporate, there are newer source available licence options that folk to the left of free/open have been advocating more recently.


Tell us more about "I don't see anything suspicious". How exactly do you know it's not a binary that encrypts all your files with a key and asks for BTC to revert it?


Open it in a hex/text editor, scroll through and look for anything suspicious like network or crypto APIs, obfuscated sections (major red flag), strange strings, etc. The #1 most reliable sign of malware is if it's unusually large and packed/obfuscated, but this isn't.

The guy even has his full name and contact info in there.

This is harmless.

If you don't trust me you could upload to an online malware multiscanner (which tends to invite false positives, but better than nothing).


It's not about whether this particular announcement, with these particular executables, is trustworthy or not.

It's about the whole process of regularly downloading and running executables uploaded by individuals to a BBS-type forum being unimaginable in most other parts of the software world, and violating every security "best practice" written about in the past 30 years.

I know that this is how things were once done everywhere. But that was a long time ago.


Are we even in the same universe?

The vast majority of the world still downloads and runs executables uploaded by individuals, albeit perhaps not on a bulletin board or forum (most of those have been killed and replaced by social media).


This argument comes up reasonably regularly.

No, the majority of the world does not download and run binaries from non-reputable sources.

The distinction between reputable and non-reputable varies, but broadly, easily spoofable user-uploaded content falls on the non-reputable side.

Most people download software from trustworthy websites like the official Chrome website.

Indeed, the fact that people are continually scammed by this sort of attack is why Apple now refuses to run unsigned binaries by default.

To pretend nothing is wrong here is like pretending JavaScript supply chain attacks don’t exist because you don’t want them to exist.

…and yet. They do exist; wanting it not to be true does not make it so.

Likewise, downloading and running arbitrary binaries from a forum is naive.

You simply want nothing bad to happen.

That does not mean nothing bad will actually happen.

Even if you trust the authors of the posts, how reputable is the forum itself? Are the binary hashes posted? (No, they aren’t).

> I'm new in this forum

^ does not inspire confidence.


Yes, I'm new on hydrogenaud.io. However, I have been active on "encode.su" since 2018.

This year, the "3rd Global Data Compression Competition" (gdcc.tech), organized by Huawei and the Autonomous University of Barcelona, was held. In this competition, I took 3rd place in the world in the "Professional Task 6 - Ultra Fast" category (JABBAR). And I spent only 2 weeks of the 5-month competition process on this result.

We can only share and test such a specific work in specific environments.


You are simply toeing the line of corporate propaganda that says people must always submit to centralised authority instead of exercising their own judgement.

That is what is leading us to dystopia.

We are not "pretending", we are simply stating that the magnitude of risk is absolutely tiny.

Insecurity is freedom. Don't let them take away the latter in the name of security.

"There is nothing to fear but fear itself."


How is sharing source "submitting to a centralized authority"?

I don't run any binaries if I can help it.


you think the risk is tiny, but:

A) the risk exists.

B) you’ve taken no steps to verify that it’s tiny

C) you’re trusting new users just as much as well established users

D) your community is not as obscure and tiny as you imagine when it floats to the top of HN.

It’s not corporate dictatorship to say “there are bad actors out there looking to take advantage of naive users”; it’s reality.

You can refuse to acknowledge that reality, that’s your choice.

However, it’s probably irresponsible to encourage other people to do so.


take the L


A lot of malware just waits for a while and then opens another file (or a pastebin) and downloads the payload from somewhere else. A small executable without anything dodgy in it means nothing.


> without anything dodgy

I said there weren't any network APIs either (whose presence in an application like this would definitely be a red flag.)

If you say their presence can be obfuscated, then let it be known that obfuscation is also very obvious in a binary and another red flag.


FWIW: I support your position and wish that there was more trust on the internet. I’m happy that some of these old-school forums still have that level of trust.

But, from a technical perspective, I think it’s naive to assume that you can easily spot obfuscation that’s trying to stay hidden. If I understood your analysis model (open in a hex viewer and scroll around), then it is quite trivial to just add a few normal-looking functions that happen to do things like manually load socket DLLs and make network requests without the API names being visible.

I could even, say, hide the code or data in a table of opaque filter constants or lookup tables, and it wouldn’t have to be much: you can implement a very dumb PE parser and function loader in a couple dozen lines of C, and an IP address target is just 4 bytes which can be smuggled into anything. Open up a socket, read everything from it into an RWX buffer, jump to it and voila, a programmable backdoor. Make the trigger something random so dynamic analysis doesn’t find it immediately.

The Underhanded C Code Contest demonstrates that even with source you can hide malicious behaviour; how are you going to detect malicious behaviour in a binary that’s trying to evade manual detection?


It is still possible that the author's machine had a virus and the executable got infected without the author's knowledge. I too trust the author in that matter, but that's irrelevant here.


That's precisely why you look at the binary and not the source...


There are libraries that would be useful for cryptography that you wouldn’t likely need in an audio codec. If the binary imports those libraries, it may be visible with a bit of prodding.


Unless they are statically linked.

Or the binary uses executable compression.

Or obfuscated dynamic loading.

Or about a million other techniques that can thwart dependency analysis, and which have been well-known for decades.


And the presence of those things is basically the first thing any malware heuristic looks at. Why are you so emphatically stating them as if they are news?


i think they were just examples of how simply looking at imports isn't good enough, and it's true. on the plus side, by hitting HN there are more eyes on it and hopefully more consensus on how safe/interesting this is


Also, what does the High Availability in the name of the codec refer to?

I've searched the web and found not a single mention of "high availability" in the context of audio codecs. In fact, the top-ranked result was this very post, which doesn't explain what the term is intended to mean.


Sounds like a backronym for the author’s initials :-)


Great detection :)


I wasn't really careful with the naming. I just tried to make it a little different. I can't say that I'm good at nomenclature.


If you need a backronym, how about "highly astute ..." or "highly apt ..."?


I will evaluate them.


It uses fewer resources, so the resources retain higher availability?


HALAC and HALIC really consume very little memory. At the same time, processing speed is high, and they offer a reasonable compression ratio. Therefore, they can be used at a high level and in different areas.


Really? Tearing down the OP because of the TLD the forum uses? That's just lame.


> ... uses the world's most notorious domain extension doesn't inspire confidence, to put it mildly.

What does this mean? Why or how is this TLD the world’s most notorious?


> The .su TLD is known for usage by cybercriminals.[4][5][6]

* https://en.wikipedia.org/wiki/.su

I would think that ICANN/whoever would have mandated its retirement / de-orbit, but a special exception was asked for:

* https://en.wikipedia.org/wiki/ISO_3166-1_alpha-2#Exceptional...


All three references are 10 years old (or only have a secondary reference that is 10 years old). More recent analyses give a much more diverse set of TLDs---.ga, .tk, .us, .ru, .ml, .pw and so on [1]. In fact, at this point we should be much more concerned about .us than .su.

[1] https://www.interisle.net/PhishingLandscape2023.pdf#page=18


I'd like to test this but mum said not to download unknown .exe files from the internet.


It’s not like the usual pipe-curl-to-bash installation instructions are much better.


And if the author of your parent comment saw a random forum user ask people to curl | bash from some random .su domain I'm sure they'd have no aversion to that! Great argument!


Doesn't Windows come with a sandbox built-in nowadays?


No


It does so long as you have Pro or higher: https://learn.microsoft.com/en-us/windows/security/applicati...

It's actually decently handy. Just keep in mind no sandbox is perfect.


Honestly, if they provided 100 MB of source code, would you read it and then compile it? Source code alone doesn't make it secure.


Something like this doesn't require 100 MB of source code. I'd expect a few thousand LoC at most.

And I absolutely do at least a quick visual "sanity check" of the code before compiling and running newly announced software.


You can do a sanity check on exe files with VirusTotal and other tools. And if it’s just for testing, you can use a throw-away VM.


Irrelevant. Convenient or probable doesn't matter. What matters is possible vs not possible.

All it takes is one person somewhere who wants to look something over, and they heads-up the rest, and then many others do verify.

And that initial one does exist even though it's not you or me, the same way the author exists, the same way that at least once in a while for some things it is you or me.


I am happy to hear everyone has plenty of free time to check all random internet open-source projects.


Citation needed. No one said that, and so if you heard it, then you should see a therapist about your hallucinations.


It is a new year, maybe you should try to be happy and calm this year.


And yet still no one said that.


This doesn’t seem very interesting: there’s no source, the benchmarks are self reported, and the details in the forum post (??) are light at best.

It’s basically just a forum post with a self-signed .exe download…


In this regard, you can review the HALIC link if you want. https://encode.su/threads/4025-HALIC-(High-Availability-Loss...


I have and it’s exactly the same thing. No details, no source, unreproducible benchmarks and just ~75 people downloading the latest release[1]. There might be a good idea in there but just shoving it into a black hole is a shame.

Oh well.

1. https://encode.su/threads/4025-HALIC-(High-Availability-Loss...


Tip for those who are looking at the compression ratios thinking "so what?": look at the run-times. It's a minimum of 3x faster than its contemporaries.


IMHO that's still a "so what?". I see audio compression as having two primary purposes: realtime and archival, and speed is only vaguely relevant for the former.

In realtime applications, any processor within the last decade is far more than powerful enough to encode and decode in realtime. The first set of tracks in the results is 2862s long and even the slowest WAVPACK manages to encode it in >113x realtime and close to 250x realtime for decode.

For archival, high compression is important, but this codec doesn't compress better than WAVPACK either.


I am still at the beginning of the road in terms of compression rate and speed. There may be changes in subsequent versions.


> IMHO that's still a "so what?"

This is addressed in another response in this same thread regarding electricity usage.


FLAC level 5 encode times should be WAY longer than decode times. FLAC level 0 to FLAC level 5 is a huge step up and encoding should be way longer (like factor of 3). Under no circumstances should FLAC level 5 decode be faster than FLAC level 0 decode.

These are basic sanity checks.

Something is wrong in the benchmark.


> Under no circumstances should FLAC level 5 decode be faster than FLAC level 0 decode.

I don't know about FLAC, but from my knowledge of compression, this result seems sensible to me.

Smaller file = less bits to process = faster. FLAC level 5 is expected to give a smaller file than level 0, so it makes sense that decoding it will be faster. Of course, it's possible that some codecs enable more codec features at higher compression levels, which makes decoding slower, so it's not always a given, but higher compression giving faster decode doesn't seem unreasonable.


I was very surprised by this too, but FLAC level 5 (the default mode) gave quick results in all my tests. I've tried this hundreds of times. You can try it with any converter and see for yourself. I think much more improvement has been made on this mode.


It seems the world made a collective yawn about flac. Hard to imagine that would change much with a new format.

I personally keep everything in flac, but Bandcamp is seemingly the only service where that is a given.


Qobuz also offers FLAC downloads for albums, and Linn Records does too.


Found the Apple user.


Why would an Apple user be using flac?


I must've misread the last part of the post. My bad.


A yawn? Is there any alternative that even has 10% of the mindshare of FLAC?


Just that providers of FLAC are few and far between. The only reliable FLAC files are the ones you make yourself.


According to the author, right? Those results aren't backed up by Porcus's comments in that thread.


Maybe not 3-5x, but Porcus confirms that it is still very fast:

> Though it is fast indeed! The decoding speeds are outright impressive given how FLAC is the fastest thing we ever saw ...


Yes, but those "very fast" speeds are only marginally faster than FLAC.


> It's a minimum of 3x faster than its contemporaries.

Ok, but what would make that useful?


> Ok, but what would make that useful?

Lower run-time means less electricity and less tying up of the CPU, making it available for other things. As a real-life example: i frequently use my Raspberry Pi 4 to convert videos from one format to another. This past week i got a Pi 5 and moved the conversion to that machine: it takes maybe 1/4th as much time. The principle with a faster converter, as opposed to faster hardware, is the same: the computer isn't tied up for as long, and not draining as much power.


Yes, but there's a threshold for effective improvements. If the more compatible and more efficient format only uses 16 seconds to encode 1 hour of audio, it's hard to imagine this making a big difference in any real use case, offline or real-time.


> ... it's hard to imagine this making a big difference in any real use case, offline or real-time.

Google once, back in 2013, made an API change to their v8 engine because it saved a small handful of CPU instructions on each call into client-defined extension functions[^1]. That change broke literally every single v8 client in the world, including thousands of lines of my own code, and i'm told that the Chrome team needed /months/ to adapt to that change.

Why would they cause such disruption for a handful of CPU instructions?

Because at "Google Scale" those few instructions add up to a tremendous amount of electricity. Saving even 1 second per request or offline job, when your service handles thousands or millions of requests/jobs per day, adds up to a considerable amount of CPU time, i.e. to a considerable amount of electricity, i.e. to considerable electricity cost savings.

[1]: https://groups.google.com/g/v8-users/c/MUq5WrC2kcE


Yes, but this must be weighed against increased storage costs, not to mention the computational cost of transcoding (and others to do with the proliferation of formats). Within the parameters of this application and taking into account the relative costs of compute and storage (in money or energy), it is not clear to me that there would be any advantage to switching.


The achievable ratio in lossless audio compression is really limited. In most cases it is difficult to get below 50 percent.

Therefore, it is not a logical choice to increase processing time in order to gain a few percent more compression between audio codecs. In the end, high processing times mean high energy use.


> Therefore, it is not a logical choice to increase processing time in order to gain a few percent more compression between audio codecs.

Why not? And for what applications? Example: for a media streaming service, where each file is transferred many times, the bandwidth costs dominate, so it is worthwhile to spend a great deal of time on encoding to maximize efficiency. In the case of an archive, where a large amount of information is stored and accessed infrequently, storage space once again becomes the constraint. In general, 1 marginal second of CPU time is usually cheaper than 10 MiB of marginal storage (or whatever the figure works out to be). Finally, why not just write a fast FLAC encoder?


The only reason FLAC is the most popular audio codec is that both encoding and decoding are faster than the others. The same applies to many formats such as JPEG, MP4, MP3, RAR, Zstandard. What is fast is always advantageous and is a reason for preference, even if it is not free. As you mentioned, of course, this does not apply to every situation.

FLAC already exists, and there are a lot of people working on it. I always want to try independent and different things.


> ... it is not clear to me that there would be any advantage to switching.

Indeed, getting an accurate answer would require looking at the whole constellation for a given use case.


Nobody is going to use this at Google scale without source code! However knowing that something exists elsewhere can push somebody to re-invent it locally.


Saving a small fraction of a second millions of times over, or a handful of cycles a trillion times over, is so much more impactful than saving a dozen seconds per hour-long recording.

Also your link doesn't explain what they changed?


> Saving a small fraction of a second millions of times over, or a handful of cycles a trillion times over, is so much more impactful than saving a dozen seconds per hour-long recording.

At a large-enough scale, all savings are significant.

> Also your link doesn't explain what they changed?

They changed a function signature to use an output argument instead of a return value. i don't recall the exact signature, but it was conceptually like:

    v8::Value foo(...);
to

    void foo(..., v8::Value &result);

Why? Because their measurements showed a microscopic per-call savings for the latter construct.

PS: i wasn't aware that source code for this codec is not available. That of course puts a damper on it.


> At a large-enough scale, all savings are significant.

Yes, but not every tradeoff between compression speed and compression ratio is something that makes sense to scale in the first place.


Lower latency for real time streaming over the Internet, for one


Whilst faster decoding is always useful, most audio decoding can easily happen within the typical output buffer size (e.g. 512 samples at 44.1 kHz ≈ 12 ms). As long as your machine can decode within that timeframe, there is no difference in latency.


Novel, but until it's open source, it will never be taken seriously.


There are things I should add to HALIC and HALAC. When I complete them and see that they will really be used by someone, they will of course be open source.


This is a bad take, man. "When it's done" will never come. Having read through your comments on this, you seem to really be going for perfection in your hobby and won't release until that's reached, but the issue is that rarely if ever happens. Instead of getting insights on your code, and the community and you actually improving the landscape, it's basically just a fancy exe that nobody will actually use, because there's nowhere for it to go except in your head.


Open source it anyway. Things don't need to be perfect to be useful. The spirit of OSS has a lot of iteration, anyway. :)


What most people do is trademark the name; that way, even if someone forks it, they have to use a different name.

Something else you can do is use (A)GPL3. This means you automatically grant patent licenses, and anyone building on your work also has to release their source. You can then separately sell proprietary licenses without any of these restrictions.


Thank you very much for valuable information.


[dead]


Thank you so much.


Pleasure :). Please mention it in the original post.


[flagged]


There are things I should add to HALIC and HALAC. When I complete them and see that they will really be used by someone, they will of course be open source.


Until you release the source, it will never be used by anyone, so I guess you are stuck in a loop and it will never be opened.


[flagged]


My work is fully cross-platform; there is nothing platform-specific about it. I also prepared Linux and ARM versions of HALIC, but I didn't compile new versions because they didn't get much attention. When my Linux test machine is ready again (it crashed), I will compile the Linux version of HALAC.


> When my Linux test machine is ready again (it crashed)

In 2024, you don't need a separate machine to run different OSes

qemu is your friend



