Audacity 3.4 (audacityteam.org)
231 points by jongalloway2 11 months ago | 119 comments



> You can now change the duration of your audio clips without affecting their pitch non-destructively. Audacity 3.4 uses a new time stretching algorithm specifically made for music, which outperforms many of the commercially available options.

Way to bury the lede! What's this magic algorithm being spoken of, how does it work so well?

> A more detailed overview of these changes can be found in our <a>changelog</a>.

The changelog is even more terse, saying only which key combo activates it :(

A way bigger deal to me is mentioned at the bottom of the "other changes" in said changelog: Opus support!

Edit: wait, under "libraries" it says

> lib-time-and-pitch implements a time stretching algorithm originating in Staffpad. It currently is one of the highest-quality time stretching algorithms for music on the market.

Staffpad links to https://staffpad.net and seems to be some music app. The library name links to a Microsoft repository (https://github.com/audacity/audacity/tree/e4bc052201eb0e6e22...) with no readme. The source code is documentation enough :)


I was curious myself, so I downloaded Audacity and tried.

It's a granular time-stretch algorithm. By the sound of the artifacts at extreme settings it's akin to the 'Complex Pro' algorithm in Ableton Live (if you have any reference to that) and seems to be equally effective at a wide variety of sources (percussion, vocals, full records).

Is it better than most commercial offerings? Hard to say after a brief test, but it's not bad!

I suspect it's plenty good for the needs of Audacity's audience, who are unlikely to be demanding audio pros. As an audio professional I would never use Audacity, but if you need a quick, free, and (relatively) simple option to time-stretch a file, then it should fit the bill.


What's the regular go-to tool for audio pros? Adobe Audition or something else? I'm from the age when Sound Forge was considered leading (and Audition, then named CoolEdit, was its cheapo shareware competitor) so I'm probably way behind the times.


I'll clarify first that I work in record production. I don't work in audio post-production (film, video, TV, etc). While some of the tools do overlap, the industry standards for each discipline will differ, so I can't speak to the video side of things.

For my work the standard DAWs are Pro Tools, Logic Pro, Cubase, and Ableton Live. There are lots of options each with their own strengths, but any of these will do a great job. I think it comes down to the person's workflow preferences. Certain disciplines will favor different DAWs.

Other than that, iZotope (with programs like RX and Ozone) is doing cutting-edge work for audio restoration. There are certain companies that are competitive with or exceed them in certain situations, but iZotope is the company you're most likely to see working professionals use these days for general utility.

It can be more nuanced than that, but I don't know if you're asking for a deeper explanation.


It's still apps like Sound Forge, WaveLab, etc. I use Audacity mostly for the loopback feature to record audio from various "sources".

Though with this new update, I thought I could stop using Sound Forge, but I quickly realised it doesn't have a real-time pitch control on playback (think the pitch control on a turntable), so I will be sticking with Sound Forge until then.


Do you really make enough from being an audio professional that price doesn't matter?

Whether you do or not, there are a hell of a lot of hawkers of software and "hardware" alike, who stand ready to sell their overpriced "professional" tools to people who fail to make it in the music industry (which is going to be most of them, regardless of tools or even skills). A high sticker price is a way to prove your commitment - to oneself, at least. The end market is less likely to care.


Good professional audio hardware is not overpriced. Sure, there are $15k microphones but you absolutely do not need those, a $1k one is top quality and not expensive if you use it regularly.

You are confusing it with audiophile gear.

Regarding Audacity, it's missing tons of features and its UI is bad. Just like Gimp vs Photoshop. I did some audio production, and Audacity is just not usable beyond the most basic operations; even those are a bit of a pain.


> a $1k one is top quality

$100 gets you an SM57, which has probably been on more platinum albums than any other mic.


Exactly. After a certain point, you get diminishing returns. $100 gets you probably 85-90% of the way there, which is certainly more than enough. People forget that a good mic and a good DAW do not fix a bad take. The price of the tools is less important than their proper usage, and simply recording things well.

You don't even need an SM57. I know for a fact that the iPhone mic and iPhone GarageBand have been behind at least one artist's success.


The SM57 is a great mic in some situations. A lot of situations, even. But saying an SM57 or an iPhone mic is sufficient because an artist once used one on a hit record is missing the point.

Steven Soderbergh has shot several feature films on an iPhone and the movies were still great, but they're nowhere close to unlocking the full potential of visual expression that you'd get with more refined tools.

Artists who are serious about their craft will keep an SM57 and an iPhone mic in their color palette (Frank Ocean comes to mind), but that's all they are for serious practitioners: a creative choice.


The SM57 is a workhorse of a mic. It's great in a pinch and probably a desert island choice. It's earned its classic status for a reason and is probably on more records and stages in the last 40 years than anything else. Great mic... to a point.

That said, the ceiling of what's possible is far higher than what an SM57 can deliver. Not to diminish it. In some instances it's perfect, but one wouldn't have to look far to find better choices, depending on the context and needs of the record.

A Toyota Corolla (don't @ me, I'm not a car person) might be a low-cost, reliable choice in a pinch, but it's far from embodying what automobiles are capable of.


As a dynamic microphone, SM57 is only usable in the studio for certain kinds of vocals/instruments - the loud ones.

You wouldn't want to record an acoustic guitar ballad on it.


Ableton Live Suite costs 750 USD? Not much for a lifetime license of something that you would use on every project. Plus there's the ability to collaborate with other professionals, which depends on the Ableton project file format.

Edit: Mind you, it contains software instruments, sound packs, and MIDI effects, allowing you to add synthesized music to recorded music. Audacity only manipulates existing audio; you will need to bring in other software if you want to add synthesized audio, which in Audacity will just be treated as extra audio tracks rather than MIDI. That obviously makes it more difficult to produce music. It would be like having to rasterise every layer instead of using vector graphics to build an illustration.


Yes. I'm seasoned enough and do the work often enough to be discerning about my choice of tools. An amateur cook might not worry about their choice of chef's knife, but someone doing the work every day is going to make the investment in tools that enable them to express themselves at their full potential, and with minimal resistance. These tools may sometimes appear overpriced to outsiders, but for a regular user it makes a big difference and is worth the investment.

I've also been around the block long enough to know what tools are worth investing in and which to avoid, so I'm not worried about vendors with bad value add.

The "Mixdown Industrial Complex" that provokes gear lust in audio professionals is not something I worry about at this point. I'm mostly satisfied with the tools I use, and the shiny new thing doesn't interest me like it once did. The focus now is on the work.


> better than most commercial offerings

The audacity!


<3


How is granular time stretch different from OLA or WSOLA?


The time stretch algorithm is implemented in https://github.com/audacity/audacity/blob/master/libraries/l... — see in particular the functions _time_stretch and _process_hop. It looks to me like a classic phase vocoder with vertical phase coherence (cf. https://en.wikipedia.org/wiki/Phase_vocoder).

The basic idea is this. For a time-stretch factor of, say, 2x, the frequency spectrum of the stretched output at 2 sec should be the same as the frequency spectrum of the unstretched input at 1 sec. The naive algorithm therefore takes a short section of signal at 1s, translates it to 2s and adds it to the result. Unfortunately, this method generates all sorts of unwanted artifacts.

Imagine a pure sine wave. Now take 2 short sections of the wave from 2 random times, overlap them, and add them together. What happens? Well, it depends on the phase of each section. If the sections are out of phase, they cancel on the overlap; if in phase, they constructively interfere.
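The cancellation described above is easy to demonstrate numerically. A minimal numpy sketch (the frequency, sample rate, and section length are arbitrary choices):

```python
import numpy as np

sr = 8000                      # sample rate in Hz (arbitrary)
t = np.arange(512) / sr        # a short section, ~64 ms
section = np.sin(2 * np.pi * 440.0 * t)

# Overlap-add two copies that are perfectly in phase:
in_phase = section + section               # constructive: amplitude ~doubles

# Overlap-add with a copy shifted by half a period (180 degrees):
shifted = np.sin(2 * np.pi * 440.0 * t + np.pi)
out_of_phase = section + shifted           # destructive: cancels to ~zero

print(np.max(np.abs(in_phase)))      # close to 2.0
print(np.max(np.abs(out_of_phase)))  # close to 0.0
```

Real audio is a sum of many such sine components, each with its own phase, which is why naive overlap-add produces a comb-filtered, warbly result.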

The phase vocoder is all about overlapping and adding sections together so that the phases of all the different sine waves in the sections line up. Thus, in any phase vocoder algorithm, you will see code that searches for peaks in the spectrum (see _time_stretch code). Each peak is an assumed sine wave, and corresponding peaks in adjacent frames should have their phases match.
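For the curious, the core scheme fits in a few dozen lines of numpy. This is not Audacity's implementation, just a textbook sketch with made-up parameter names: it propagates each bin's phase independently ("horizontal" coherence only) and omits the peak-finding/phase-locking step described above, so it will sound smearier on real material.

```python
import numpy as np

def phase_vocoder(x, stretch, n_fft=1024, hop=256):
    """Stretch signal x in time by `stretch` (>1 = longer) without
    changing pitch. Each bin's phase is advanced by its estimated true
    frequency so that overlapped synthesis frames add coherently."""
    win = np.hanning(n_fft)
    syn_hop = int(round(hop * stretch))
    bins = np.arange(n_fft // 2 + 1)
    # Expected phase advance of each bin's center frequency over one hop.
    expected = 2 * np.pi * bins * hop / n_fft

    frames = range(0, len(x) - n_fft, hop)
    out = np.zeros(syn_hop * len(frames) + n_fft)
    prev_phase = np.zeros(len(bins))
    syn_phase = np.zeros(len(bins))

    for i, pos in enumerate(frames):
        spec = np.fft.rfft(win * x[pos:pos + n_fft])
        phase = np.angle(spec)
        # Deviation from the expected advance, wrapped to [-pi, pi]:
        delta = phase - prev_phase - expected
        delta -= 2 * np.pi * np.round(delta / (2 * np.pi))
        prev_phase = phase
        # Advance synthesis phase by the (rescaled) true advance.
        syn_phase += (expected + delta) * stretch
        frame = np.fft.irfft(np.abs(spec) * np.exp(1j * syn_phase))
        out[i * syn_hop:i * syn_hop + n_fft] += win * frame
    return out
```

Stretching a pure sine by 2x roughly doubles its length while keeping its pitch; the per-peak phase locking described above is what keeps this from smearing on polyphonic music.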


The old "Change Tempo" effect still works much better for voice.


Is there a comparison of different algorithms and how they work anywhere? I only know of two, and I'd really like to understand them better.

The two I know of are Paulstretch (which is really only for super-duper-long stretching) and Lent 1989 ("An Efficient Method for Pitch Shifting Digital Sampled Sounds"), which can be thought of as a form of granular synthesis but isn't really what most people think of when they hear that term.


This is the PR I believe

Time stretching 2 of 6 make audio track stretching effective

https://github.com/audacity/audacity/pull/5041


> The library name links to a Microsoft repository.

You linked to the official Audacity repository and chose to (incorrectly) describe it as "a Microsoft repository".

Why be dishonest?


If I say GitHub repository, does the sentence then make sense to you? I tend to forget that GH is Microsoft's, similar to how people around here forget WhatsApp is Facebook or random brands are Nestlé, so I try to get in the habit of calling things by the parent brand to decrease the obfuscation.

Maybe I should simply have said git, though. Didn't think of that until now at the end of writing this reply. No dishonesty intended, quite the opposite in fact


"A Microsoft repository" implies that the Audacity project is handled by Microsoft, which is false.

If you must state ownership, you could have done so with something like "a Microsoft-hosted repository".

Most of us have been around since before Microsoft bought GitHub.


It would be nice to at least know in what ways it "outperforms many of the commercially available options". Is this asking too much?


Tangential to this, I've noticed significant differences in audio quality when listening to audiobooks at 2x speed in VLC vs MPV. In particular, MPV doesn't sound as clipped.

A bit of an open ended question, but is there anything more I could do to process the audiobook to make it sound even better at 2x?


> A bit of an open ended question, but is there anything more I could do to process the audiobook to make it sound even better at 2x?

Sounds like you could download Audacity 3.4 and make a 2x version of your audiobook files using their new time stretching algorithm.


2x! What kind of books are you reading? I have gone as far as 1.4-1.5x, but that feels about the best I can do without having to be laser focused on just listening (i.e. not doing chores or other minor activities, as is my habit).


Staffpad and Audacity are both software under the ownership (?) of Muse Group.

Staffpad is a recent acquisition, which is why they are now able to share technology like this: https://mu.se/


I just feel like, for just stretching stuff and staying in pitch, any off-the-shelf FFT-based method has near perfect results within a meaningful range anyway. It has felt like a solved problem for a while? What does the "high-quality" mean here?


Most stretching algorithms I've heard have noticeable "ghosting" of transients, and tend to muddle synthesizer tones (due to discarding phase information). I am very curious to test this one.


Is this different from PaulStretch with infinite duration?


Yes. The Audacity algorithm is a granular time-stretch, whereas PaulStretch is FFT-based.

Depending on your needs, you'd want to favor one over the other. Granular stretches are far less CPU-intensive and have significantly lower latency than FFT-based methods. The granular algorithm will likely have better transient fidelity at small time-stretch intervals (between 0.5x and 2x speed), whereas FFT-based methods tend to smear the transient information.

Where FFT-based methods really excel is in maintaining or transforming formants and harmonic information, especially at extreme pitch or time settings. Granular algorithms can only stretch so far before you hear the artifacts; FFTs are far more graceful for dramatic transforms.
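To make the distinction concrete, here is a toy overlap-add granular stretch in numpy. This is a sketch, not Audacity's code; real granular engines (WSOLA and its commercial descendants) additionally align each grain by cross-correlation to hide phase discontinuities, which this omits.

```python
import numpy as np

def granular_stretch(x, stretch, grain=2048, out_hop=512):
    """Toy granular time-stretch: read windowed grains from the input at
    a slower (or faster) rate than they are written to the output, then
    overlap-add them. No grain alignment, so expect phasey artifacts."""
    win = np.hanning(grain)
    in_hop = out_hop / stretch          # input advances slower when stretching
    n_grains = int((len(x) - grain) / in_hop)
    out = np.zeros(n_grains * out_hop + grain)
    norm = np.zeros_like(out)           # window-sum, for amplitude correction
    for i in range(n_grains):
        start = int(i * in_hop)
        out[i * out_hop:i * out_hop + grain] += win * x[start:start + grain]
        norm[i * out_hop:i * out_hop + grain] += win
    return out / np.maximum(norm, 1e-9)
```

Note there is no FFT anywhere in the loop, which is why granular stretching is so much cheaper and lower-latency than spectral methods.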


> a new time stretching algorithm specifically made for music, which outperforms many of the commercially available options.

I wonder how it compares to Ableton Live; warping has always been a big part of Ableton.


Speaking of which, how did that "audacity isn't FOSS" fuss end up? A quick search shows tenacity is still active ( https://codeberg.org/tenacityteam/tenacity ) but I have no idea how much the projects kept interacting after the initial debate


Tenacity is distinct in more fundamental ways than just not being copyright infringement (for one, it uses a different GUI toolkit and is way slower and buggier), so its appeal at this point isn't closely related to whether people like Muse or not.


It would be nice to have a fork which followed Audacity closely but without the telemetry. (Like VSCodium, I suppose.)


This fork exists and it's called Audacity. Audacity doesn't have any telemetry, the PR never got merged. And any online functionality it does have (automatic update checking, crash reporting) can be turned off.


Perhaps remove the "Share Audio" ~~ad~~ button in the toolbar?


Feel free to! View > Toolbars > Share Audio toolbar disables it. Compiling from source has it disabled by default.


Auditorium :)


It's complete BS. Audacity never stopped being open source and currently has update checking and crash reporting - it does not have any telemetry.


And telemetry isn't even a bad thing. Firefox has had more extensive telemetry than Audacity for years. This idea that it's evil and nefarious to collect voluntary statistics on program usage to better target (limited) development resources is quite eye-rolling


How about calling it evil and nefarious to speak about voluntary statistics when it is not opt-in, but opt-out? There are users out there who will agree to anything. But I doubt those are the ones that really should direct the evolution of a piece of software. And almost no users will agree to telemetry when openly asked what we are talking about. And I have yet to be convinced that telemetry is actually as useful as proclaimed. I don't really see Firefox moving in a direction that enhances user experience based on gathered data. Windows, anyone?


In the context of Audacity, telemetry already would have been useful: A previous version removed the cut/copy/paste buttons, thinking that people normally used ctrl+x/c/v. However, in practice this turned out to only be half-true: while cutting in the context of cut-and-paste may have been used with shortcuts, a fairly significant number of Audacity users actually used "the scissors" to cut (as in: delete) content and came to the forum to complain because their core workflow was broken.

This sort of situation is bad for everyone: People get their workflow broken, devs need to do work to remove and later reinstate a feature, and privacy-minded people who want to complain need to share name and email to sign up on a forum for an account. In addition, it is hard to gauge what significance the forum posts have: If 100 people are complaining, are they a vocal minority of the millions of users Audacity has, or are they representative of most people?

This is especially true considering Audacity is the sort of casual "useful toolbox USB stick" program for many people - they're not going to closely follow development and updates or participate in polls or surveys, simply because it's not a part of their life they care about that much. This situation is different for something like Blender in which the tool tends to be a major part of your hobby or job if you use it at all. Although, saying that: This is a hypothesis based on my perception which cannot be verified as neither Blender nor Audacity track this data.

With telemetry (which for Audacity would have been a "do you want telemetry yes/no"-type dialog on first launch) the question "does anyone actually use the cut/copy/paste buttons?" would have been answered with "actually, yes", things would have been done differently, nobody's workflows would have been broken, and privacy-minded folks would not have needed to put emails on a forum which may or may not get hacked in the future.

In some sense, even people who disable telemetry benefit from telemetry being an option - assuming that their needs are in aggregate otherwise similar to the average user.


They did introduce a mandatory CLA which allows for using the code in non-GPL ways, even noting that this was the purpose of introducing the CLA.

https://github.com/audacity/audacity/discussions/932


Indeed:

> Audacity's source code is currently released under the GNU General Public License version 2 (GPLv2). We intend to update the license to GPLv3 to enable support for new technologies not compatible with GPLv2 (i.e. - VST3, which is compatible with GPLv3).

VST3 is dual licensed with some Steinberg license and GPLv3. The purpose of the CLA was to be able to migrate Audacity binaries to GPLv3 with VST3 support. This has happened as of Audacity 3.2.

Other uses for the CLA are to publish the thing in app stores down the road. It's not stopping Audacity being open source, unless you consider Apache software not open source.


Migrating to GPLv3 and publishing in app stores was clearly not the only purpose. The linked page says as much.

> The CLA also allows us to use the code in other products that may not be open source, which we intend to do at some point to support the continued development of Audacity.

I am well aware that you're allowed to do this with permissively licensed code, too.


I recently needed to do some sound editing, and I had a dreadful experience with Tenacity. Running on PopOS, I encountered many crashes doing simple manipulations. Even trying to scroll while playing audio ostensibly resulted in a crash. Small favor, the restore-unsaved-work functionality did save me several times.

Eventually, I held my nose and ran Audacity in a VM, and not a single crash.


Running in a VM is drastic. I'd go for bubblewrap, or a plain container. Can cut network access all right.


Indeed, there are other options, but a VM is the only one in which I feel safe that I do not screw up the configuration somehow. Docker can punch through a firewall, what other “obvious” settings exist in whatever lockdown option I pick?

Barring a VM escape exploit, I know that my private data is not getting exposed.


Ardour's pro-grade, isn't it? Maybe not as easy to start with as Audacity, but surely easier and more useful to set up than Audacity in a VM.


If you don't trust Audacity, why would you trust Ardour?


Why would you run Audacity in a VM?


FWIW it was never an issue for Linux distribution packages of Audacity. It was behind a compile-time flag so distro build scripts could have been changed to disable the telemetry. In fact the flag was disabled by default, so distro build scripts didn't even need to change and would've continued producing telemetry-free binaries.

Only binaries from Audacity themselves had the telemetry, which (as usual) is why you should never use upstream binaries.


Seems like it's because they tried to add basic telemetry?


Yes, they updated their privacy and data collection policy.

That’s kinda the blessing and the curse of FOSS. You absolutely can fork the repo, remove the telemetry, and republish it as a new app.

But fragmentation is confusing, requires a lot of maintenance, and really I’m not sure it was worth it. Those who are particularly conscious about the telemetry can block it with a single line in /etc/hosts.


Looks like there's also a build flag to disable all networking, which the Debian package sets: https://salsa.debian.org/multimedia-team/audacity/-/commit/1...


As does Arch Linux: https://gitlab.archlinux.org/archlinux/packaging/packages/au...

Distributions and open source maintainers looking out for their users, once again.


Sure, but it's not even enabled by default in the upstream repository. Maybe that's a result of all the fuss about it, but nonetheless..

https://github.com/audacity/audacity/blob/6c2e8a2377542d6722...


The primary network activity Audacity does is checking for updates, which you don't want in a distro-packaged binary in any case. I don't know if it's "looking out for users".


Seriously, people who get outraged over telemetry should temper their anger. Most of the time the telemetry isn't used to sell your data or anything nefarious (it's quite useless if you don't even have a login, as is the case with Audacity); it's just being used to try to improve the product for you.

I write this as someone who's been involved in one too many debates about the perils of introducing telemetry to a commercial open source thing because "HN would tear us apart".


The problem is you never know what they will share. Today, they just want to track which buttons get clicked. Tomorrow, maybe some eager PM wants to upload all of my environment variables.

If it can fully run locally on my machine, I do not want it sending anything external.

Lastly, as an abused Firefox user, it seems that telemetry is only ever used to justify removing features I like.


> it seems that telemetry is only ever used to justify removing features I like

If the removed features are only features you like, then they probably aren't doing things right... The one most relevant purpose for telemetry I see for Audacity is precisely preventing this from happening, while fostering more vigorous growth of the repo by cutting off dead branches. Audacity is over 20 years of accumulated features, some of which we every now and again wonder whether anyone still uses. Not knowing, we try our best to maintain them, which slows down development, QA, and design in delivering features that are relevant now.


The problem with this argument is that there's no reason to believe a slippery slope exists. It's just as easy to go from "no tracking" to "digital colonoscopy" as it is when your starting point is "anonymized crash reporting". Any new release of any software could start spying on you.


The developers who acquired Audacity had previously threatened to have someone deported to China and tortured over their API. https://www.theregister.com/2021/07/20/muse_group_deportatio...


I did some basic digging and this is at best a misrepresentation[1].

The original email appears to indicate they intended to contact CCP authorities. The inference I took is that they believed the developer was in China.

Later they stated that violation of the law in Canada could result in revocation of his visa.

So "threatened to have someone deported" is maybe a stretch, and "tortured" is pretty untrue.

The github issue appears to show a pretty reasonable attempt by both parties to move forward.

1. https://github.com/Xmader/musescore-downloader/issues/5#issu...


Muse Group's head of strategy posted the following and removed it later after backlash:

"If found in violation of laws, residency may be revoked and he may be deported to his home country. This becomes even further complicated given another repo of his – 'Fuck 学习强国', which is highly critical of the Chinese government. Were he deported to China, who knows how he may be received."

Hard to take that as anything but a threat. My point stands that Muse Group has proven that they should not be trusted with any information about users.


It’s a consent violation. That always warrants anger. The purpose for violating consent is irrelevant.

Using a user’s computer to spy on them when they don’t want it to is extremely rude, in all cases, even if the surveillance data is thrown away and never used.

Developers who implement such features should be named and highlighted and should have trouble finding new jobs. It’s shady and unethical to make such software, doing so should be a black mark on one’s professional record, just like stealing. It is literally malware.

Your assumption that violating consent is ok as long as it isn’t “nefarious” is the problem.


The reason for the "non-FOSS" accusations was not related to the introduction of telemetry, but the new CLA. But they did this around the same time they tried to add (but backed out) telemetry, so people tend to confuse the two events. Which I guess is helpful for Muse.

https://github.com/audacity/audacity/discussions/932


And marking it PG-13 :-D


Feedback on release:

- Overall great.

- The tempo stretching example in the video was too subtle for me. I listened a few times and had trouble telling the difference.

- The documentation at https://manual.audacityteam.org/index.html is still for 3.3 which is a bit frustrating when trying out new features. Also the link labeled Manual that is displayed in the splash screen 404s for me.

- It took a bit too long to scan my computer for plugins and at the end I was told some plugins were deemed incompatible but not why.

Suggestions on next steps:

- I want to download songs and map the tempo to the song. That way I can easily loop over few bars when practicing an instrument.

- Today I use Ableton for this which can automatically detect the tempo of a clip, and align bar and beat markers to the song, without stretching the audio. It also does a decent job of following tempo variations within a clip. This all started working well in version 11.3.2.

I tried to use Audacity for this and these were my impressions:

- Opus support makes it easy to work with material from Youtube.

- Adding clips to tracks obscures beat and bar markings making them difficult to align with transients.

- Having to generate a metronome track is a bit clunky.

- Stem separation would be a nice addition so that I could easily mute the instrument I'm playing.


> - The tempo stretching example in the video was too subtle for me. I listened a few times and had trouble telling the difference.

That's the point of it: being able to make 124 bpm samples cooperate with 110 bpm samples without anyone ever noticing that it had happened.

> - The documentation at https://manual.audacityteam.org/index.html is still for 3.3 which is a bit frustrating when trying out new features. Also the link labeled Manual that is displayed in the splash screen 404s for me.

The manual update always takes forever to complete, but support.audacityteam.org is up to date.

> - It took a bit too long to scan my computer for plugins and at the end I was told some plugins were deemed incompatible but not why.

It tries to load the relevant VSTs in a child process and if the VST crashes the child process it gets flagged as incompatible. Audio plugins are awful and nobody ever follows the spec.
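The out-of-process probing pattern described here is easy to sketch. This is hypothetical Python, not Audacity's actual C++ scanner; the `python -c` one-liners stand in for a small helper binary that would really try to load the plugin:

```python
import subprocess
import sys

def probe_ok(cmd, timeout_s=10.0):
    """Run a probe command in a throwaway child process. If the probe
    crashes, exits nonzero, or hangs, the plugin it tried to load gets
    flagged incompatible -- without taking the host process down."""
    try:
        # A crash shows up as a negative (signal) return code, not 0.
        return subprocess.run(cmd, timeout=timeout_s).returncode == 0
    except subprocess.TimeoutExpired:
        return False  # a hung plugin is flagged too

# Fake a well-behaved plugin and a segfaulting one:
good = probe_ok([sys.executable, "-c", "pass"])
bad = probe_ok([sys.executable, "-c",
                "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"])
```

The key design point is that the child's death is just a nonzero return code to the parent, so one badly behaved plugin can't crash the whole scan.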

All of the other things you mentioned are in various stages of being planned.


Have there been any improvements in noise reduction?

That's what's always kept me from using Audacity in the past. I like the interface and operations and everything, but cleaning up audio (removing room tone mostly) has always been the first step in my workflow, and its built-in noise reduction has just been unusably terrible compared to basic commercial tools.

Or is there a common plugin people use with it that I've never known about?


Typically a noise reduction plugin installed with the DAW by default isn't going to be amazing. People usually install their own preferred plugins when using a DAW. If you look up noise reduction VSTs you will get lots of results for paid and free plugins compatible with Audacity. I don't have a recommendation, but googling "reddit cheap noise reduction vst" came up with a lot of viable recommendations.


If you're on Windows you could possibly do something to get this into Audacity:

http://reaper.fm/reaplugs/

https://github.com/nbickford/REAPERDenoiser


This is a big deal! My primary use of Audacity was to create custom edits of tracks I wanted to mix, saving effort later when DJing. Aligning my edits with the beat grid always took a lot of work, so much so that I hacked up a different audio editor, trying to integrate beat detection (https://github.com/marssaxman/gum-audio). Audacity felt like it was frustratingly behind the times in this regard; well, it's five years later now, but I'm glad to see they've made it happen.


Audacity, despite its weaknesses compared to commercial tools, still excels at batch processing due to its Nyquist plugin suite. The macro tool is finicky, but you can still do things that nothing else can in a batch, like trimming leading and trailing silence and then adding an exact amount of silence to the front and end of a file. You would think functions so simple and obvious as this would already exist in Audition, RX, SpectraLayers, etc., but no.


Ffmpeg or sox can do the silence trimming for you on the command line. And it is totally reasonable to just have a script lying around that does that.
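For anyone without sox handy, the trim-leading/trailing-silence-then-pad-an-exact-amount operation described above is also a few lines of numpy over raw samples (a sketch; the threshold and pad length are arbitrary knobs, and real tools use windowed RMS rather than per-sample amplitude):

```python
import numpy as np

def retrim(x, sr, thresh=1e-3, pad_s=0.5):
    """Trim leading/trailing silence from sample array x, then pad an
    exact pad_s seconds of silence onto both the front and the end."""
    loud = np.flatnonzero(np.abs(x) > thresh)
    pad = np.zeros(int(pad_s * sr))
    if len(loud) == 0:                      # entirely silent input
        return np.concatenate([pad, pad])
    core = x[loud[0]:loud[-1] + 1]          # first to last loud sample
    return np.concatenate([pad, core, pad])
```

Wrapped in a loop over files, this gives the same batch behavior as the Nyquist-macro workflow, just from a script.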


Audacity is also commercial, even if it's free and open source. That's because Muse is developing it and has a commercial interest in it, and their goals now (partly) govern the project.

For example, that also makes them vulnerable to "enshittification".


Not really. Enshittification requires high switching costs, so users stick around despite thinking "yeah I should go somewhere else". With Audacity, switching costs are low. You either can compile the thing with the relevant features disabled, or you can just download an older version of the software and stick with that to the end of time if you dislike the changes a new version brings.

Even to the suitiest of corporate suits it's clear that the enshittification funnel (first it's awesome for users, then for partners like publishers and advertisers at the cost of users, then it's awesome for making money at the cost of everyone else) simply doesn't work with an open source program.


That is a good point, but I'm not that optimistic. Maybe it dampens the effect, but with MuseScore 4 I already think some things have gone in that direction.

VS Code is also not immune (I use codium, but only as a secondary editor to Neovim).


I take care of that stuff in Python, it’s pretty straightforward.


Sox?


Total OG old-school Swiss Army knife for audio processing/playing... that takes me back.

In the 1990s, in a long workroom of Sun workstations, we rigged an rlogin + sox script to play successive parts of some spooky music as a co-worker walked past each one late one night.

https://sourceforge.net/projects/sox/

https://github.com/chirlu/sox


This is made with wxWidgets, by the way: a cross-platform C++ GUI framework like Qt, but 100% free.

Inkscape, on the other hand, is made with gtkmm (GTK), which also runs cross-platform.


For some values of "works". Inkscape is close to unusable on MacOS because it goes through the X server and every interaction is really slow.


why is that? never used it on macOS myself, but gtk/gtkmm does claim to support macOS and I would not expect an X server to be involved.


The contents are rendered through gtk/cairo which not only goes through https://www.xquartz.org/ but also doesn't use GPU rendering (it was experimental 3 years ago, maybe better now). The main issue seems to be that neither Inkscape nor gtk people have low level Darwin experts or time available to invest in debugging the whole rendering stack. See for example https://gitlab.com/inkscape/inkscape/-/issues/1614 and all the other referenced issues for all the gory details.


This is very insightful, thank you. It might make me switch back to wxWidgets, in fact. Desktop GUI is new to me and I have been trying electronjs (too fat), wxWidgets, gtk, Flutter, maybe even Kotlin; looks like I will stick with wxWidgets.


Make sure it applies to your situation specifically though. I'm not sure if that's a generic gtk issue or gtk-as-inkscape-uses-it issue.

Also, if you want another one for your collection, I've been very happy with Avalonia for desktop UI (haven't done custom drawing in it though)


know nothing about c# :(


Audacity will switch to Qt/QML for version 4 because wxWidgets just keeps getting in the way.


surprised, any specifics? and why not gtk, since wxWidgets uses gtk underneath on Linux, and gtk works for Windows and Mac too; porting seems easier


"Qt has better support on MacOS(but GTK4 is doing well with MacOS nowadays), and the team uses Qt for another product so it makes sense to do audacity in Qt as well"


How do wxWidgets and gtk roughly compare? I have liked the UI of Audacity moderately more than Inkscape's, but I don't know how much of that is the framework versus the implementation.


wxWidgets uses whatever GUI functionality is native for each OS instead of implementing its own (though there is also a wxUniversal backend which does its own controls but i'm not sure how complete it is). On Linux wxWidgets can use Gtk, Qt or Motif (though Motif doesn't seem to be tested much and while it does compile and work, there are a bunch of bugs).


I was with wxWidgets but am now switching to gtk, as the latter is far more widely used. wxWidgets looks native on each OS, while gtk has its own appearance; both should do fine.


Audacity's UI theme is really bad on macOS. I love Audacity, but it could really use a modern theme like Gtk 3's.


just installed via brew to MacOS, looks great to me. I wish HN let me post a screenshot.


link to imgur or something similar...


This is about halfway to a competent DAW (whereas before I feel we were at like 15%). You could really demo something pretty well with this I bet!


These are some really nice quality of life updates. I've never used time stretching, but combined with the beats and measure markers, it could be really nice.


There's a particular occasional need for easy to use time stretching of audio against video.

Every now and again a hard-to-otherwise-source episode of something turns up as two poor versions: one with good video but damaged | lower-quality sound, another with good sound | bad video .. and they each have differing frame rates and edit cuts.

To make a better version involves a bit of time stretching on the audio between marks.

I still have an eye out for the best OS tool for merging and aligning video + audio + subtitles tracks .. the smoothly integrated intersection of MKVToolNix + SubtitleEdit + Audacity.
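The arithmetic behind stretching between marks is simple; here is a hypothetical helper (the function name and the PAL example are my own, not from any particular tool):

```python
def stretch_ratio(audio_start, audio_end, video_start, video_end):
    """Given the timestamps of two sync marks in the audio and the
    corresponding marks in the video, return the stretch ratio for
    that audio segment. Ratio > 1 means lengthen the audio."""
    return (video_end - video_start) / (audio_end - audio_start)

# Classic PAL speedup: 24 fps film telecined to 25 fps runs ~4.2%
# fast, so audio taken from the 25 fps copy must be stretched by
# 25/24 to line up with a 24 fps video track.
print(round(stretch_ratio(0.0, 24.0, 0.0, 25.0), 4))  # 1.0417
```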


Video copied from analog sources also often suffers from video and audio out of sync. (Due to a bunch of factors such as dropped frames etc.)


Sounds like you are doing God's work. Keep it up.


Yeah, I don’t use this application much, but I have occasionally created edits, condensing a song by shortening the intro or dropping a verse, and having beat alignment would be convenient.


Does anyone know if there's an AI music app into which I can sing my melodies and have it automatically add AI drums, bass, and other instruments? It'd be even cooler if it could change my singing voice too.

I loved Apple's Music Memos (it still works on some old iPhones): I could strum my guitar and sing my songs into it and, with one click, automagically add AI drums and bass (the tempo could be changed/edited). They discontinued the app a few years ago, unfortunately.


Any new shady telemetry?


As you can see in https://github.com/audacity/audacity/pull/835, the telemetry was never added.


I'm not sure what you think that thread says, but it seems to confirm telemetry was added, and then they locked the thread to end the discussion, which is always a sign of bad faith. Did you mean to link to another thread where they removed all the telemetry they had added?


Note that it says "closed" at the top. Pull requests which are merged say "merged", like this one: https://github.com/audacity/audacity/pull/5484

You can verify for yourself that there is no telemetry code in Audacity.


Oh good. Hopefully you all learned something? Forgive me if I remain sceptical, though. I'll stick with the fork.


We've definitely learned something. And I can forgive you being skeptical, all I ask from you is to also be skeptical of outrageous claims.


3.4 removes the behavior where clicking on a clip separator joins the clips, but I have long joined at separators by highlighting an area containing the join(s), then using this macro, bound to the keyboard shortcut "j":

   Join:
   SelectNone:


Is there a good reason why there isn't a hot key combo for zooming on tracks with a Mac in 2023? Command + is the standard for a lot of media editors in Mac OS.


Audacity predates this convention, it's Cmd+1,2,3,E,F for zoom in/neutral/out/selection/project at the moment. It definitely should be changed though.


Audacity is a great suite of tools. Never thought to try to use it for arranging music but apparently now I can.


When is it switching to Qt? Their current interface is very clunky.



