The current AI is producing more garbage to maintain, which means those who can do trash-compactor work will be in even MORE demand, and companies tend to over-hire in search of garbage taker-outers. So AI is creating MORE work, not less.
Because if TECH GIANT gets this sort of thing wrong (CCPA, GDPR, ePR, antitrust, <add acronym here>), there is a <fine_bigger_than_your_series_C> waiting for them.
Putting keys in repos should not be done, full stop. Even if GitHub forks weren’t public, even _private_ repos could one day be compromised. Instead, store keys in a shared vault, .gitignore the .env file, and commit a .env.example with empty keys.
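A minimal sketch of that pattern in Python, using only the standard library. The variable name `STRIPE_API_KEY` and the file names in the error message are illustrative assumptions, not anything from the comment; the real .env would be filled in locally from the shared vault and never committed:

```python
import os

def get_api_key(name: str = "STRIPE_API_KEY") -> str:
    """Read a secret from the environment (populated from a non-committed .env).

    The variable name here is just a placeholder for illustration.
    """
    key = os.environ.get(name)
    if not key:
        # Fail loudly instead of limping along with a missing credential.
        raise RuntimeError(
            f"{name} is not set; copy .env.example to .env and fill in real values"
        )
    return key
```

The committed .env.example then documents which variables are expected without exposing any values, and .gitignore keeps the populated .env out of history.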
Don't blame the end user for doing something you don't want them to do when it's more convenient and works without immediate consequences. Redesign it or rethink your assumptions.
He claims further down that some make 50k per day. I've met a lot of cashed-out founders. I've even met someone who could be called a billionaire. None of them were pure software engineers who made "50k per day" at any point in their career. If you amortize what becomes a 50m grant over 4 years, it's about 34k per day, but how many software engineers have done that?
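The amortization in that last sentence checks out as a quick back-of-envelope calculation, assuming calendar days (all figures are from the comment itself):

```python
# Back-of-envelope check: a 50M grant spread over 4 calendar years.
grant = 50_000_000
days = 4 * 365          # ignoring leap days
per_day = grant / days
print(f"{per_day:,.0f} per day")  # roughly 34k per day
```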
Your onboarding is very long; no one has time for that. Take a page from Blind and show something juicy before putting me through a long-ass questionnaire.
I would like to ask the legally inclined here an obvious question. How is this any different from remixing a song (lyrics/audio)? It's not "identical", and doesn't output "verbatim" lyrics or audio. What is the distinction between <LLM> and <singer/remixer who outputs remixed lyrics/audio>? From a quick Google search, it seems remixes violate copyright.
I'm not legally inclined, but... code and music are different? There must be different standards for when code is too similar, for when music is too similar, for when pictures are too similar, for when books are too similar.
Also, remixes almost always do contain verbatim lyrics and/or samples from the original song. LLM output isn't supposed to contain verbatim copies, but I've been told that sometimes it does. (I don't know much about LLMs and I don't think Copilot is useful. I want my 2010-era Intellisense back, when it was extremely fast and predictable.)
Not a lawyer, but that would be a fair use question, which I hear is notoriously complicated.
Colloquially, I generally expect a remix to consist of the original instrumentals/beat (potentially edited, but with virtually nothing actually new added), potentially new lyrics, and to still be recognizable as the original.
The "still be recognizable as the original" part is a huge problem for fair use, and why I don't think remixes generally qualify. If it doesn't sound like the original then it's not a remix, but if it does sound like the original it can't be fair use.
I think the underlying issue is the resulting work, not the process that went into creating it. I think (but am in no way sure) that copying parts of songs would be fine if you did something to them so they aren't recognizable as the original.
As an example, if I take a song by the Beatles and repeatedly compress it until it's entirely compression artifacts, I would bet that I could publish that. I don't think it would matter that I started with a copyrighted work, what matters is that my finished product bears no resemblance to any other copyrighted work.
That would mean it's just a normal "is this work too similar to existing works?" standard applied to humans as well.
There is still an ancillary question of whether it's okay to train on copyrighted music, but that's really a different question than whether the works it creates infringe.