Solo. It wasn't much of an option early in my career, but now that I went full indie two years ago, I'm not going back. I think the recent advancements in AI coding assistants have really changed the game in terms of what an individual can do. I just wrote about the topic recently.
Observation: If your blog post starts with a clearly AI-generated abstract, why would I be motivated to read the rest of it? It's unfortunately indistinguishable from AI slop.
This is an interesting read, and it's close to my experience that a simpler prompt with fewer or no details but with relevant context works well most of the time. More recently, I've flipped the process upside down by starting with a brief specfile, that is, a markdown file with context, goal, and a usage example, i.e. how the API or CLI should be used in the end. See this post for details:
In terms of optimizing code, I'm not sure there is a silver bullet. I mean, when I optimize Rust code with Windsurf & Claude, it takes multiple benchmark runs and at least a few regressions if you leave Claude on its own. However, if you have a good hunch and write it down as an idea to explore, Claude usually nails it, provided the idea wasn't too crazy. That said, more iterations usually lead to faster and better code, although there is no substitute for guiding the LLM. At least not yet.
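For what it's worth, a specfile of the kind described above might look like this. The structure, section names, and the API shown are my own invention for illustration, not a fixed format:

```markdown
# Spec: per-IP rate limiting

## Context
Rust API server built on Axum; shared state lives in `AppState`.

## Goal
Add a per-IP rate limiter as a middleware layer, configurable by
request count and window duration.

## Usage example
How the finished API should read (hypothetical names):

    let app = Router::new()
        .route("/api", get(handler))
        .layer(RateLimitLayer::new(100, Duration::from_secs(60)));
```

The point is that the usage example pins down the target API surface up front, so the model works backward from the interface instead of improvising one.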
What was the crime here besides illegal possession of a firearm?
- Storing explosives: mostly legal.
- Self-built bombs? They weren't used, so it's hard to make a case.
- Spreading misinformation? AFAIK, that's protected under the First Amendment unless you cause tangible damage, i.e. Infowars-style.
One way I can see a case being made is by portraying the guy as a danger to public safety. In Europe, that would warrant a psychiatric assessment for the court. In the US, where people possess more firearms than there are people, I am not so sure that argument survives in court. And then there is the ancient constitutional right to bear arms...
To be clear, I totally support the FBI locking up an apparent maniac before he snaps and starts using his bombs. I'm just pointing out that the US legal system doesn't seem to be well equipped for those cases.
For example, the guy who ran the FPSRussia YouTube channel from Georgia, US, was sitting on well over a quarter million dollars' worth of firearms and even owned a small tank (!). Apparently that is also legal in the US. Yet authorities only took action when somebody died on his property in what seems to have been a shooting accident.
Let responsible people have a gun or two, provided the guns are legal and the owners have passed a background check and proper safety training, but please keep disarming the apparently crazy ones. Nobody can make a sane case for why a single person needs a dozen pipe bombs in a backpack, or dozens of automatic firearms and a tank.
The short-barreled shotgun (SBS) (modified or original) without an NFA tax stamp is the holding charge. They're working on determining whether it was just unlicensed destructive devices (DDs) or there were specific terroristic plans. That's why.
I'm surprised there aren't more defamation lawsuits on the internet.
I've thought it could be a moneymaker for social media firms to automate defamation shakedowns, just like copyright trolls automate copyright shakedowns: help those defamed find the offending messages and send out "Pay $N thousand and this defamation lawsuit goes away now" letters. Back up some of the lawsuits with $$$ to show the threat is real.
Seriously, confiscate the ship, charge everyone on board with espionage, give maximum jail sentence, and close all maritime corridors going through NATO territory for Russia. Putin always tests for a response, and if there is none, he doubles down.
Russia violated Turkey’s airspace only once; the jet was shot down immediately, and, safe to say, Putin was on the phone with Ankara to prevent an all-out escalation with a NATO member that could trigger Article 5 at any time in self-defense after an apparent aggression. As it turned out, no further airspace violations happened.
As long as the West fails to respond with strength, Putin will never stop.
> close all maritime corridors going through NATO territory for Russia.
Probably too extreme for maritime law. OTOH a tighter inspection regime might fly. There is a precedent in ports of call that enforce their own inspection regimes.
Profile and optionally board boats entering the Skagerrak. Registry? Condition? Incident history? Hazards of declared cargo? Too many suspicious antennas?
"charge everyone on board with espionage, give maximum jail sentence" won't help: most of the crew has no choice in those operations, some might not even know what is going on. They also have dozens of ships that can do such damage, so no way to scare them by seizing or jailing one.
As I recall, Russia was making a habit of cutting across Turkey's airspace and they were officially warned before the shoot-down.
I may be misremembering.
> Russia violated Turkey’s airspace only once; the jet was shot down immediately, and, safe to say, Putin was on the phone with Ankara to prevent an all-out escalation with a NATO member that could trigger Article 5 at any time in self-defense after an apparent aggression. As it turned out, no further airspace violations happened.
I was on the same page as you for a long time, but aggressively defending your airspace also increases the risk of collateral damage, leading to, for example, your military shooting down Azeri passenger jets. Or Malaysian ones. Or Iranian ones (to name one not committed by Russia).
No surprise. About a year ago, I looked at Fly.io because of its low pricing, and I wondered where they were cutting corners to still make money. Ultimately, I found the answer in their tech docs, where it was spelled out clearly that a Fly instance is hardwired to one physical server and thus cannot fail over if that server dies. Not sure if that part is still in the official documentation.
In practice, that means if a server goes down, they have to load the last snapshot of that instance from backup and push it onto a new server, update the network path, and pray to God that no more servers fail than there is spare capacity for. Otherwise, you have to wait for a restore until the datacenter has mounted a few more boxes in the rack.
That explains quite a bit of the randomness of those outage reports, i.e. "my app is down but the other one is fine" and "mine came back in 5 minutes but the other took forever."
As a business on a budget, I think almost anything else, e.g. a small Civo cluster, serves you better.
> a fly instance is hardwired to one physical server and thus cannot fail over
I'm having trouble understanding how else this is supposed to be? I understand that live migration is a thing, but even in those cases, a VM is "hardwired" to some physical server, no?
> I'm having trouble understanding how else this is supposed to be? I understand that live migration is a thing, but even in those cases, a VM is "hardwired" to some physical server, no?
They mean the storage part. If your VM's storage (its state) is on one server and that server dies, you have to restore from backup. If your VM's storage is on remote shared storage mounted to that server and the server dies, your VM can be restarted elsewhere with access to that shared storage.
In AWS land it's the difference between instance store (local to a server) and EBS (remote, attached locally).
There's a tradeoff in that shared storage will be slightly slower due to having to traverse networking, and it's harder to manage properly; but the reliability gain is massive.
> I'm having trouble understanding how else this is supposed to be? I understand that live migration is a thing, but even in those cases, a VM is "hardwired" to some physical server, no?
You can run your workload (in this case a VM) on top of a scheduler, so if one node goes down the workload is just spun up on another available node.
> Ultimately, I found the answer in their tech docs where it was spelled out clearly that an fly instance is hardwired to one physical server and thus cannot fail over in case that server dies.
The majority of EC2 instance types did not have live migration until very recently. Some probably still don't (they don't really spell out how and when it's supposed to work). It is also not free: there's a noticeable brown-out when your VM gets migrated on GCP, for example.
Here's the GCP doc [1]. Other live migration products are similar.
Generally, you have worse performance while in the preparing to move state, an actual pause, then worse performance as the move finishes up. Depending on the networking setup, some inbound packets may be lost or delayed.
The status page tells a story about a high-availability/clustering system failure, so I think in this case the problem is the complexity of the HA machinery hurting the system's availability, versus something like a simple VPS.
How is this different from Qodo?
Why isn’t it mentioned as a competitor?
I’m having a hard time figuring out what Codebuff brings to the table that hasn’t been done before, other than being YC-backed. I think to win in this massively competitive and fast-moving market, you really have to put forward something significantly better than an expensive cobbled-together script replicating OSS solutions…
I know this sounds harsh, but believe me, differentiation makes or breaks you sooner rather than later. Proper differentiation doesn’t have to be hard; it just needs to answer the question of what you offer that I can’t get anywhere else at a similar price point. Right now, your offer is more expensive than something I can basically get elsewhere, better, for 1/5 the price…
I’m seriously worried about whether your venture will still be around one or two years from now without a more convincing value prop.
From my experience leaning more into full end-to-end AI workflows building Rust, it seems that
1) context has clearly won over RAG. There is no way back.
2) workflow is the next obvious evolution and gets you an extra mile
3) adversarial GAN-style training seems like a path forward to get from just-okay generated code to something close to a home run on the first try
4) generating a style guide from the entire code base and feeding that style guide, together with the task and context, into the LLM is your ticket to enterprise customers, because no matter how good your stuff might be, if the generated code doesn’t fit the mold, you are not part of the conversation. Conversely, if you deliver code in the same style and formatting and it actually works, well, price doesn’t matter much.
5) in terms of marketing to developers, I suggest starting by listening to their pain points working with existing AI tools. I don’t have a single one of the problems you’re trying to solve. I’m sitting on a massive Rust monorepo, and I’ve seen virtually every existing AI coding assistant fail one way or another. The one I have now works miracles half the time and only fails the other half. That is already a massive improvement over everything else I’ve tried over the past four years.
The point is, there is a massive need for coding assistance on complex systems, and for Codebuff to make a dime of difference, you have to differentiate from what’s out there by starting with the challenges engineers face today.
Yes, but did you try it? I think Codebuff is by far the easiest to use and may also be more effective in your large codebase than any comparable tool (e.g. Cursor Composer, Aider, Cline; not sure about Qodo) because it is better at finding the appropriate files.
Re: style guide. We encourage you to write up `knowledge.md` files, which are included in every prompt. You can specify styles or other guidelines to follow in your codebase. One motivating example: we wrote instructions on how to add an endpoint (edit these three files), and that made it do the right thing when asked to create an endpoint.
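To make that concrete, a hypothetical `knowledge.md` might look like the sketch below. The sections, rules, and file paths are invented for illustration; there is no required schema, it's just prose the model sees on every prompt:

```markdown
# knowledge.md

## Style
- Use `thiserror` for error types; no `unwrap()` outside tests.
- Keep handlers thin; business logic lives in `src/domain/`.

## How to add an endpoint
1. Define the handler in `src/api/handlers.rs`.
2. Register the route in `src/api/router.rs`.
3. Add an integration test under `tests/api/`.
```

Because the file rides along with every prompt, conventions like these constrain generation the same way a human style guide constrains a new hire.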
The recent blog post about reaching v1 was quite a sobering reflection.
It always amazes me when a startup takes on a really hard problem with initial excitement to change the world, just to end up posting a blog post saying that, well, they found out it's a hard problem and they'd better pivot to make some money. I've probably read dozens of those posts, and, as it seems, the trend is still going strong.
Have you explored building with Bazel? What you describe as a problem is roughly what Bazel solves: Polyglot complex builds from fully vendored deps.
Just pointing this out because I had my fair share of issues with Cargo and ultimately moved to Bazel, and a bit later to BuildBuddy as CI. Since then, my builds have been reliable and run a lot faster, and even things like cross-compilation on a cluster work flawlessly.
Obviously there is some complexity implied when moving to Bazel, but the bigger question is whether the capabilities of your current build solution keep up with the complexity of your requirements?
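As a taste of what a Rust target looks like under Bazel, here's a minimal sketch using rules_rust. The target names and the `@crates` external repository are illustrative assumptions (e.g. as wired up by rules_rust's crate_universe for vendored dependencies), not something your repo will have verbatim:

```starlark
# BUILD.bazel
load("@rules_rust//rust:defs.bzl", "rust_binary", "rust_test")

rust_binary(
    name = "server",
    srcs = ["src/main.rs"],
    # Hypothetical vendored dependency, e.g. via crate_universe.
    deps = ["@crates//:tokio"],
)

# Runs the crate's unit tests as a Bazel test target,
# so they participate in caching and remote execution.
rust_test(
    name = "server_test",
    crate = ":server",
)
```

Once targets are declared this way, Bazel's content-addressed caching and remote execution are what deliver the reliability and speed mentioned above.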
I'm not hugely excited about Bazel, or really any of the more complex build tools made either directly for, or in the image of, the large corporate monorepo tooling.
Right now we have decades of accumulated make stuff, with some orchestrating shell. It's good in parts, and not in others. What I would ideally like is some bespoke software that produces a big Ninja file to build literally everything. A colleague at Oxide spent at least a few hours poking at a generator that might eventually suit us, and while it's not anywhere near finished I thought it was a promising beginning I'd like us to pursue eventually!
(Not Josh, not involved with illumos, do work at Oxide)
For at least one project at Oxide we use buck2, which is conceptually in a similar place. I’d like to use it more but am still too new to wield it effectively. In general I would love to see more “here’s how to move to buck/bazel when you outgrow cargo” content.
I wrote all of those examples and contributed them back to Bazel because I've been there...
Personally, I prefer the Bazel ecosystem by a wide margin over buck2. On technology alone, buck2 is better, but as my requirements grew, I needed much more mature rule sets, such as rules_oci to build and publish container images without Docker, and buck2 simply doesn't have the ecosystem to support complex builds beyond a certain level. It may get there one day.
I fully agree with the ecosystem comments; I’m just a Buck fan because it’s in Rust and I like the “no built in rules” concept, but it’s true that it’s much younger and seemingly less widely used. Regardless I should spend some time with Bazel.
The parallel testing with dangling transactions isn't Diesel or Postgres specific. You can do the same with pure SQL and any relational DB that supports transactions.
For CI, BuildBuddy can spin up Docker on a remote execution host. Then you write a custom util that tests whether the DB container is already running and, if not, starts one; put that in a test, and then let Bazel execute all integration tests in parallel. For some weird reason, all tests have to be in one file per isolated remote execution host, so I created one per table.
Incremental builds that compile, test, build, and publish images usually complete in about one minute. That's thanks to the 80-core BuildBuddy cluster with a remote cache.
GitHub took about an hour back in April, when the repo was half the size.
There is a real gain in terms of developer velocity.
Agreed, my company tried buck2 first when evaluating a cargo replacement, but bazel is just so much more mature at this point that it ended up being the natural choice. Thanks for your examples, they helped us get started :)
VC-backed companies love the AGPL because it’s basically a poison pill that still makes them look good as open source. The entire blog post can be summarized as “we ticked all the boxes on paper, now pay us for looking good.” People, however, usually pay for good software, not good virtue signaling.
I actually agree with this in practice. OSS purists might argue that AGPL and non-compete source-available licenses are fundamentally different, with the former being OSI-approved, but in reality -- at least in business -- they're used to serve the same purpose: to give the author an unfair advantage. And that's totally fine -- I'm all for unfair advantages in business. But the distinction between these licenses is blurrier than the OSI would like to admit, yet they insist it's a crystal clear line. /rant
As an open source advocate, I'm fine with source-available licenses. They've been around forever!
What ticks me off is freeloading on the goodwill generated by open source, for instance, by calling your license "Apache License Version 2 with the Commons Clause" or by insisting that "source available" is actually "open source". In other words, what you're trying to do here. That goodwill doesn't belong to you. Don't try to steal it, and don't be surprised when those who are invested in open source push back hard when you do.
> The AGPL isn't used to uphold OSS values, it's used as a defense against competition.
It's only a defense against competitors who want to use it and not give back -- just like the original GPL. If you prefer the BSD ethos, that's fine, but just say "I disagree with the copyleft philosophy", not "AGPL doesn't uphold OSS values".
I think my point was more that the author is the only one who can legally make closed-source modifications, i.e. their open core business model, giving them an unfair advantage. Also, the FUD surrounding AGPL. I guess I'm trying to point out that there's an obvious reason every open source business uses AGPL... and it's not that they want competitors to contribute back.
If they accepted contributions without a CLA, then no they can't make closed-source modifications (without some major surgery to get rid of the code not owned by them). If they wrote all the code in the first place, then that's hardly an "unfair advantage".
The only way to accept contributions and then make closed-source modifications is with a CLA; in which case it's the CLA, not the AGPL that you're really complaining about.
ETA: OK, so what if a company starts out AGPL, never accepts any contributions, and then, once established, stops publishing new code as AGPL and takes everything proprietary? Isn't that just "open-washing", taking advantage of all the community goodwill and hype around open source?
I don't think so; consider four possible scenarios:
1. They keep everything proprietary from the beginning.
1a. They become established, making decent money, serving some customer needs. Everything is still proprietary
1b. They fail. Good luck talking their VCs at that point into open-sourcing their code (or even getting it into any kind of shape that anyone could use). All their customers are stuck without any options but to stop using the software.
2. They start by making things AGPL.
2a. They become established, making decent money; eventually they take the product closed-source, doing one final release. Their customers continue to be served, but everything is now proprietary.
2b. They fail. The code is already AGPL, so there's nothing any of their owners or creditors can do to claw it back.
Large companies that have come to depend on their software can take their code and continue to use it and develop it on their own if they want. If there's enough of the right kind of people, a community can form around the releases and the project can live on in a pure open-source form.
2a is better than 1a, because at least there was a time when things were AGPL; the AGPL code can still be forked off and maintained if there's a big enough community.
2b is way better than 1b. In fact, 2b can hopefully make 2a more likely, since it's lower risk for people to build their infrastructure on a start-up.
Yeah, I think you're spot on with the whole CLA thing. This is why I added the badly-emphasized caveat "in business", where in my experience CLAs are typical. Outside of startup-land, AGPL is a fine license. I just don't think it's used honestly in startup-land, that's all. We all know the real reason OSS startups use the AGPL: to push competitors and enterprises into purchasing a MAY-issue commercial license through FUD; yet we still praise them for being Open Source. Yay. But IMO, in startup-land, it feels like a non-compete masquerading as Open Source, even though I know it isn't.
I'd rather OSS startups be more honest and use something like Fair Source. Bonus is that everything would eventually be OSS, unlike the typical Open Core model.
Fair Source is worse than the AGPL, though; sure, it's "eventually open source", but what good is 2-year-old code to anyone? How do you add improvements or security fixes to the codebase without the developer claiming you didn't clean-room the implementation?
I think you're coming at this from the wrong angle, but the 2-year delay is really only applicable to users that want to compete, or in cases where the startup goes under or in a bad direction. For most users, the freedoms under Fair Source align pretty closely to Open Source, e.g. read, fork, modify, redistribute, etc. with the non-compete caveat. Users can absolutely use the latest version -- unless they're competing, but most users aren't competing and don't plan on competing.
The difference is that all users also eventually get the proprietary features, unlike an Open Core project under AGPL + commercial terms. I do think Fair Source is a better model than Open Core, at least in most cases, because of this alone. So I guess, would you rather: 1) never have the proprietary features, or 2) have 2-year old proprietary features? I know what I'd prefer, and from a simple continuity perspective, I know which is preferred by users.
Like I said, I'm not saying AGPL is bad. I just don't like how it's used in startup-land and I think there are better, more honest, options now.
The 2-year delay applies to all of the codebase in my experience, not just the proprietary features. Users potentially have to delay security fixes for 2 years to avoid copying non-OSS code.
Fair source is a poison pill masquerading as OSS-friendly, just like BSL and friends. It's not useful in practice, and I don't think there are any examples of folks successfully using/forking BSL/fair-source code that is now OSS. That's by design.
I think you're missing my main point: the only users who should need the OSS version are those competing, because FSS offers the same freedoms as OSS to users who aren't competing. I don't see how this is a poison pill, or how it's masquerading as anything malicious. I think it's pretty honest with regard to intent.
Re: forking FSS. Check out what Oxide is doing with CockroachDB -- there's your BUSL example.
Competitors likely have the resources to figure out how to be compliant (with or without giving back), so that's not really it. And as far as I understand the startup situation, most struggle to attract paying customers at all. If you are in a situation that someone is competing against you using your own codebase, you have already gotten very, very far.
I believe the usual AGPL idea is that it generates sufficient FUD among regular customers that they don't want to run the free (AGPL) version in production. Instead, they feel compelled to cut a separate, commercial licensing deal. A project/product is likely to follow this model if the nominally AGPLed project has a contributor licensing agreement with an asymmetric copyright grant (i.e., contributions are under a very permissive license, but you only get the aggregate of all contributions under the AGPL).
If investors are looking to put money into a company, then when they do technical due diligence and bring in a source code auditing firm like Synopsys Black Duck, any AGPL code you're using is so problematic for them that it can be a deal breaker. At a minimum, it's such a major sticking point that it can be one of the most significant things holding up a transaction while you try to explain why it isn't as problematic as they think.
Having been through that process a couple of times, I won't touch the AGPL because it's such a PITA: your poison pill.
On the flip side, if they have invested or are investing in you and you've made some aspect of your solution open source under the AGPL, they know any competitor using it is going to have trouble getting VC investment (see the point above).
It's the free users who want open source virtue signaling. Then hopefully you convert some of them to paying customers because the software is so good.
> we have already made a compromise in not open-sourcing the whole codebase, so I thought it would be fair to pick the "freest" license of them all.
I died laughing at this comment later in the article. I still don’t know what his product is, but with such a broken misconception of free and open source, I really don’t want to know what he’s trying to sell.
In practice there is because the copyright holder will retain the exclusive rights (via CLA or else) to distribute the product under preferable and AGPL incompatible terms. This is not an “everybody is equal” situation.
Bill Gates from the 1990's called, he wants his FUD back.
To be more specific: What arguments can be used to show that the AGPL is a "poison pill" in the SaaS space, which couldn't have been used by Microsoft back in the 90's and early 2000's to show that GPL was a "poison pill" in the distributed software space?
There's pretty widespread agreement that the GPL doesn't "infect" beyond the same process, but there's no such understanding about AGPL. COSS companies are exploiting that ambiguity to say "AGPL infects everything, pay us or die, and if you disagree we may sue you and we may win". And 90% of lawyers say "don't take the chance; just pay them".
Microsoft was consistently and openly opposed to open source back in the day. Now we have startups that are simultaneously claiming to be open source while using FUD to advance an essentially non-commercial interpretation of open source. It's not the same situation.
https://neoexogenesis.com/posts/rust-windsurf-transformation...