> setting the parking brake was "harmful" to the car somehow and not to do it
And this is exactly how people end up with a parking brake that's rusted in place and useless, which they'll discover at the worst possible time. Surreal to hear that's being taught to new drivers as a good practice.
Is facil.io maintained? Last commit was 2 years ago, which is a bit concerning for something written in C that handles a bunch of security-sensitive things including a from-scratch JSON parser.
What a plot twist that the appropriate mental model for LLMs is one with a near-fatal flaw that's neatly solved by SourceGraph's product.
Edit: Not trying to be dismissive, this is actually giving you a bad mental model because it understates the capabilities of LLMs for the sake of selling their product. The way they use the LLM in their examples is completely uninteresting and adds no actual information beyond what was just fed to it. Repeating back the question would give the same usefulness! Surely they can come up with a better sales pitch for why you need "Cody" than that you can't tell what parameter names in a function call mean.
I think the arc of the story does make sense for most, though. LLMs are fantastic if you give them enough data/context with which to generate something useful for user input. In this case, that context is your codebase. It can be just about anything, though, which is why they're so useful across industries.
It makes more sense when you consider that many of the journalists who have gone into newsletters are there because they felt that the need to appeal to "average readers" and build a broad customer pool (edit: often at the insistence of their evil bosses) was restricting their genius and that they'd have a better time narrow-casting to a smaller (but more consistently monetizable) fanbase.
I served on a jury fairly recently, and it was a much more mundane thing than anything you'd see in a Hollywood film. Whole thing took 10 hours start to finish. Got picked from the pool, had the case presented by the prosecutor, heard witness testimony, the defense attorney made his case mostly by questioning the prosecution's witnesses, closing statements, deliberated very briefly (it was not a complex case) and delivered the verdict and went home. We were paid about $40 for the day plus travel mileage, and lunch was ordered off a menu and delivered to our deliberation room, presumably from the courthouse cafeteria. The food was actually pretty good.
The high-profile cases like SBF's are a tiny fraction of jury trials in the US, most trials have no significant risk of jury tampering and the jury is not sequestered, meaning that if we had needed more time for deliberation, we would have just gone home for the evening and shown up at the courthouse the next day. We did have our phones collected at the start of the day and returned at the end, which seemed like a reasonable precaution against both "independent research" and distractions/interruptions, and would have also made jury tampering somewhat more difficult if anyone was so inclined.
If you were doing that, don't you think it'd be handy to have a record of the amount to deposit? The random amount that's generated isn't even stored anywhere, it's just added directly to the total and written back to the database. What are you going to do, have another daily task that tops off the actual insurance fund to whatever the website says the total should be? Why would you treat a display value meant for the website as the source of truth about the correct value of an actual account?
The only reason to do it this way is if there's no intent for the number to correspond to reality at all.
To be fair, I do think they do that on line 27. They have a table of deltas which they then materialize manually in a different table (which presumably only has one row?). And, amusingly, link them with foreign keys. ORMs make for some weird schemas; this looks like how I might do it in MongoDB.
But to your point: if this code is written honestly, it's pretty weird that we don't see any code to post a transaction to the blockchain, or to verify that the funds are available to be allocated to the insurance fund. You would think that would happen before writing the new total to the database.
If you look closely, you may notice that the code that "adds to the insurance fund" doesn't _take_ that money from anywhere. It's just retrieving a number from the database, adding a value to it out of thin air and writing it back. This number was then presented to users as representing the balance of the insurance fund protecting their assets.
There was an actual insurance fund and it had a tiny fraction (5% or less) of what this number in the database claimed. The random numbers generated by this code did not represent actual contributions to said insurance fund. They were not, in fact, taking a fraction of their fee revenue and putting it into an insurance fund.
They were not doing anything even remotely reasonable; this was a flat-out lie.
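The pattern being described above can be sketched in a few lines. This is a hypothetical reconstruction, not the actual code: the names (`db`, `daily_insurance_fund_update`) and the random range are invented for illustration. The point is what's absent, not what's present.

```python
import random

def daily_insurance_fund_update(db):
    """Sketch of the anti-pattern: a 'contribution' that is just a random
    display value added to a stored total, with no real funds moved."""
    row = db["insurance_fund"]              # the single row holding the total
    increment = random.uniform(1000, 5000)  # number conjured out of thin air
    row["total"] += increment               # added directly to the total...
    db["insurance_fund"] = row              # ...and written straight back
    # Note what's missing: no debit from any fee-revenue account, no
    # blockchain transaction, no record of the increment itself anywhere.
    return row["total"]
```

A legitimate version would record each contribution as its own transaction, debit it from a real account, and derive the displayed total from those records rather than mutating a standalone number.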
Mostly that right now is a very interesting time for Grusch to claim that US government rules on document classification are inviolable, even when the stakes would be proving human contact with extraterrestrials.
Anyway, if an interstellar civilization wanted to make contact with Earth, the USA couldn't stop them. And in the highly improbable case that the USA found a way to keep its population from knowing, it's still only one country; they'd have to convince every other country in the world to do the same.
Let's talk about congressmen insider trading instead.
If all major governments keep it secret, the conclusion would be that the phenomenon wants it that way. The US wouldn't be in charge. But apparently something changed.