This license only applies to the compiled/packaged extension delivered via the extension marketplace. The source code for this VS Code extension is available on GitHub under the permissive MIT License (as is the .NET runtime).
If you are looking for evidence that .NET Core is maliciously pretending to be open source to move to a language reliant on a JVM, I don't believe this would qualify.
> But as a layman to cryptography, I don't get what the significant difference is between this finding and Levin's. Can anyone explain this to someone with an undergraduate-level mathematical background?
Here's a massive simplification. Let's say you tell me "Here's a conjecture: since multiplying 2x3 is hard, multiplying any combination of two numbers from 1-5 is hard. I can't prove that it is hard, but if all of them are hard, then my cryptography works!"
Levin's complete-function approach: "Here's a function that's at least as hard as any given combination of two numbers: (1x2) + (1x3) + (1x4) + (1x5) + (2x3) + (2x4)... [and so on]. If that turns out to be easy, then your conjecture is wrong!"
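To make that concrete, here's a toy sketch (my own, not from Levin or the article) of the "bundle every instance into one function" idea: the combined function can only be easy if every individual multiplication is easy, so if it turns out to be easy, the "everything is hard" conjecture falls.

```python
# Toy illustration of the "complete function" idea from the analogy above:
# one function that bundles every instance of the problem, so it can only
# be easy to compute if every individual instance is easy.
from itertools import combinations

def complete_function(n=5):
    """Sum of x*y over every pair of distinct numbers from 1..n."""
    return sum(x * y for x, y in combinations(range(1, n + 1), 2))

print(complete_function())  # if this were somehow easy, the "every pair is hard"
                            # conjecture would be refuted
```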
The article states that there is a novel and surprising connection: "Proving the difficulty of multiplying any random combination of numbers under 5 is equivalent to solving this seemingly unrelated (and as yet unsolved) problem in geometry."
The two approaches are pretty different and have very different ramifications. While in a lot of cases, the approach of "here's an equivalent problem" doesn't actually help, in some cases it turns out that the equivalent problem is easier to solve. Or that the connection between the two opens up completely new approaches to solutions - or even applications of the math involved. Sometimes just the proof itself causes new connections! Often it takes considerable time before the impacts show themselves.
It would be hand-waving if we didn't actually do it :)
Calvin uses a locking scheme with deadlock avoidance. PVW uses an MVCC scheme with no locking (and therefore no deadlock). Fauna uses an OCC scheme with no deadlock and deterministic validation.
Not trying to say you can't do it - I'm sure I'm just not informed enough.
However, I don't see how MVCC could fix a multi-worker issue that would cause category (1) aborts in your scenario.
With MVCC, if another worker concurrently modifies a record (say 'Y'), I continue to read the pre-modified value once I've read it. So my value for Y may be stale between the time I check that it's greater than 0 and the time I set X to 42. My constraint check was invalid.
At this point you either have a transaction that can't commit despite your guarantee that it can (because my conditional passed!), or an 'eventual consistency' model where the consistency is reconciled outside the scope of the original change (and in that model you wouldn't use 2PC anyway).
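Here's a minimal sketch of the anomaly I mean (hypothetical names, a plain dict standing in for a real MVCC store): each transaction reads from the snapshot it took at start, so my check on Y can pass even though another worker has already made it false.

```python
# Hypothetical illustration of the stale-snapshot problem described above.
store = {"X": 0, "Y": 1}

def begin():
    return dict(store)      # snapshot taken at transaction start

def commit(writes):
    store.update(writes)

my_snapshot = begin()       # my transaction: set X = 42 only if Y > 0
commit({"Y": 0})            # another worker sets Y to 0 and commits first

if my_snapshot["Y"] > 0:    # my check passes against the stale snapshot...
    commit({"X": 42})       # ...so I commit, and the invariant is broken

print(store)                # {'X': 42, 'Y': 0}
```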
The assumption is that the data partitions are disjoint. Each worker controls its own data and therefore controls what value the other workers see. So the worker is responsible for making sure the other workers read the correct version.
My assumption reading the article was that each transaction is assigned a unique id. Then a worker could ask another worker for "y for/as of transaction 42".
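Something like this toy versioned store is what I had in mind (the names and structure are my guess, not anything from the article): each worker keeps the committed versions of its own records keyed by transaction id, so another worker can ask for the value as of a given transaction.

```python
import bisect

class Partition:
    """Toy per-worker store: every committed version of a key, keyed by txn id."""

    def __init__(self):
        self.versions = {}  # key -> list of (txn_id, value), appended in txn_id order

    def write(self, txn_id, key, value):
        self.versions.setdefault(key, []).append((txn_id, value))

    def read_as_of(self, txn_id, key):
        history = self.versions.get(key, [])
        # latest version written at or before the requested transaction id
        idx = bisect.bisect_right(history, (txn_id, float("inf"))) - 1
        return history[idx][1] if idx >= 0 else None

worker_b = Partition()
worker_b.write(40, "y", 7)
worker_b.write(45, "y", 0)

print(worker_b.read_as_of(42, "y"))  # 7 -- the value visible "as of transaction 42"
```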
Exactly - in my opinion, there is no "identity theft". There is criminal fraud, of which the banks are the victim. However, instead of dealing with that fraud, they just pass the costs on to an unrelated individual and then shrug and say "you deal with it".
Google does something much like this - but without regulation or a clear appeals process.
We are a small shop with 4 repositories and 36 users (over half the company). About 10 of those users actually contribute code; the others monitor issues, pull code just to run tests or create distributions, or are bots.
If we accidentally hit the upgrade button (we won't), our cost would go from $300/year to $3,648/year. Since only a small number of projects are on GitHub - we use TFS for our main project and GitHub for tools - it's just a non-starter.
Heck, 5 "bot accounts" is $540/year to support CI builds and slack notifications. Yikes! More than we pay now.
It seems like the only shops that would save money are the little in-house development departments with 5 people and tons of projects. However, even they would probably forgo issue tracking in GitHub because of the extra user cost.
I would be very interested to see real stats on how many orgs actually "upgrade" to this new, more expensive pricing model vs. how many stay with the saner model. The real losers are orgs that can't sign up under the old model. The real winners will be the GitHub alternatives (GitLab, Bitbucket, etc.) that can use this as an opportunity to grow their user base.
Hopefully, GitHub can adopt a "non-human user" account concept similar to Slack's. Those accounts are free to add and don't log into the normal applications.
Of course, do you really need full accounts for those purposes? Their APIs are really extensive and should give you access to set up things like CI and notifications.
Here's my problem with this article: Return to sanity by adopting what?
I work on several small-ish projects, and due to the leads coming and going, there are a smattering of source control solutions. On a weekly basis, I use SVN, TFS, and Git about equally.
However, the workflow supported by Git is by far the best for me: I can commit locally as much as I want, rebase the commits or just merge them with work other people are doing, bisect if I broke something, and even branch locally to experiment or when I get interrupted by a question or another task.
Neither TFS nor SVN supports this at all. With both of them, I can't really check in until I'm completely done and sure I won't break the build or tests. I end up zipping my directory or fighting with patches/shelvesets that don't do what I want.
Now, does the way I want to work require a DVCS? I don't know - perhaps it doesn't in a theoretical sense. However, a DVCS is the only kind of tool that actually supports it right now.
So sure, we all push to the same repository and it could be a centralized system. But what would actually work? What can I switch to? I'm not abandoning Git for TFS or SVN, that's for sure. Nor Perforce, which was also painful.
Yes, you convinced me I don't need the "D" in DVCS. So make a centralized VCS that supports local branching, committing, sane merging, and diffing, and show it to me! But complaining that I'm not using one of the features of my DVCS has no bearing on whether I should abandon it.
No. Firemen have an incentive to prevent fires. Home fires don't increase their revenue stream and in fact put them at risk of injury or death.
Police are not mandated to prevent crimes, and have no external incentives to do so (I say external because some officers want to reduce crime out of moral/ethical belief). In fact, looking at civil forfeiture laws and federal programs that pay officers directly for certain arrests, it can be argued that preventing crime is actually against their best interests, both organizationally and individually.
However, handing out a tool that always indicates suspicious activity and allows them to invade privacy... well, that fits exactly with the behavior that the laws and rules surrounding them have encouraged.
I agree that the SRP is certainly a subjective rather than objective principle, and possibly general guidance that can and should be broken in specific circumstances. The article points that out, but rather than trying to offer prescriptive guidance that would make it more objective in specific scenarios, the author seems to believe that its subjective nature is too flawed to fix.
What's the issue?
> A good, valid principle must be clear and fundamentally objective. It should be a way of sorting solutions, or to compare solutions.
Okay, I'm listening. What is your alternative?
> It's not a clear-cut principle: it does not tell you how to code. It is purposefully not prescriptive. Coding is hard and principles should not take the place of thinking.
And.... we're right back to subjective and general again. Set up a straw man only to knock it down with an identical straw man.
Of course, reducing coupling and raising cohesion makes the class responsible for less and less... So are we back at the author's interpretation of the SRP? Seems like it to me.
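To make that concrete, here's a toy example (mine, not the article's) of the kind of split the usual SRP reading pushes you toward: pulling persistence out of a report class so each class is responsible for less and has only one reason to change.

```python
class ReportAndStorage:
    """Two reasons to change (report content and storage), per the usual SRP reading."""
    def __init__(self, lines):
        self.lines = lines

    def render(self):
        return "\n".join(self.lines)

    def save(self, path):
        with open(path, "w") as f:
            f.write(self.render())


class Report:
    """One responsibility: produce the report text."""
    def __init__(self, lines):
        self.lines = lines

    def render(self):
        return "\n".join(self.lines)


class ReportWriter:
    """One responsibility: persist rendered text."""
    def save(self, text, path):
        with open(path, "w") as f:
            f.write(text)


ReportWriter().save(Report(["total: 42"]).render(), "report.txt")
```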
A technical interview built on whiteboard coding, data structure gotchas, or Mt. Fuji questions is a bad interview. It sucks that most technical interviews are bad, but that means we should fix them, not remove them.
Our team "technical interviews" by having a technology discussion with the applicant. One of the first lines of question is figuring out what they are most familiar with so we can discuss that particular thing, area, library, or whatever. If a person can't discuss what they are most familiar with in the high-pressure interview, I'm not sure they can discuss something they just learned about in a team design meeting either. It's also a great way for the candidate to figure out if he wants to work with us - something that is just as important as the reverse.
Quit making technical interviews a quiz show. Quit checking off boxes on your form. Quit with BAD technical interviews. But don't remove them entirely - that's just as dumb.