My measurement was imprecise and that probably throws things off:
python -m timeit '64135289477071580278790190170577389084825014742943447208116859632024532344630238623598752668347708737661925585694639798853367*33372027594978156556226010605355114227940760344767554666784520987023841729210037080257448673296881877565718986258036932062711 == 2140324650240744961264423072839333563008614715144755017797754920881418023447140136643345519095804679610992851872470914587687396261921557363047454770520805119056493106687691590019759405693457452230589325976697471681738069364894699871578494975937497937'
500000 loops, best of 5: 496 nsec per loop
So roughly 2 verifications per core-microsecond, for a verify-to-factor ratio of ~170 quadrillion?
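For anyone checking my arithmetic, a quick back-of-the-envelope. The 496 ns comes from the timeit run above; the ~2,700 core-years for the published RSA-250 factorization effort is my assumption:

verify_seconds = 496e-9                 # one multiplication check, per timeit above
factor_seconds = 2700 * 365.25 * 86400  # assumed ~2,700 core-years of factoring work, in core-seconds
print(f"{factor_seconds / verify_seconds:.2e}")  # ~1.7e17, i.e. ~170 quadrillion verifications in the same core-time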
I maintain an open-source project [1] that uses graphs to model data. I wanted to make my project as accessible as possible, so Graphviz was perfect since it's dead-simple to install and use on all major OS platforms.
I think what makes this hard for folks is tracking what the expected form of the data is at each step of its lifecycle, especially for people working in new and unfamiliar codebases or splitting focus across multiple projects.
There are some frameworks that try to use types to solve the problem (rough sketch after the comment below). Alternatively, the developers could throw in a comment that looks something like:
// client == submits raw data ==> web_server == inserts raw data (param. sql stmt) ==> db_server ==> returns query with raw data ==> our_function == returns html-escaped data ==> client
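Here's a minimal sketch of the type-based idea in Python, using hypothetical RawStr/HtmlStr wrappers; real frameworks differ, this is only illustrative:

from typing import NewType
import html

RawStr = NewType("RawStr", str)    # untrusted data straight from the client or DB
HtmlStr = NewType("HtmlStr", str)  # data that has already been HTML-escaped

def escape_for_html(value: RawStr) -> HtmlStr:
    # The one sanctioned way to turn raw data into HTML-safe data.
    return HtmlStr(html.escape(value))

def render_comment(comment: HtmlStr) -> str:
    # A type checker flags callers that pass a RawStr here directly.
    return f"<p>{comment}</p>"

raw = RawStr("<script>alert('hi')</script>")
print(render_comment(escape_for_html(raw)))

The point isn't the wrappers themselves, it's that the expected form of the data at each step is written down where the tooling can see it instead of living in a comment.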
Depending on what your job role is, this last weekend probably sucked. But IMO it's also pretty typical that a few of these sorts of events happen every year.
Not a ton of benefit here, because drawing attention before the patch is fully ready means more eyeballs looking. More eyeballs means someone inevitably finds the flaw and exploits or publishes it before it can be mitigated.
I only imported 10 dependencies, but those 10 dependencies each had 10 dependencies, which each had 10 dependencies, which each had 10 dependencies, and all of a sudden I'm at 10k dependencies again...
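The math of that fan-out, with purely illustrative numbers:

# Four levels of ~10 dependencies each, as described above.
total = sum(10 ** level for level in range(1, 5))
print(total)  # 11,110 packages in the tree, i.e. roughly 10k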
The transitive dependency chain should be part of your evaluation of a library. Frameworks are special cases, for sure. But if you’re adding a dependency and it adds 10,000 new entries to your lock file, that should be taken into consideration during your library selection process. Likewise, when upgrading dependencies, you should watch how much of the world gets pulled in.
That said, I don’t know what the answer is for JS. There are too many dependency cycles that make auditing upgrades intractable. If you’re not constantly upgrading libraries, you’ll be unable to add a new one, because it probably relies on a newer version of something you already have. In most other ecosystems, upgrading can be a more deliberate activity. I tried to audit NPM module upgrades and it’s next to impossible if you’re using something like Create React App. The last time I tried Create React App, yarn audit reported ~5,000 security issues on a freshly created app. Many were duplicates due to the same module being depended on multiple times, but it’s still problematic.
I took a networks class during college, and there was a homework question from the textbook about a scenario like this. It had you compare transferring a large amount of data over the Internet versus loading it onto a disk and driving it some physical distance to load it onto the other computer. The answer depended on the available bandwidth versus the distance you'd have to drive.
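A toy version of that comparison; the link speed, data size, and driving figures here are all made-up assumptions for illustration:

data_tb = 100                    # terabytes to move
link_gbps = 1                    # sustained network throughput
network_hours = data_tb * 8e12 / (link_gbps * 1e9) / 3600

drive_km = 500
drive_kmh = 80
load_unload_hours = 4            # copying onto and off the disks
sneakernet_hours = drive_km / drive_kmh + load_unload_hours

print(f"network: {network_hours:.0f} h, sneakernet: {sneakernet_hours:.1f} h")
# With these made-up numbers driving wins easily; a much faster link or a
# smaller dataset flips the answer.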
You're welcome! It's currently aware of Condition (and other) constraints:
> Note: actions are always grouped by similar principals, resources, conditions, etc. If two statements have different conditions, say, they are processed separately.
That is, if a rule has constraints, it's grouped with other rules that have the exact same constraints. As of today, it doesn't do anything to smartly work across the various groups. Deny statements work the same way: they're grouped with other Deny statements.
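For a concrete picture, here's a minimal sketch of that grouping behavior (my own illustration, not the tool's actual code):

from collections import defaultdict
import json

def group_statements(statements):
    # Bucket actions by the exact (Effect, Principal, Resource, Condition) combination;
    # statements whose conditions differ end up in separate buckets, and Deny
    # statements only ever share a bucket with other Deny statements.
    groups = defaultdict(list)
    for stmt in statements:
        key = (
            stmt.get("Effect"),
            json.dumps(stmt.get("Principal"), sort_keys=True),
            json.dumps(stmt.get("Resource"), sort_keys=True),
            json.dumps(stmt.get("Condition"), sort_keys=True),
        )
        groups[key].extend(stmt.get("Action", []))  # Action assumed to be a list here
    return groups

Nothing clever happens across buckets; each group is analyzed on its own, which is what "processed separately" means above.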