I think that would work well. A high-quality hash is designed such that changing a single input bit will flip approximately half of the output bits. So hashing both values will make it very easy for a human to spot whether the input values were different.
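Something like this rough sketch (assuming Node.js and its built-in crypto module; SHA-256 just stands in for whatever hash you'd actually use):

```ts
// Minimal sketch of the avalanche effect: near-identical inputs produce
// digests that look nothing alike, so a human comparison is trivial.
import { createHash } from "crypto";

const digest = (s: string) => createHash("sha256").update(s).digest("hex");

// The two inputs differ by a single character, but the hex digests share
// almost nothing visually, so a mismatch jumps out at a glance.
console.log(digest("deploy-config-v1"));
console.log(digest("deploy-config-v2"));
```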
> Just curious, but in your opinion, how does the composition API improve on React hooks?
I'm a fan of how Vue's composition API executes exactly once during a component lifetime (during `setup()`), whereas React hooks get set up again on every render. I find Vue's approach easier to mentally model, and thus easier to write correct code (especially when doing something complicated where state changes / effect executions trigger each other). Since `setup()` is only run once, I can also do things like store non-reactive state in closure variables without wrapping those variables in hooks.
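A rough sketch of what I mean, assuming Vue 3 (the field names are made up):

```ts
// setup() runs exactly once per component instance, so a plain closure
// variable survives for the component's lifetime without any wrapper.
import { defineComponent, ref } from "vue";

export default defineComponent({
  setup() {
    // Reactive state: changes trigger re-render.
    const count = ref(0);

    // Plain closure variable: non-reactive bookkeeping that persists across
    // renders. The React equivalent would need useRef (or similar) because
    // the function body re-runs on every render.
    const clickTimestamps: number[] = [];

    const increment = () => {
      clickTimestamps.push(Date.now());
      count.value++;
    };

    return { count, increment };
  },
});
```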
I think this puts it in better perspective; if I understand what you're saying correctly, then setup essentially gives you more fine-grained control over Vue's reactivity system, as opposed to it just doing it all for you when a component is created, which lets you more easily pull out & consolidate reusable logic as needed.
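For example, a hypothetical composable like this (names made up, assuming Vue 3) is just a function any component can call from its `setup()`:

```ts
// Reusable logic pulled out into a "composable": state + lifecycle wiring
// packaged as an ordinary function.
import { ref, onMounted, onUnmounted } from "vue";

export function useMousePosition() {
  const x = ref(0);
  const y = ref(0);

  const update = (e: MouseEvent) => {
    x.value = e.clientX;
    y.value = e.clientY;
  };

  // Lifecycle hooks registered here attach to whichever component calls this.
  onMounted(() => window.addEventListener("mousemove", update));
  onUnmounted(() => window.removeEventListener("mousemove", update));

  return { x, y };
}

// In a component's setup(): const { x, y } = useMousePosition();
```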
It is kinda nice making reactive state really explicit like that; I like Svelte's approach for similar reasons.
> There's another approach to making CRDTs fast, which I haven't mentioned here at all and that is pruning.
Pruning is a key thing I appreciate about Yjs, because it's not just a performance optimization - it's a privacy feature. Users often expect that if they delete something from a document, it's gone unless they have explicitly turned on document revisioning. A CRDT without pruning leaves every accidental paste or poorly phrased remark in the permanent record.
Some lefties learn to write by either hooking their hand above or angling it below the line being written, sometimes complemented by slanting the page toward or away from their writing arm.
I don't find that pricing objectionable on its own, but I'm wary of shopping with a vendor that advertises price as its main selling point yet buries such a potentially costly pricing detail.
Before everything shut down last year, my local theaters played a showing of a classic film each month, often Hitchcock. The few I went to had plenty of attendees.
Another local theater took the lockdown as an opportunity to do outdoor socially-distanced showings of previous blockbusters, like The Dark Knight. Plenty of attendees there, too.
We have a neat theater in the town I live in called "Brewvies" (it's a pub/theater) that on weekends often shows classic old cinema matinees for free. There's also a group that shows "classic" movies in the parks for free, too. Much fun!
Sharing from somewhere totally different: NYC has a number of theaters that show (mostly, or about half the time) not-new kids' films, art films, and cult classics like Miyazaki or Blaxploitation films.
These theaters are typically small businesses owned by locals who have been in the business for decades. But there are also some US chains like Alamo Drafthouse that do this too.
On a slight tangent, in the suburbs of Lancaster, PA, the chain theaters ran free re-runs of children's movies each Wednesday morning over the summer.
Another option in this space is Tiller (https://www.tillerhq.com/). They seem well-established, and offer some spreadsheet templates for plug-and-play solutions to some common budgeting scenarios.
If you don't care about spreadsheets specifically -- if you're just looking for scriptable access to your financials -- Lunch Money (https://lunchmoney.app) has a public API. They'll also be opening the beta of rollover budgeting any day now, which has me excited!
I interpreted it as brainstorming ways to actively sabotage success: "solutions" that appear superficially plausible, but undermine the outcome in real-world situations.
So for the dust filter, maybe selecting a material that's only rated for high temperatures in short bursts. Or one that needs frequent, labor-intensive replacements in that dangerous environment. Or testing that your filter survives high temperatures, and testing the filtration efficacy, but forgetting to test the efficacy at high temperatures.
The GP's example doesn't involve invalid datetimes. The datetimes aren't out of bounds or invalid in any way that a check constraint would detect. They've just become factually incorrect ("bad data"), because they are derived data that wasn't updated when the derivation rules changed (i.e., regulatory changes).
If you're storing future datetimes that semantically represent wall-clock time, you need to store the local time plus the full time zone (such as America/New_York) so that your program does the right thing in response to any common regulatory changes that happen after you store the value. Storing the time zone abbreviation (e.g., EST) is inadvisable, as computers sometimes care whether you asked for EST vs EDT. Storing the time offset (e.g., -05:00) is incorrect, as it has the same pitfalls as storing UTC - you're precomputing the location's expected time offset at storage time, and your data won't automatically be corrected if time regulations change.
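A rough sketch of what that looks like in practice, assuming the TC39 Temporal API (e.g. via the @js-temporal/polyfill package); the field names are illustrative:

```ts
// Persist the wall-clock time as the user stated it, plus the full IANA zone.
// Resolve it to an instant only when needed, under whatever rules apply then.
import { Temporal } from "@js-temporal/polyfill";

// What gets stored:
const stored = {
  localDateTime: "2030-03-15T09:00", // "9am" as the user meant it
  timeZone: "America/New_York",      // full zone, not "EST" or "-05:00"
};

// What gets computed at read time (scheduling, comparison, display):
const zoned = Temporal.PlainDateTime
  .from(stored.localDateTime)
  .toZonedDateTime(stored.timeZone);

console.log(zoned.toInstant().toString()); // UTC instant under current zone rules
console.log(zoned.offset);                 // e.g. "-04:00"; recomputed, not baked in
```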
If you're storing historical timestamps, UTC is fine because you can safely convert it to whatever time zone you want to display, knowing that changes to time zone / DST regulations tend not to affect the past.
> If you're storing future datetimes that semantically represent wall-clock time, you need to store the local time plus the full time zone (such as America/New_York) so that your program does the right thing in response to any common regulatory changes that happen after you store the value.
At this point in the process, first normal form flies out the window. Trying to generalize too much can lead you down some weird garden paths. If it looks like you need a function to validate a prospective column value, then you probably need to model the value as a relation corresponding to the function parameters. Then you can make it into a foreign key constraint and get on with your project.
I truly appreciate the efforts of those who try to expand datetime value representations to capture a wider variety of denotational semantics. But with a relational model it may be better to delegate to a simpler abstraction sufficient to the specific case.
There is a material difference to users between a single attacker having (and possibly ignoring) a data dump, and that attacker publishing the dump publicly or selling it to someone who plans to exploit its contents.
The attacker has offered to not publish if they are paid. Their word probably isn't worth much, but $1,000 seems like an affordable sum for a business to gamble on them being honest about it. And if Newsblur doesn't fix its security problems, it'll be targeted again either way.
As someone who has a decade of data in Newsblur, if there's any chance that an affordable ransom will keep my data from spreading further, I want Samuel to take it.
The fact that you believe paying the ransom is even an option shows that you really aren't even qualified to be discussing this topic. People with your mindset are a big part of the reason that ransomware is still going strong. The other big part is people who don't run their systems correctly in the first place.
Giving them $1,000 confirms the dump's value, allowing them to list it at a higher price than the usual $10-50 spammers would each pay for the email addresses alone.