Unless I'm missing something, this shouldn't be an issue: the lexer could emit "if" as an IF token, and the parser could treat tags as STRING || IF || (other keywords…)
That seems like it'd get really awkward pretty quickly. "if" isn't unique in this regard; there are about a hundred shell builtins, and all of them can be used as an argument to a command. (For example, "echo then complete command while true history" is a valid shell command consisting entirely of names of builtins and keywords, and the only word actually acting as a command is the leading "echo".)
The problem lies with shell's extensive use of barewords. If you could eliminate the requirement that any bareword be treated as a string, parsing shell code would become much simpler... but also few people would want to use it, because nobody wants to quote every single word of every command (think "ls" "-la" "/tmp") in their interactive shell.
- A one-liner above the video explaining what you are doing would be helpful,
- and then "If you don't know what mutation testing is, you must be living under a rock! " brings people away from your repo faster than you can look the other side.
Of course you can decide not to serve those visitors. But if you want to capture as much of your target audience as possible, you might want to consider which users can't see your website - and the OP raises a number of non-obvious ways in which JS adoption might affect that.
> If xz was statically linked in some way, or just used as an executable to compress something (like the kernel), the same problems exist and no dynamic linking would need to be involved.
Even more so: all binaries that dynamically link xz can be fixed by installing a patched library version. For statically linked binaries, not so much: each individual binary would have to be rebuilt and relinked. Good luck with that.
In exchange, each binary can be audited as a final product on its own merits, rather than leaving the final symbols-in-memory open to all kinds of dubious manipulation.
Not sure if this is what the above comment means by "atomic", but a shortcoming of Postgres' JSON support is that it will have to rewrite an entire JSON object every time a part of it gets updated, no matter how many keys the update really affected. E.g. if I update an integer in a 100MB JSON object, Postgres will write ~100MB (plus WAL, TOAST overhead, etc.), not just a few bytes. I imagine this can be a no-go for certain use cases.
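To make that concrete, a minimal sketch with node-postgres (the docs table, body jsonb column, and counter key are invented; connection settings come from the usual PG* environment variables):

    import { Client } from "pg";

    const db = new Client();
    await db.connect();

    // This changes a single key, but Postgres still writes a brand-new copy of
    // the whole jsonb value (new row version + TOAST + WAL), not just the few
    // bytes that actually differ.
    await db.query(
      "UPDATE docs SET body = jsonb_set(body, '{counter}', to_jsonb($1::int)) WHERE id = $2",
      [42, 1]
    );

    await db.end();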
It drives me batty to see people store 100MB JSON objects with a predictable internal structure as single records in an RDB rather than destructuring them and storing the fields as columns of a single record. Like, yes, you can design it the worst possible way like that, but why? But I see it all the time.
Actually, that's the whole point of RDBs: you can alter your data model (in most cases) with a simple DDL+DML query. It is with NoSQL that you have to manually download all the affected data from the DB, run the transformation with consistency checks, and upload it back. Or, alternatively, you have to write your business logic so that it can work with (or transform on demand) all the different versions of data objects, which to my taste is an even more nightmarish scenario.
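For instance, a relational migration of that kind can be one DDL statement plus one DML backfill; a sketch with node-postgres, with the orders table and its columns invented for illustration:

    import { Client } from "pg";

    const db = new Client();
    await db.connect();

    // DDL: extend the model.
    await db.query("ALTER TABLE orders ADD COLUMN total_cents bigint");
    // DML: backfill the new column from existing data.
    await db.query("UPDATE orders SET total_cents = round(total * 100)::bigint");

    await db.end();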
The benefits of going schemaless in the early stages of development are highly suspect in my experience. The time that one might save in data modeling and migrations comes out from the other end with shittier code that’s harder to reason about.
My perspective is that using NoSQL does not save time on data modeling and migrations. Moreover, one has to pay with increased time for these activities, because:
(a) in most cases, data has to follow some model in order to be processable anyway; the question is whether we formally document and enforce it in relational storage, or leave it to external means (which we have to implement ourselves) in order to benefit from some specifically optimized non-relational storage;
(b) NoSQL DBs return data (almost) as stored; one cannot rearrange results anywhere near as freely as with SQL queries, so much more careful design is required (effectively, one has to design not only the schema but also the appropriate denormalization of it);
(c) migrations are manual and painful, so one had better arrive at the right design up front rather than iterate on it.
That is, of course, if one doesn't want to deal with piles of shitty code and even more shitty data.
It's not an issue with size. It's an issue with race conditions. With Mongo I can update a.b and a.c concurrently from different nodes and both writes will set the right values.
You can't do that with PG JSONB unless you lock the row for reading...
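Roughly, with the Node.js MongoDB driver (connection string, collection, and document shape are placeholders); each $set targets its own subfield, so the two writers don't overwrite each other's keys:

    import { MongoClient } from "mongodb";

    type Doc = { _id: number; a: { b: number; c: number } };

    const client = new MongoClient("mongodb://localhost:27017");
    await client.connect();
    const docs = client.db("app").collection<Doc>("docs");

    // These can run concurrently from different nodes; MongoDB applies each
    // $set at the field level, so both a.b and a.c end up with the new values.
    await docs.updateOne({ _id: 1 }, { $set: { "a.b": 1 } }); // writer on node 1
    await docs.updateOne({ _id: 1 }, { $set: { "a.c": 2 } }); // writer on node 2

    await client.close();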
What?? That's an insane argument. That's like saying that if one client sets column X to 1 and another client concurrently sets column Y to 2, one client's writes will be LOST. It shouldn't happen, and it doesn't. If it did, nobody would use Postgres. This issue only exists with PG's JSON impl.
What?? That’s an insane way to describe what I’m talking about. Data/transaction isolation is very complex and extremely specific to every use case, which is why database engines worth anything let you describe your needs to them. Hence, when one client writes to Y, it specifies what it thinks X should be (if relevant) and is told to try again if that assumption is wrong. An advantage of specifying your data and transaction model up front is that it surfaces these subtle issues to you before they destructively lose important information in an unrecoverable way.
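A minimal sketch of that pattern with node-postgres (the settings table and x/y columns are invented); the WHERE clause carries the client's assumption about x:

    import { Client } from "pg";

    const db = new Client();
    await db.connect();

    const expectedX = 1; // the value of x this client last read

    // Only write y if x still matches what we think it is.
    const res = await db.query(
      "UPDATE settings SET y = $1 WHERE id = $2 AND x = $3",
      [2, 42, expectedX]
    );

    if (res.rowCount === 0) {
      // Our assumption was stale: re-read the row and retry, instead of
      // silently clobbering a concurrent change.
      console.log("concurrent change detected, retrying");
    }

    await db.end();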
At least for whitespace changes, git should have you covered:
--ignore-space-at-eol
    Ignore changes in whitespace at EOL.

-b, --ignore-space-change
    Ignore changes in amount of whitespace. This ignores whitespace at line end, and considers all other sequences of one or more whitespace characters to be equivalent.

-w, --ignore-all-space
    Ignore whitespace when comparing lines. This ignores differences even if one line has whitespace where the other line has none.
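In practice these are just flags to the usual commands, e.g. "git diff -w", "git log -p --ignore-space-change", or "git blame -w" to skip whitespace-only changes.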
I'm repeating my comment from a sibling thread, but I think it's worth repeating (paraphrased):
(A) If the thing you have in mind can be inferred by looking at two commits, then you don't have to record the intention of the changes into the version control system because you can compute it later when needed.
(B) If the thing you have in mind can't be reliably inferred by looking at two commits, then you need some other way to tell the version control system about your intention.
For example, if you're just re-indenting a Python source file, are you going to:
1. expect the system to automatically/heuristically realize you're just re-indenting it --> see (A)
2. explicitly tell the system that you've re-indented it when making the commit --> are you sure you can be bothered to do that?
3. consistently and exclusively use a specialized IDE that records all your actions and translates them into the corresponding intentions recognized by the version control system?
Hey thanks for the comparison - Temporal is a powerful tool. There are definitely similarities with what Inngest is capable of. Here are a few differences to highlight:
- We’ve taken a different approach that removes the queue and worker abstraction from the end developer and instead uses an HTTP-based invocation. This enables Inngest functions to be run on any platform, including serverless.
- Inngest is powered by events, which gives developers the ability to fan out work and build event-driven flows easily.
- Inngest also allows you to coordinate across events, enabling workflows to pause and wait for additional input (step.waitForEvent [1]); a rough sketch follows this list. This is powerful for human-in-the-loop workflows (e.g. AI confirmation flows) or workflows that pause for days or weeks and conditionally run steps depending on what a user does in your system (e.g. dynamic email drip campaigns).
- Events also provide a nice way to easily replay your workflows across any time window.
- One last thing: we've heard many times that while devs are often quite excited about Temporal, they've found Inngest much easier to learn and build with. We've intentionally built our first SDK around the simplest primitives so that workflows just look like normal code, with minimal DSL.
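A rough sketch of the step.waitForEvent flow mentioned above, using the TypeScript SDK; the event names, function ID, helper functions, and the exact option shape are assumptions and may differ between SDK versions:

    import { Inngest } from "inngest";

    const inngest = new Inngest({ id: "my-app" });

    // Hypothetical helpers standing in for real application code.
    async function sendWelcomeEmail(to: string) { /* ... */ }
    async function sendReminderEmail(to: string) { /* ... */ }

    export const onboarding = inngest.createFunction(
      { id: "user-onboarding" },
      { event: "app/user.signup" },
      async ({ event, step }) => {
        await step.run("send-welcome-email", () => sendWelcomeEmail(event.data.email));

        // Pause this run for up to 7 days, waiting for a matching event.
        const confirmed = await step.waitForEvent("wait-for-confirmation", {
          event: "app/user.confirmed",
          timeout: "7d",
        });

        if (!confirmed) {
          // Timed out without a confirmation event, so follow up.
          await step.run("send-reminder", () => sendReminderEmail(event.data.email));
        }
      }
    );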
Inngest has built-in concurrency controls and rate limiting to prevent systems from being overloaded, giving the user the same controls as a traditional worker setup, but as a simple config option (sketched below).
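And the flow-control side might look roughly like this; the concurrency and rateLimit option shapes below are assumptions from memory rather than verified against current docs:

    import { Inngest } from "inngest";

    const inngest = new Inngest({ id: "my-app" });

    // Hypothetical worker logic.
    async function processRecord(data: unknown) { /* ... */ }

    export const importJob = inngest.createFunction(
      {
        id: "import-records",
        concurrency: { limit: 10 },              // assumed shape: cap on parallel runs
        rateLimit: { limit: 100, period: "1m" }, // assumed shape: cap on runs per period
      },
      { event: "app/record.imported" },
      async ({ event, step }) => {
        await step.run("process-record", () => processRecord(event.data));
      }
    );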
Inngest was designed to be a solution that can replace traditional queuing systems and event-driven systems. Originally, it was built to handle complex flows that need to be durable, e.g. healthcare workflows with time-based follow-ups and legal compliance tasks that must be executed in a specific order depending on patient confirmation or other actions.
We've seen users build all sorts of things: automating infrastructure based on events, building AI agents around LLMs, running vulnerability scans across thousands of packages, building scheduling products, and building e-commerce data pipelines. We're seeing new use cases each week.