This looks like a (not even) thinly veiled advertisement for their Convex product: https://www.convex.dev/.
I am interested in knowing who would decide to pay for something like this when there is a gamut of open source options available. What is the compelling reason to use something like this?
If you don't want to manage your own infrastructure, you can use our hosted product, but otherwise it's totally fine to self-host the open source binary.
"Really cool" is not compelling enough for me to decide to build a product against something like this. Here is, at a minimum, I would require to even consider:
- Comparisons against other types of stacks, like Laravel Livewire or Phoenix LiveView
- Performance metrics/stats/benchmarks, whatever. You need to be faster and more robust than the other options out there, or provide some other benefit
- A first-class, self-hosted, on-premises install without a dependency on any cloud provider: Kubernetes Helm charts, Docker Compose stacks, or whatever
- I do actually like that you have a time-windowed source-available license. That alleviates the concern of what happens if you go under
- A Jepsen or similar analysis; I need to be sure whatever consistency guarantees you are advertising hold up
I'll spare you more of a marketing pitch, since you seem to have had enough of my pitching Convex in the article :) The bullet points at the bottom of the article should be a pretty concise list; I'd call out the reactivity / subscriptions / caching. To learn how all that magic works, check out https://stack.convex.dev/how-convex-works
Author here, and yes, as a disclaimer: I work at Convex.
As a caveat to that disclaimer, I pivoted my career to work here because I genuinely believe it moves the industry forward, thanks to its default correctness guarantees among other things.
To answer the question, some of the things that accelerate full-stack development:
1. Automatically updates your UI when DB data changes: not just via a per-document, one-off subscription, but based on a whole server function's execution, which is deterministic and cached, and regardless of whether the changes were made in the current browser tab or by a different user elsewhere. Not having to remember all the places to force a refresh when a user updates their profile name, for example, makes development way faster. And not only do the Convex client React hooks fire on data changes, the data they return for a page render all comes from the same logical timestamp. You don't have to worry about one part of the UI saying a payment is pending while another says it's been paid.
2. End-to-end types without codegen. When you define a server function, you define its argument validators, which immediately show up in the types for calling it from the frontend. You can iterate on your backend function and frontend types side by side without redefining types or running codegen in the loop. (There's a sketch of what this looks like right after this list.)
3. Automatic retries for database conflicts, as well as retries for mutations fired from the client. Because mutations are deterministic and side-effect-free (other than transactional changes to the DB and scheduler), the client can keep retrying them and guarantee exactly-once execution. And if your mutation hits DB conflicts, it's automatically retried (up to a limit, with backoff), so the client can submit operations without worrying about how to recover from a conflict.
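To make 1 and 2 concrete, here's a minimal sketch; the `messages` table, its fields, and the `Chat` component are made up for the example:

```ts
// convex/messages.ts: server functions with argument validators
import { query, mutation } from "./_generated/server";
import { v } from "convex/values";

export const list = query({
  args: { channel: v.string() },
  handler: async (ctx, { channel }) => {
    return await ctx.db
      .query("messages")
      .filter((q) => q.eq(q.field("channel"), channel))
      .collect();
  },
});

export const send = mutation({
  args: { channel: v.string(), body: v.string() },
  handler: async (ctx, { channel, body }) => {
    await ctx.db.insert("messages", { channel, body });
  },
});
```

```tsx
// Chat.tsx: useQuery re-runs whenever data read by the server function
// changes, whether from this tab or another user; args and results are
// typed from the validators above, with no codegen step in the loop.
import { useQuery, useMutation } from "convex/react";
import { api } from "../convex/_generated/api";

export function Chat({ channel }: { channel: string }) {
  const messages = useQuery(api.messages.list, { channel }); // undefined while loading
  const send = useMutation(api.messages.send);
  return (
    <div>
      {(messages ?? []).map((m) => (
        <p key={m._id}>{m.body}</p>
      ))}
      <button onClick={() => send({ channel, body: "hi" })}>Send</button>
    </div>
  );
}
```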
There's obviously a bunch more in the article about features like text search and other things you get out of the box, but those are maybe more backend conveniences, since a frontend person probably wouldn't have been setting up Algolia, etc., anyway.
I don't consider this strictly open source if components it depends on (i.e., the LLM) are closed source. I've seen a lot of these "fauxpen source" style projects around
Not quite certain about your meaning. Could you be more specific? RAGFlow does not have its own LLM model or the source code for one. RAGFlow supports API calls to third-party large language model providers, as well as local deployment of those large models, and it has already open-sourced the code for both.
I use Darktable quite extensively for underwater photos, but I'm keen to try Ansel out and see if it has less friction.
Some examples of friction in darktable:
- I have an external strobe, which means I have to turn the exposure down to its lowest when shooting; otherwise everything is washed out. Newer versions of darktable have "compensate camera exposure" on by default, which washes out all my images until I click it off. I'm sure there's a way to disable this checkbox by default, but why can't it just accept what comes out of the camera?
- No favourites: there used to be a way to have all your panels in a favourites tab. This was great, as I usually only use a handful of modules. It's gone in later versions
- The "color balance" panel, not to be confused with "color balance rgb", it's not in any of the default tabs but useful for saturation adjustments. Why are some of these useful modules hidden? Shouldn't all modules be available by default. The only way you can get to it is by searching.
- White balance: there are now two modules, "white balance", the standard one on the "base" tab, and "color calibration", tucked away on the "color" tab. Both are turned on by default, but if you adjust one without turning the other off, you get a big red warning.
- One upgrade decided to reset my export settings, so my EXIF data was stripped out when exporting. It took me way too long to figure that out.
You can create a preset for the exposure module and define rules for when it kicks in, based for example on the camera manufacturer, focal length, ISO, etc. I use that to increase the default exposure compensation with my Fuji: it deliberately underexposes to prevent highlights from clipping, so I usually want a +1.25 exposure compensation. Likewise, you might want denoising on for high-ISO files.
You can organize modules into profiles and simply hide all the ones you don't use. The default profile hides some of the deprecated or display-referred modules; you can change this.
White balance indeed has a deprecated display-referred variant and a scene-referred one that works completely differently, which you typically use together with color calibration (which is where you should do most of your color correction, including color temperature changes). The reasons are mathematical and beyond me to explain properly (Aurélien does a great job on his YouTube channel). It boils down to not letting rounding errors accumulate and not switching color model (to the one used by your display) too early in the pipeline: that might look pleasing, but it bites you when you want to tweak tone or do other things later. This is the whole point of working with the scene-referred modules.
Having all the legacy modules around is indeed somewhat confusing, and Aurélien solves this in Ansel by hiding all the deprecated modules. They're still there for legacy files.
I've moved my little hobby website from React to SvelteKit[1], and I am not regretting it... yet.
My main frustrations are:
- Library support is pretty lousy; you need to fudge things around to get them working. E.g., with Leaflet and others, I have vendored in the libs and reworked them.
- Incremental static regeneration with SvelteKit is not really there. I'd like a webhook or API callback that lets me refresh certain static pages when I know changes have been made. Right now I'm doing a janky refresh using a file-lock notifier, and it's a blemish on an otherwise great framework.
- The URL routing in SvelteKit is... a little ugly. It's really hard when you have an editor open with five `+page.svelte` files (see the sketch after this list). I wish they'd reintroduce file-name routes rather than folder-name routes. I know it's entirely a personal preference, but I have seen a lot of negativity around it.
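For anyone who hasn't used it, the folder-based routing means every page file has the same name; a hypothetical site looks like:

```
src/routes/
├── +page.svelte          → /
├── blog/
│   ├── +page.svelte      → /blog
│   └── [slug]/
│       └── +page.svelte  → /blog/:slug
└── docs/
    └── +page.svelte      → /docs
```

so an editor quickly fills up with tabs that are all named `+page.svelte`.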
Haven't had that refresh issue because I don't use that.
But I'm 100% with you on the routing. It's weird. One thing I had a lot of trouble with was "sub-routing": being in a route, opening a modal, and having the URL change so it can be linked to.
I had to implement an ugly workaround in the layout to catch hashtag (#) navigation, along the lines of the sketch below.
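Roughly this, though the real code is messier and the modal store is made up for the example:

```ts
// In +layout.svelte's <script lang="ts">: keep a modal store in sync
// with the URL hash so modals are linkable.
import { onMount } from 'svelte';
import { writable } from 'svelte/store';

const activeModal = writable<string | null>(null);

onMount(() => {
  const applyHash = () =>
    activeModal.set(window.location.hash.slice(1) || null);
  applyHash(); // handle deep links on the initial load
  window.addEventListener('hashchange', applyHash);
  return () => window.removeEventListener('hashchange', applyHash);
});
```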
Pretty much every library I've tried to use with the Next App Router (and therefore RSC) doesn't work with RSC. I'm sure that will change, but we'll be sticking with Next Pages for a while yet.
What's the point of using RSC if you have to mark everything as client components?
At best you're getting SSR support, since client components actually run on the server as well, but there are already cleaner solutions for server-rendering React components that rehydrate in the browser.
You're not using RSC if you're marking everything as a client component. The point is that you can do that and continue to use your old components while also being able to use the new app router infrastructure.
It means that library maintainers have to make changes to their libraries to get them to work on the server side, and a lot of them aren't really doing that very quickly. Consequently, moving to RSC reduces the number of libraries that work with your React code.
It's a short-term problem, because most popular libraries will get updated eventually, but some won't, and those will only ever work on the client side.
In the case of Next, maintainers need to package their libraries differently to support ESM modules, or you need to configure your project with the experimental `esmExternals = false` flag. Again, it's not a particularly big problem, but it does reduce the size of the available ecosystem a bit.
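If I remember right, the opt-out looks something like this; double-check the flag name against your Next version, since it's experimental:

```js
// next.config.js: loosen strict ESM handling for external packages
module.exports = {
  experimental: {
    esmExternals: false,
  },
};
```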
Before, there was only client-side; server-side is opt-in. So all the libraries you used client-side you can still use client-side, by marking them as client components.
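For anyone who hasn't tried the App Router, opting back in is one directive at the top of the file, and a server component can then render it like any other component:

```tsx
// app/counter.tsx: "use client" opts this file (and everything it
// imports) back into the client runtime, hooks and all.
"use client";

import { useState } from "react";

export default function Counter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>Count: {count}</button>;
}
```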
Can you clarify how you normally switch tabs and why that's difficult? If I open up a bunch of `+page.svelte` files, I see them as "+page.svelte ../docs" or "+page.svelte ../blog", so it's fairly obvious to me which one is which. Longer term, I think we can tweak VS Code to get rid of the duplicate "+page.svelte" part, since it's redundant, but I don't find it unworkable at the moment. I'm wondering if we simply have different tolerances for this, or if there's something else going on in some cases.
Some people have mentioned liking to set the label format to short: under File > Preferences > Settings, search for Label Format.
Thank you! I've tried to address your questions below. Most of these decisions stem from having the backend written in Rust and using GraphQL. That decoupling in the end made it a lot easier to port from React.
- I am using a Rust backend for the static files and didn't want Node.js as part of the request workflow. Most pages don't change all that much, maybe once every few months, so having yet another service in the connection flow just adds resources/delay when it's not needed. It's a lot faster/easier/more cacheable to serve a static file.
- The prerender doesn't take all that long, maybe a minute or so; it's fast enough for the site as it stands, but if it got super massive it'd be a different story. I currently throttle how often it happens, so there's a bit of time between pre-renders.
- The frontend communicates with the backend via GraphQL, and the backend is not part of SvelteKit; it's an entirely separate service, so things like `+page.server.ts` don't apply.
I haven't had much trouble in VS Code, since it shows the directory name just after the file name. Some people have mentioned liking to set the editor tab label format to short: go to File > Preferences > Settings, then search for Label Format.
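That maps to one entry in settings.json, if you'd rather set it there directly:

```jsonc
// .vscode/settings.json
{
  "workbench.editor.labelFormat": "short"
}
```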
~7 per day is on the low side for me. I normally rerun the same search with different terms multiple times, especially if I'm looking for the answer to something highly technical: easily 5-6 different searches for the same thing. And that happens multiple times per day.
There's something perverse about the incentives here: they make more money if you have to perform more searches.
It doesn't quite sit right, in the same way GitHub Actions charging per minute doesn't: the slower their runners are, the more money they make.
And in both of these scenarios, there's no user agency to do anything about it past a certain point.
> they make more money if you have to perform more searches.
I believe the reason for the pricing is Kagi's cost structure. Accessing, for example, the Google index is not free. Since they don't show ads or monetize the user in other ways, they need to pass the costs on to customers.
Pay-for-what-you-use models are not common for consumer services, but I think it is a healthy pricing model: better than fixed-price packages, where you assume 80% of low-volume users will cover the costs of the 20% of heavy users.
It’s tricky because there are subtle psychological effects when marginal use directly incurs marginal cost. It makes each use a decision.
When you assemble tools for cognitive work, it’s important that they have low overhead. Thinking about the financial cost of using a tool is a small context switch that slows you down. Thus a bundle of prepaid stuff increases the utility of the service beyond what you’d get with pure pay per use, even though the latter is more economically efficient.
> they make more money if you have to perform more searches.
Their model pushes people who make a large number of searches towards the unlimited plan, where incentives are always aligned. I personally find that fair, because I expect to get at least as much value from my search engine as I do from my IDE; $20/month seems reasonable.
Their model also lets them have lower expenses on the lower-tier plans if users of those plans make fewer searches than their quota. That's the incentive you're looking for. As long as users are not hitting their quotas, or are on unlimited plans, user incentives are aligned with the business's. And shortly after exceeding their quotas, users will probably upgrade their plan.
The perverse incentive only exists within a few "holes" between the plans, and serves to encourage the user to upgrade. By limiting the window of the perverse incentive, it should discourage goal-seeking to fit customers into that window. The more optimal outcome month to month is likely improving the customer experience and getting more signups, rather than juicing current customers for the limited additional gains inside those constrained windows.
The example you highlighted as "doing it wrong" is pretty typical for an autosuggest component: input updates trigger a request, which propagates to a list somewhere else in the DOM. While the request is loading, a spinner is shown; when results are retrieved, the list is updated again. Throw Apollo or some other request library into the mix, and context gets used too.
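The common shape, sketched below with a made-up `/api/suggest` endpoint and no Apollo:

```tsx
// Typical autosuggest wiring: input state in one component, results
// rendered elsewhere in the tree via context, spinner while in flight.
import { createContext, useContext, useEffect, useState } from "react";

type Suggest = { loading: boolean; results: string[] };
const SuggestContext = createContext<Suggest>({ loading: false, results: [] });

function SearchBox({ onQuery }: { onQuery: (q: string) => void }) {
  return <input onChange={(e) => onQuery(e.target.value)} />;
}

function ResultList() {
  const { loading, results } = useContext(SuggestContext);
  if (loading) return <span>loading…</span>;
  return (
    <ul>
      {results.map((r) => (
        <li key={r}>{r}</li>
      ))}
    </ul>
  );
}

export function Autosuggest() {
  const [query, setQuery] = useState("");
  const [state, setState] = useState<Suggest>({ loading: false, results: [] });

  useEffect(() => {
    if (!query) return;
    let cancelled = false;
    setState((s) => ({ ...s, loading: true })); // show the spinner
    fetch(`/api/suggest?q=${encodeURIComponent(query)}`) // hypothetical endpoint
      .then((r) => r.json())
      .then((results: string[]) => {
        if (!cancelled) setState({ loading: false, results }); // update the list
      });
    return () => {
      cancelled = true; // ignore stale responses
    };
  }, [query]);

  return (
    <SuggestContext.Provider value={state}>
      <SearchBox onQuery={setQuery} />
      <ResultList />
    </SuggestContext.Provider>
  );
}
```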
> You don't rely on the natural light for these photos
That's only because natural light isn't available. Modern diving lights use cool white LEDs with inaccurate color rendering, heavy in the blue spectrum; to get the reds, a diver would actually need a second light source with red LEDs. So much depends on the narrow color spectrum of these shitty LEDs. Incandescent light sources, no longer made and sold except as vintage collectibles, burn tungsten, so their spectrum is wide and very much like the sun's, though warmer than the sun at noon, and they always produce perfect color rendition. High-CRI LEDs from Nichia and Cree are getting closer to natural light, with 93+ CRI and warmer color temperatures with pinker tints, but dive light manufacturers haven't discovered these yet. So the best a diving photographer can do is hunt down a bright halogen light source around 3200K. Instead, they shoot washed-out RAW digital images and use digital color post-processing, resulting in something that can't ever be seen with human eyes in the real world because it doesn't exist. So, lies.