It depends on the workload. This tool generates a recommended config for that specific machine's workload. App Nodes can have completely different recommendations from Database Nodes, and the recommendations for a Workstation will be different again.
Sure, but the kernel could just do the same. Of course, the kernel is already too big. Is BPF the right level at which to make it more modular? Just thinking out loud; I don't think I have the answer.
Our initial approach is to do full table re-syncs periodically. Our next step is to enable incremental data syncing by supporting insert/update/delete according to the Iceberg spec. In short, it'd produce "diff" Parquet files and "stitch" them using metadata (enabling time travel queries, schema evolution, etc.).
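For a rough idea of what that looks like in practice, here's a sketch using Spark SQL against an Iceberg v2 table (the table and column names are invented, not from our actual pipeline): a MERGE writes small data and delete files instead of rewriting the table, and the snapshot metadata that stitches them together is the same machinery behind time travel.

```sql
-- Hypothetical upsert into an Iceberg v2 table. Instead of rewriting
-- the table, this writes small "diff" files: new data files plus
-- delete files marking superseded rows, all tracked in a new snapshot.
MERGE INTO warehouse.db.events AS t
USING staging.events_changes AS s
ON t.id = s.id
WHEN MATCHED AND s.op = 'delete' THEN DELETE
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- The snapshot metadata is also what enables time travel queries:
SELECT * FROM warehouse.db.events VERSION AS OF 1234567890;
```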
If there were only one alert criterion, that'd be simple. Our alerts can be configured with any data filters (e.g., only matching logs with column `level='error'`); we would have to create a unique MV for each alert's filter condition.
You could have an alert ID be part of the MV primary key?
An MV is really more like a trigger, which translates an insert into table A into an insert into table B, evaluating, filtering, and grouping each batch of inserts into A to determine which rows to insert into B. Those inserts can be grouped by an alert ID in order to segregate the state columns by alert. To me this sounds like exactly what you're doing with manual inserts?
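Concretely, in ClickHouse that might look like the sketch below (`logs`, its columns, and the alert ID are all invented here): the state table is keyed by alert ID, and each alert's MV tags its rows with that ID, so the aggregate state for different alerts stays segregated even though it all lives in one table.

```sql
-- Shared state table, keyed by alert ID so each alert's
-- aggregate state occupies its own key range.
CREATE TABLE alert_state
(
    alert_id String,
    bucket   DateTime,
    hits     AggregateFunction(count)
)
ENGINE = AggregatingMergeTree
ORDER BY (alert_id, bucket);

-- One MV per alert filter, all writing into the same table.
CREATE MATERIALIZED VIEW alert_42_mv TO alert_state AS
SELECT
    'alert-42'                 AS alert_id,  -- this alert's ID (invented)
    toStartOfMinute(timestamp) AS bucket,
    countState()               AS hits
FROM logs
WHERE level = 'error'                        -- this alert's filter
GROUP BY bucket;

-- Reading one alert's state back:
SELECT bucket, countMerge(hits) AS hits
FROM alert_state
WHERE alert_id = 'alert-42'
GROUP BY bucket;
```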
That said, while MVs are super powerful and convenient, they're a convenience more than a core function. If you have an ingest flow expressed as Go code (as opposed to, say, Vector or Kafka Connect), then you're basically just "lifting" the convenience of MVs into Go code. You don't get the benefit of an MV's ability to efficiently evaluate against a batch of inserts (which gives you access to joins and dictionaries and so on), but it's functionally very similar.
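As a small illustration of that batch-evaluation point (again a sketch with invented table and dictionary names): because the MV's SELECT runs against each insert batch, it can do things like dictionary lookups server-side, which a hand-rolled Go ingest loop would have to reimplement itself.

```sql
-- Runs once per insert batch into `logs`; the dictGet enriches the
-- whole batch in one pass. Assumes a `service_owners` dictionary and
-- a `logs_enriched` target table already exist.
CREATE MATERIALIZED VIEW logs_enriched_mv TO logs_enriched AS
SELECT
    timestamp,
    message,
    dictGet('service_owners', 'team', service_id) AS team
FROM logs;
```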
I try to use most of my (Linux) tools as standard as possible, without customisation, including shortcut keys. The problem is, once you are on remote servers/devops boxes, you can't have fancy tools or fancy shortcuts. It's better to train your mind on the standard tools as much as possible.
Just because we don't have access to great tools when working on a remote server doesn't mean we shouldn't use them locally.
I use Vim with lots of plugins on my personal projects, and IntelliJ at work. But if I need to ssh and vi, that's OK; I know how to do it efficiently.
With Fleet or VSCode you can easily use your dev environment, with your tools, plugins, and shortcuts, to work on a remote codebase via SSH.
I agree, and fzf is a good example: on my local box it speeds up my reverse search, whereas on a remote server I use the same Ctrl+R I've used for decades, and the end result is similar, so there's no additional cognitive load.
That only really applies at a small scale. At some point you either stop logging into them or do it just to run some automation. I can't remember the last time I did something non-trivial in a remote terminal. (Apart from my home server, which has everything I want.)
This completely depends on the system architecture of your company and your job role, scale has nothing to do with it. There are so many giant Unix shops out there with people herding them day in, day out.
Agreed. With how easy it is to copy over a standalone binary for things like rg and fd, I find it hard to justify taking the time to learn the much more clunky standard tools.
I don't need to access servers often though. I'm sure for others the situation is different.
Can anyone comment on why these are superior to ClickHouse? I really like ClickBench, which compares the performance of the various products (and is open source).
I have no affiliation with ClickHouse, but in my experience, everything I have tried (a regular relational DB (Postgres), InfluxDB, TimescaleDB) is significantly inferior to it. However, I wouldn't bother with it unless you have enough scale to justify it (IMO that is > low billions of rows).
And the macOS Finder has one of the most pathetic designs: it looks like typical Apple simplicity, but it's meant to be used in a different way. Taking inspiration from that design is shooting yourself in the foot, and half-baking it on top just makes sure you get it wrong both ways.
Zoom's official changelog doesn't have any information about this, and even their support docs still haven't been updated, but it has been available for Linux since v6.0.x.