I think this series of posts confirms the first law of Benchmarketing - for any system, one can come up with an "unbiased" benchmark which confirms its superiority.
Calling this "benchmarketing" sounds like you're saying the entire thing is disreputable which doesn't seem right. This blog post didn't remotely come off as shilling to me. The author does not (seem to) work for either company. They gave it a shot and shared a result. Whether or not it's a good benchmark or representative for your (anyone's) use case is debatable.
I was rather referring to the original TimescaleDB article, which claims that, unlike some others, these are real benchmarks. I encourage all benchmarks (including ours at Percona) to be taken with a pound of salt, because they tend to have implicit, if not intentional, biases and rarely have real applicability to the real world.
I selected only 11M rows for this blog because I used the dataset linked in the TimescaleDB docs[0]. The dataset linked in the CH docs has 1.2B rows[1]. The goal was to make the comparison on a dataset that both databases agree upon.
https://blog.timescale.com/blog/what-is-clickhouse-how-does-...