>Is there really such a thing as a bad query that can be rewritten to give the same results but faster? For me, that's already the query optimizer's job.
I can't tell if your disclaimer covers it but, yes, there are lots of bad queries that take a little bit of a rewrite and run significantly faster. Generally it is someone taking a procedural vs set-based approach, or including things they don't need in an attempt to help (e.g., adding an index to a temp table when it is only used once and going to be full scanned anyway). That's outside the general data-typing/missing-index issues.
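As a minimal sketch of the procedural-vs-set-based point (in Python with sqlite3, so it stays self-contained; the `orders` table and discount logic are made up for illustration), the row-by-row loop and the single set-based statement produce the same result, but the loop is the kind of query the optimizer can't save you from:

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, discounted REAL)")
conn.executemany("INSERT INTO orders (amount) VALUES (?)", [(i * 10.0,) for i in range(1, 6)])

# Procedural approach: pull every row to the client, compute there,
# and write back one UPDATE per row (N round trips).
rows = conn.execute("SELECT id, amount FROM orders").fetchall()
for row_id, amount in rows:
    conn.execute("UPDATE orders SET discounted = ? WHERE id = ?", (amount * 0.5, row_id))
procedural = [r[0] for r in conn.execute("SELECT discounted FROM orders ORDER BY id")]

# Set-based approach: one statement, the engine applies it over the whole set.
conn.execute("UPDATE orders SET discounted = amount * 0.5")
set_based = [r[0] for r in conn.execute("SELECT discounted FROM orders ORDER BY id")]

print(procedural == set_based)  # same results either way
```

The optimizer can reorder joins inside one statement, but it can't merge a client-side loop of single-row statements into one set operation; that rewrite has to come from the author.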
I was wondering the same thing. The Galaxy Watches (before and after Tizen) have always been solid for me, granted we don't really have Apple products and I get the ecosystem advantage. Samsung does seem to iterate really, really fast... or just not sell out of the prior model.
That's what it originally meant, at least in my experience. It was when warehouses got access to commodity storage through virtualization options (Hey! I can read S3 from Redshift and it looks like a Redshift table). Similar to Postgres foreign data wrappers or PolyBase in SQL Server.
Databricks (with Delta as the underpinning) seems to have led the charge on the lakehouse meaning: your data lake + file formats/helpers + compute == data lake + data warehouse == lakehouse.
The latter seems to be the prevailing definition today with the former aging in place.
It also isn't accurate. The Elves are generally good but don't help at all costs; they are fairly self-serving (or seem to be). Gollum is a swing between pity and anger. Gandalf and Aragorn tend to hide information (for the betterment of the mission, it seems). Plenty of other "characters" are self-serving: the Eagles, Beorn, Mr. Bombadil. Even Sauron was once good, and there are hints that people believe he isn't pure evil or had some good in him.
Even with the graphic in the article, which has tablets arriving around 2010, there were definitely tablets well before that. They took a while to grow and take off; seems similar here.
This is a great way of looking at it. The cost starts going up rapidly from daily refreshes and approaches infinity as you get to ultra-low-latency realtime analytics.
There is a minimum cost, though (systems, engineers, etc.), so for medium data there's often very little marginal cost up until you start getting to hourly refreshes. This is not true for larger datasets, though.
Is that a differentiator? I'm unfamiliar with Snowpark's actual implementation, but I know SQL Server introduced Python/R in-engine around 2016, something like that.
Late reply. I was wondering that, but also, at least in the US, they tend to have older folks and retirees, so it may skew the stats. I really don't have any data on it, just speculating.