Michael Stonebraker has an interesting set of conclusions in his assessment of the MapReduce vendor market in 2015 from the "Dataflow" chapter here:
"- Just because Google thinks something is a good idea does
not mean you should adopt it.
- Disbelieve all marketing spin, and figure out what benefit
any given product actually has. This should be especially
applied to performance claims.
- The community of programmers has a love affair with “the
next shiny object”. This is likely to create “churn” in your
organization, as the “half-life” of shiny objects may be quite
short."
I reread DeWitt and Stonebraker’s (D&S) MapReduce criticism [1] and I still find it misguided 12 years later.
Map() is not equivalent to a SQL GROUP BY clause; it is equivalent to a user-defined Table Function used in a FROM clause. This mimics the Extract and Transform stages of a SQL ETL pipeline, with the Extract implied by the input format.
The Reduce() is very much equivalent to a user-defined Aggregate Function. D&S accurately criticize the sub-optimal materialization of intermediate data sets, but they underappreciate the implicit input splitting and distributed sorting mechanism, which dominated the TeraSort benchmark (a Jim Gray creation) at the time.
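To make the analogy concrete, here is a minimal single-process sketch of the word-count shape (Python, with made-up names and data): the map step behaves like a table UDF that turns one input row into many output rows, the framework's implicit shuffle groups by key, and the reduce step behaves like a user-defined aggregate.

```python
from collections import defaultdict

def map_fn(line):
    # Like a table UDF in a FROM clause: one input row in, many rows out.
    for word in line.split():
        yield word, 1

def reduce_fn(key, values):
    # Like a user-defined aggregate: one output row per group.
    return key, sum(values)

def mapreduce(lines):
    # The framework's implicit shuffle/sort: group intermediate rows by key.
    groups = defaultdict(list)
    for line in lines:
        for key, value in map_fn(line):
            groups[key].append(value)
    return [reduce_fn(k, vs) for k, vs in sorted(groups.items())]

print(mapreduce(["the quick fox", "the lazy dog"]))
# [('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]
```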
On-premises commodity Hadoop clusters lost out to public Infrastructure-as-a-Service clusters. None of the five takedown categories turned out to matter. The tools have evolved, and cloud-native data warehouses and ETL systems are now the best of both worlds.
"Map() is not equivalent to a SQL GROUP BY clause, it is equivalent to a user-defined Table Function that is used in a FROM clause."
No, the projection doesn't remove redundancy in most cases. There also isn't any reason you couldn't have UDFs in the GROUP BY clause.
I've written implementations of both, and I think GROUP BY is an excellent comparison for understanding Map in MapReduce systems.
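For what it's worth, here is the same word count read through the GROUP BY lens (a Python sketch; the input data is made up): the map step is just the extraction of a grouping key, and the rest is GROUP BY plus an aggregate.

```python
from itertools import groupby

# Roughly: SELECT word, COUNT(*) FROM words GROUP BY word;
rows = ["the quick fox", "the lazy dog"]            # made-up input
words = [w for line in rows for w in line.split()]  # key extraction ("map")
counts = {k: sum(1 for _ in g) for k, g in groupby(sorted(words))}
print(counts)  # {'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```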
> the projection doesn't remove redundancy in most cases
Maybe I'm missing something, but I don't understand why projections are part of this discussion. I should have been more precise: I was thinking of the type of Table UDF that Aster Data popularized around the time DeWitt and Stonebraker wrote their article (Jan 2008). These Table UDFs were written in languages like Java or C/C++ and generally accessed data external to the database engine. Aster Data's marketing defined the functionality in terms of MapReduce.
The point I was trying to make was that the Map() part of MapReduce is equivalent to a distributed ETL pipeline. This remains one of the key use cases for Spark. The Reduce() part is no longer relevant in the new world of cheap and scalable column stores. DeWitt and Stonebraker's Teradata-like enterprise data warehouses suffered the same fate.
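As a sketch of what that ETL use case looks like in Spark today (the paths and column names here are hypothetical):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.json("s3://example-bucket/events/")       # Extract
clean = (raw
         .filter(F.col("user_id").isNotNull())             # Transform
         .withColumn("day", F.to_date("timestamp")))
clean.write.partitionBy("day").parquet(                    # Load
    "s3://example-bucket/warehouse/events/")
```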
When you say cloud-native data warehouses, do you mean things like Snowflake/Redshift/BigQuery or something else? As part of an org making the transition from Spark to these, I can definitely agree that these tools are better suited for practical data engineering at the medium-big-data scale (anything not Google/Facebook).
I was thinking AWS Athena (Presto) for the data warehouse and AWS Glue (Spark) for ETL. Redshift has always had the feel of a Column Store appliance that runs side-by-side with your other IaaS resources; there is nothing particularly cloud-native about it other than the way it is provisioned and managed in the AWS console. Amazon QuickSight seems like an excellent alternative to enterprise BI pivot tables like Tableau, Excel, PowerPivot, Business Objects, and Cognos. Amazon seems to be ahead of the competition (again) when it comes to ETL/DW/BI-as-a-Service, at least in terms of price-performance.
I don't know anything about Snowflake. SQL makes BigQuery and Hive easier to program than MapReduce/Pig but I don't think of these technologies as data warehouses.
Column Stores (compressed bitmap indexes, batch-updated with an ETL-like process) make exceptional data warehouses. Row-oriented data warehouses all feel like anachronisms now.
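A toy illustration of why (a Python sketch with made-up data; real column stores add run-length or other compression on top): a bitmap index turns a predicate over a low-cardinality column into cheap bitwise operations.

```python
rows = ["US", "EU", "US", "APAC", "EU", "US"]   # one low-cardinality column

bitmaps = {}
for i, value in enumerate(rows):
    bitmaps.setdefault(value, 0)
    bitmaps[value] |= 1 << i                    # set bit i for this row's value

# "WHERE region IN ('US', 'EU')" becomes a bitwise OR of two bitmaps.
mask = bitmaps["US"] | bitmaps["EU"]
matches = [i for i in range(len(rows)) if mask >> i & 1]
print(matches)  # [0, 1, 2, 4, 5] -- row ids that satisfy the predicate
```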
I think it's a bit of a shame that the MapReduce concept got the shiny-object treatment, since it was a nice, pragmatic approach to a useful set of problems that come up all the time and are often addressed with ad-hoc programs that make a mess.
People always looked down on those that used Hadoop or somesuch for <1GB of data, but while it wasn't needed from a technology perspective it gave a structure to the project.
Now many places are back in the world of one-off scripts, and I think something of value was lost (even if it was a little ridiculous to fire up a cluster for something Excel or SQLite could handle).
> People always looked down on those that used Hadoop or somesuch for <1GB of data, but while it wasn't needed from a technology perspective it gave a structure to the project.
What 'structure'? Why is it so important that it makes it worthwhile firing up a large, complex framework? I'm beyond baffled.
The same 'structure' that makes it easy to onboard new co-workers because they've seen the same project 'structure' before. In that sense, the bottleneck in an organization is getting people productive as fast as possible, even if that means using a cleaver instead of a scalpel.
If all they can use is a massive cleaver (big data tools), and they have no experience with scalpels (small, sharp, cheap, and fast data tools), IMO your company has a serious, fundamental, and systemic problem (no, let's call it a failure) with employee experience, training, and knowledge. Edit: and resource management.
Seems to be a sort of inverse of the massive spreadsheets that run supply chains on accretions of spaghetti-macros.
But a tree chipper can serve as a paper shredder, and I imagine a lot of shops in certain markets saw it as a sort of prestige asset around 5-8 years back, when a bunch of companies started hiring data scientists for no apparent rational reason.
(Not bashing data scientists or data companies. Just remembering the fad that went around Bay Area companies a while ago.)
> The community of programmers has a love affair with “the next shiny object”. This is likely to create “churn” in your organization, as the “half-life” of shiny objects may be quite short.
This is an interesting thought. A company uses shiny tech because programmers like using it, for whatever reason. This attracts employees who want to use that tech too. The half-life of shiny tech is short, and so these developers move on to shinier pastures. I wonder if this explains why people change jobs so often in tech? I’m sure I read that average tenure is much lower (~1.5 years) compared to other industries.
If anything, I would expect the causality to run in the other direction, i.e. resume-driven development to make sure they can get a new job and therefore a raise.
The thing that makes the Red Book special, in my opinion, is that the editors have been able to apply their research to solve actual problems for paying customers! You don't see enough of that in academia.
It is up to date. Things haven't changed substantially, and they probably won't change soon either. There's nothing in the book that you'll have to unlearn or avoid applying.
It's an interesting book in that 2015 was in the middle of the NoSQL hype. Since then, people have started looking for results and being more critical.
There's a gazillion technologies that we could list that are newer, and claims that any of them are the next big thing and will fundamentally change everything are, obviously, exaggerated.
You might look at the concept-oriented model [1], which is a major alternative to set-oriented approaches (including the relational model and MapReduce). In short, instead of viewing data processing as a graph of set operations, this approach treats it as a graph of operations on functions, which makes many data modeling/processing tasks simpler and more natural in comparison to the conventional, purely set-oriented approach.
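A loose illustration of the flavor (my own toy Python example, not the model's actual formalism): a derived attribute becomes function composition rather than a set-level join.

```python
# Two "sets" related by a function, not a join key comparison.
orders = [{"id": 1, "customer": "ann"}, {"id": 2, "customer": "bob"}]
country_of = {"ann": "US", "bob": "EU"}   # a function: customer -> country

# The derived attribute is function composition (order -> customer -> country):
order_country = {o["id"]: country_of[o["customer"]] for o in orders}
print(order_country)  # {1: 'US', 2: 'EU'}
```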
Relatedly: I’ve been trying to wrap my head around MVCC (I’d like to write my own implementation). Any recommendations for a thorough overview of the subject?
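Not a recommendation for a full overview, but the core mechanism is small enough to sketch as a starting point (a toy Python example; real systems add commit status, write locks, and garbage collection): keep a version chain per key, tag each version with the writer's transaction id, and have each reader pick the newest version visible to its snapshot.

```python
class Store:
    def __init__(self):
        self.versions = {}   # key -> [(txid, value), ...] in ascending txid order

    def write(self, txid, key, value):
        # Append a new version instead of overwriting in place.
        self.versions.setdefault(key, []).append((txid, value))

    def read(self, snapshot_txid, key):
        # Newest version written at or before the reader's snapshot.
        visible = [v for t, v in self.versions.get(key, []) if t <= snapshot_txid]
        return visible[-1] if visible else None

s = Store()
s.write(1, "x", "a")
s.write(3, "x", "b")
print(s.read(2, "x"))  # 'a' -- txid 3's write is invisible to snapshot 2
print(s.read(3, "x"))  # 'b'
```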
"- Just because Google thinks something is a good idea does not mean you should adopt it.
- Disbelieve all marketing spin, and figure out what benefit any given product actually has. This should be especially applied to performance claims.
- The community of programmers has a love affair with “the next shiny object”. This is likely to create “churn” in your organization, as the “half-life” of shiny objects may be quite short."