> the projection doesn't remove redundancy under most cases
Maybe I'm missing something; I don't understand why projections are part of this discussion. Maybe I should have been more precise. I was thinking about the type of Table UDF that Aster Data made popular around the time DeWitt and Stonebraker wrote their article (Jan 2008). These Table UDFs were written in languages like Java or C/C++ and generally accessed data external to the database engine. Aster Data marketing defined the functionality in terms of MapReduce.
The point I was trying to make was that the Map() part of MapReduce is equivalent to a distributed ETL pipeline. This remains one of the key use cases for Spark. The Reduce() part is no longer relevant in the new world of cheap and scalable column stores. DeWitt and Stonebraker's Teradata-like enterprise data warehouses suffered the same fate.
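To make the claim concrete, here is a minimal sketch in plain Python (not Spark, and not Aster Data's actual Table UDF API; the record format is invented for illustration): the Map() step is just a per-record transform, i.e. an ETL pipeline, while the Reduce() step is the kind of aggregation a modern column store expresses as a SQL GROUP BY.

```python
from collections import Counter

# Hypothetical raw input, e.g. log lines pulled from outside the database engine.
raw_logs = [
    "2008-01-15,click,US",
    "2008-01-15,view,DE",
    "2008-01-16,click,US",
]

def map_etl(line):
    # Map(): parse and reshape each record independently.
    # This is embarrassingly parallel -- exactly a distributed ETL pipeline.
    date, event, country = line.split(",")
    return {"date": date, "event": event, "country": country}

rows = [map_etl(line) for line in raw_logs]

# Reduce(): in a column store this collapses to plain SQL, roughly
#   SELECT event, COUNT(*) FROM rows GROUP BY event;
# which is why the hand-written Reduce() step lost its relevance.
counts = Counter(r["event"] for r in rows)
print(counts["click"])  # -> 2
```

The asymmetry is the point: the Map() half still needs general-purpose code (parsing, external I/O), while the Reduce() half is commodity database functionality.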