The tradeoffs are complex, evolving, and different for each project/product. In general, Hadoop is more scalable and more flexible, but much slower.
For example, as of today Redshift can hold a maximum of 256 terabytes of compressed data, while Facebook's Hadoop cluster was over 200 petabytes in late 2012. Redshift supports only a limited set of query and data types and a single index, while Hadoop can theoretically handle arbitrary data processing. But if these constraints are acceptable, Redshift will likely be orders of magnitude faster in most cases.
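To make that tradeoff concrete, here is a minimal sketch (not tied to any real cluster or schema; the table name, column names, and log format are hypothetical). On Redshift the aggregation is one declarative SQL statement; on Hadoop you would write mapper/reducer logic along these lines (e.g. for Hadoop Streaming), which runs slower but can embed arbitrary parsing and cleanup rules that plain SQL cannot easily express.

```python
# Redshift-style query (assumed schema):
#   SELECT product_id, SUM(price) FROM sales GROUP BY product_id;
#
# Hadoop-style equivalent: hand-written mapper/reducer logic.
from itertools import groupby
from operator import itemgetter

def mapper(lines):
    """Parse raw tab-separated log lines into (product_id, price) pairs,
    skipping malformed records -- the kind of ad hoc logic Hadoop handles well."""
    for line in lines:
        fields = line.rstrip("\n").split("\t")
        if len(fields) < 3:
            continue  # drop truncated lines
        try:
            yield fields[1], float(fields[2])
        except ValueError:
            continue  # drop lines with non-numeric prices

def reducer(pairs):
    """Sum prices per product. Hadoop's shuffle phase would normally group
    mapper output by key; here we sort and group locally to simulate it."""
    for product_id, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        yield product_id, sum(price for _, price in group)

if __name__ == "__main__":
    sample = [
        "2013-02-01\twidget\t9.99",
        "2013-02-01\tgadget\t24.50",
        "corrupt line",               # silently skipped by the mapper
        "2013-02-02\twidget\t9.99",
    ]
    for product_id, total in reducer(mapper(sample)):
        print(product_id, total)      # gadget 24.5 / widget 19.98
```

The point is not the specific job but the shape of the work: Redshift confines you to SQL over columnar tables and is fast within those limits, while Hadoop lets you write whatever processing you need at the cost of speed and hand-written code.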
Other projects/products will have different tradeoffs, but they are almost always faster than Hadoop, since speed was almost always their primary design goal.
Where do you get the limit of 256TB compressed from?
"Amazon Redshift enables you to start with as little as a single 2TB XL node and scale up all the way to a hundred 16TB 8XL nodes for 1.6PB of compressed user data."