A decision tree is a finite set of if-else branches, and it operates on a fixed, finite set of inputs - much like most classic machine-learning models. I suspect that many of the classic arbitrary-size-input algorithms cannot be represented by a decision tree.
Adding two arbitrarily sized numbers comes to mind - how would one model that as a decision tree?
You can see that, for arbitrarily sized input, such a decision tree would have to grow without bound - which contradicts the finite nature of decision trees.
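To make the intuition concrete, here is a toy sketch in Python (all function names are mine, purely for illustration). It builds a decision tree that adds two n-bit numbers in the only way a pure decision tree can: by branching on every bit of both inputs and storing the precomputed sum at each leaf. The leaf count is 2^(2n), so the tree's size explodes as the input width grows - and no single finite tree covers all widths.

```python
def adder_tree(n):
    """Decision tree (as nested dicts) that adds two n-bit numbers.

    Every internal node branches on one input bit; every leaf stores
    the sum for one concrete (a, b) pair.  That means the tree needs
    one leaf per possible input pair: 2**(2*n) leaves in total.
    """
    def build(bits, remaining):
        if remaining == 0:
            a = int(bits[:n], 2)   # first n bits encode a
            b = int(bits[n:], 2)   # last n bits encode b
            return a + b           # leaf: the precomputed answer
        return {0: build(bits + "0", remaining - 1),
                1: build(bits + "1", remaining - 1)}
    return build("", 2 * n)

def evaluate(tree, a, b, n):
    """Walk the tree one bit-test at a time, like chained if/else."""
    bits = format(a, f"0{n}b") + format(b, f"0{n}b")
    node = tree
    for bit in bits:
        node = node[int(bit)]
    return node

def count_leaves(node):
    """Count leaves, i.e. how many distinct input pairs the tree covers."""
    if not isinstance(node, dict):
        return 1
    return sum(count_leaves(child) for child in node.values())
```

For 3-bit inputs the tree already has 64 leaves, for 4-bit inputs 256, and in general 4^n - whereas the ordinary addition algorithm handles any width with a constant amount of code.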
I can't think of a proof that such a computation cannot be represented by a decision tree, and it's possible my definitions aren't quite right. But intuitively, I think this is how it works.
Of course, if I'm wrong, any explanation on why and how would be very welcome.