Examples of monotonic computations that fit are the traditional database operators: filters, projections, joins. Interestingly enough, joins do need some coordination, i.e. the items that match need to end up in the same place at some point. But that's not the type of coordination the paper is pushing back against: it's still possible to distribute the computation without worrying about ordering.
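A toy sketch of what monotonicity buys you (made-up data, purely illustrative): a filter's output can only grow as its input grows, so nodes can emit results as facts arrive, in any order, without coordinating.

```python
# Monotonicity in one picture: a filter over a growing set of facts only
# ever adds to its output, never retracts anything already emitted.
def evens(items):
    return {x for x in items if x % 2 == 0}

small = {1, 2, 3, 4}
large = small | {5, 6, 7, 8}         # strictly more input

assert evens(small) <= evens(large)  # output grows monotonically with input,
                                     # so nodes never need to agree on order
```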
Examples of non-monotonic computations (i.e. ones that don't fit) include aggregations, in particular over time series with windows. The answer depends on what is in the window: if you take an average over only a subset of the data in a window, the output will change when more data arrives in that window. At least, that's how I interpret the monotonicity requirement.
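The same toy setup shows the problem: one late arrival inside the window invalidates the answer that was already computed, which is exactly what monotonic operators never do.

```python
# Non-monotonicity: a windowed average is not stable under new input.
window = [10.0, 20.0]
print(sum(window) / len(window))   # 15.0 -- answer over a subset of the window

window.append(90.0)                # more data lands in the same window
print(sum(window) / len(window))   # 40.0 -- the earlier answer is invalidated
```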
The authors postulate that not all computations (programs) should require global coordination.
If you compartmentalize data in such a way that most of your computations are done locally on a node (with data that fits on a single node), then you don't need global coordination.
For example, if you have two nodes, one in the US and one in the EU, and all customer data is stored in the corresponding geo region => you don't need global coordination. EU users and all their related computations can be perfectly well handled without coordinating with the US node.
The US node serves US customers with all compute done locally; the same logic applies to the EU node, and an Asia node.
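A minimal routing sketch of that idea, assuming a hypothetical NODES map (geo region is the partition key here, but any key that keeps related data on one node works the same way):

```python
# Hypothetical node addresses; in practice this would be service discovery.
NODES = {
    "US":   "us-node.internal",
    "EU":   "eu-node.internal",
    "ASIA": "asia-node.internal",
}

def node_for(customer_region: str) -> str:
    # Every read/write for a customer goes to their home node, so serving
    # them never requires cross-region coordination.
    return NODES[customer_region]

assert node_for("EU") == "eu-node.internal"
```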
For the subset of computations that require a global view, you can either broadcast the shared data to the regions, or do some sort of scatter/gather (map/reduce).
But partitioning by geo is only one of many possible schemes; there could be many other ways to partition your global working set so as to minimize global coordination.
I call this the "many small worlds" approach, where each small world fits on a single node and does not need global coordination.
Contrast it with "a single large world", where you have a planet-scale, globally distributed, replicated system with consensus. You spend a lot of resources on duplication and coordination, and end up wasting 80% of the system's capabilities storing data that could be perfectly well served from a single cheap EC2 host.
Even for computations that require a global answer, say you need to calculate a global AVG():
you can rewrite it as a scatter/gather algorithm by splitting it into 1) TotalSum and 2) TotalCount,
scatter this across the nodes, compute locally, gather the results (two fields per node), and compute the global average.
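A sketch of that rewrite (node contents are made-up sample data): each node returns just its (TotalSum, TotalCount) pair, and the gatherer combines them.

```python
# Per-node data; in a real system each list lives on its own node.
nodes = {
    "us":   [10.0, 20.0, 30.0],
    "eu":   [5.0, 15.0],
    "asia": [40.0],
}

def local_partial(values):
    # Runs entirely on the node that owns the data.
    return sum(values), len(values)

partials = [local_partial(v) for v in nodes.values()]  # scatter + local compute
total_sum = sum(s for s, _ in partials)                # gather: 2 fields per node
total_count = sum(c for _, c in partials)
print(total_sum / total_count)                         # 20.0
```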
A lot of globally coordinated computations can be rewritten in this scatter/gather manner to parallelize and offload the work to nodes without global synchronization.