I should add more context here. A data-flow analysis framework has significant overlap with an SSA-based analysis framework, except that the SSA-based one leads to simpler analyses that are often more powerful.
A good comparison is CCP (conditional constant propagation) vs SCCP (sparse conditional constant propagation) - the SCCP version is naturally flow-sensitive (that's a property of SSA form), so it can eliminate entire branches, leading to more dead code being eliminated.
Aren't SSA analyses generally only "kinda flow-sensitive" since values are only renamed where control flow merges, not where it splits? E.g.
int x = ...;
if (x > 0) {
    use(x);
} else {
    use(x);
}
In a DFA where you track information about each variable at each program point, you could push the constraint implied by the condition into each branch. If you do a sparse analysis on SSA form, that doesn't come naturally, right?
To handle this case some SSA implementations add a concept of "pi" nodes, which are used to artificially split variables on branches that establish some useful data flow property.
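Sketched on the example above (the pi nodes here are pseudocode in the style of extended SSA, not real C):

x1 = ...;
if (x1 > 0) {
    x2 = pi(x1);   // x2 carries the fact x2 > 0
    use(x2);
} else {
    x3 = pi(x1);   // x3 carries the fact x3 <= 0
    use(x3);
}

Because each branch now uses a distinct SSA name, a sparse analysis can attach the branch condition's constraint to that name without tracking per-program-point facts.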