
I should add more context here. A data-flow analysis framework has significant overlap with an SSA-based analysis framework, except that the SSA-based one leads to simpler analyses which are more powerful.

A good comparison is CCP (conditional constant propagation) vs SCCP (sparse conditional constant propagation): the SCCP version is naturally flow-sensitive (that's a property of SSA form), so it can eliminate entire branches, leading to more dead code being eliminated.
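
To make this concrete, here's a toy C function (my example, in the spirit of Wegman and Zadeck's paper) where the conditional part matters. A non-conditional constant propagator meets 1 with i + 2 and concludes i is not a constant; SCCP only propagates along edges it has proven executable, so the else edge stays dead and i stays a known constant:

    int f(int n) {
        int i = 1;            /* SCCP: i is the constant 1            */
        while (n-- > 0) {
            if (i == 1) {     /* folds to true, so the else edge is   */
                /* ... */     /* never marked executable              */
            } else {
                i = i + 2;    /* never visited: it can't force i to   */
            }                 /* "not a constant" at the loop's phi   */
        }
        return i;             /* SCCP folds this to: return 1         */
    }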




Aren't SSA analyses generally only "kinda flow-sensitive" since values are only renamed where control flow merges, not where it splits? E.g.

  int x = ...;
  if (x > 0) {
    use(x);
  } else {
    use(x);
  }
In a DFA where you track information about each variable at each program point, you could push the constraint implied by the condition into each branch. If you do a sparse analysis on SSA form, that doesn't come naturally, right?
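
Rendered in SSA form, the snippet above keeps one name across the split; a sparse analysis stores one lattice value per SSA name, so it has nowhere to record a separate fact for each use:

    x1 = ...
    if (x1 > 0) {
        use(x1)    // a sparse analysis has one fact for x1,
    } else {       // so it must cover both of these uses
        use(x1)
    }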


To handle this case, some SSA implementations add "pi" nodes, which artificially split a variable on branches that establish a useful data-flow property.

    x1 = ...
    if (x1 > 0) {
        x2 = pi(x1 & RANGE[1..])
        use(x2)
    } else {
        x3 = pi(x1 & RANGE[..0])
        use(x3)
    }
    x4 = phi(x2, x3) // if used
I have placed the pi nodes inside the blocks, but semantically they live on the incoming control-flow edges.

See e.g. e-SSA in the ABCD value-range inference algorithm (Bodík, Gupta, and Sarkar, "ABCD: Eliminating Array Bounds Checks on Demand").
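
The refinement a pi node performs is just an interval intersection on the taken edge. A minimal runnable sketch in C (the Range type and names are mine, not from the paper):

    #include <limits.h>
    #include <stdio.h>

    typedef struct { long lo, hi; } Range;  /* value-range lattice element */

    /* pi(x & RANGE[lo..hi]): intersect the incoming range with the
       constraint the branch condition establishes on this edge. */
    static Range pi(Range in, Range c) {
        Range r = { in.lo > c.lo ? in.lo : c.lo,
                    in.hi < c.hi ? in.hi : c.hi };
        return r;
    }

    int main(void) {
        Range x1 = { LONG_MIN, LONG_MAX };           /* x1 = ...           */
        Range x2 = pi(x1, (Range){ 1, LONG_MAX });   /* then edge: x1 > 0  */
        Range x3 = pi(x1, (Range){ LONG_MIN, 0 });   /* else edge: x1 <= 0 */
        printf("x2 in [%ld, %ld]\n", x2.lo, x2.hi);  /* prints [1, max]    */
        printf("x3 in [%ld, %ld]\n", x3.lo, x3.hi);  /* prints [min, 0]    */
        return 0;
    }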


That's correct; it doesn't come for free.



