Yes, I saw this, and was a little disillusioned at first, but after looking carefully this isn't big data: their entire dataset fits in RAM. When your dataset can't fit in RAM is when the last resort comes into play. Sadly, I agree that most companies don't know when data is really big data; most of the time it's just medium data. And I agree about the overhead costs.
128 billion edges: 1 TB of data just to list the edges as pairs of 32-bit integers, and 154 GB after cleverly encoding edges as variable-length offsets along a Hilbert curve.
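The arithmetic checks out (128e9 edges × two 4-byte vertex ids ≈ 1 TB), and the compression trick is easy to sketch: once edges are mapped to positions along a space-filling curve and sorted, consecutive positions are close together, so you store small gaps as varints instead of full-width offsets. This is a toy illustration of that delta-plus-varint idea, not the paper's actual code; the Hilbert mapping itself is omitted, and the sample `offsets` list is made up.

```python
# Back-of-the-envelope: why the raw edge list is ~1 TB.
edges = 128_000_000_000
raw_bytes = edges * 2 * 4          # two 32-bit vertex ids per edge
print(raw_bytes / 1e12)            # ~1.02 TB

def encode_varint(n: int) -> bytes:
    """LEB128-style variable-length encoding: 7 payload bits per byte,
    high bit set on every byte except the last."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        if n:
            out.append(b | 0x80)
        else:
            out.append(b)
            return bytes(out)

def delta_encode(sorted_offsets) -> bytes:
    """Encode sorted curve offsets as varint gaps from the previous one."""
    prev = 0
    out = bytearray()
    for x in sorted_offsets:
        out += encode_varint(x - prev)
        prev = x
    return bytes(out)

# Toy demo (hypothetical offsets): nearby edges along the curve have
# small gaps, so most deltas fit in one or two bytes instead of eight.
offsets = [5, 9, 12, 300, 305, 70000]
blob = delta_encode(offsets)
print(len(blob))  # 9 bytes vs 6 * 8 = 48 for raw 64-bit offsets
```

The locality of the Hilbert curve is what makes the gaps small in the first place; with a random ordering the deltas would be huge and the varints would buy you little.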