If you really need indexes, extract the fields from your JSON and keep them as additional columns in the table. Key-value stores have always had problematic indexing on anything that wasn't the key. An RDBMS does indexing really well; lean on that side for what it's good at.
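A sketch of the extract-and-index approach, with a made-up `events` table for illustration (the field promotion would happen in your application or a trigger):

```sql
-- Keep the raw JSON for fidelity, but promote the fields you
-- actually query on into real columns and index those.
CREATE TABLE events (
    id         bigserial PRIMARY KEY,
    payload    json NOT NULL,
    user_id    integer NOT NULL,      -- extracted from payload on insert
    created_at timestamptz NOT NULL
);

-- A plain btree index: the RDBMS side doing what it does well.
CREATE INDEX events_user_id_idx ON events (user_id);
```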
w00t? A billion attributes on hundreds of billions of rows? Could you come up with a more contrived edge case? You're just making things up so you can show off how l33t you are. If you genuinely have that kind of need, go for a NoSQL server; nobody is stopping you. This feature is a nicety to have, not a replacement for NoSQL.
You could also do this in pg before 9.4: just create functional (expression) indexes on the paths you want to index. The advantage of 9.4 is dedicated index support for jsonb, which can cover the whole field or just specific paths. So even pg 9.2 was ahead of SQL Server's approach...
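Roughly what both approaches look like, assuming a hypothetical `docs` table with a `doc` column (note the `->>` operator arrived with the json operators in 9.3; on 9.2 you'd wrap the extraction in an immutable function and index that):

```sql
-- Pre-9.4 style: an expression index on a single JSON path.
CREATE INDEX docs_name_idx ON docs ((doc ->> 'name'));

-- 9.4+, with doc as jsonb: a GIN index over the whole document,
-- supporting containment (@>) and existence (?) queries...
CREATE INDEX docs_doc_gin ON docs USING gin (doc);

-- ...or a GIN index on just one path, via an expression.
CREATE INDEX docs_tags_gin ON docs USING gin ((doc -> 'tags'));
```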
Postgres fully supports indexing attributes within JSON fields, whether stored as the native json type or via the hstore extension. I just created a couple of indexes on our DML audit-logging archive table this afternoon, which stores old and new row state in hstore fields.
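For the hstore case, something along these lines (table and column names are invented, not the actual audit schema mentioned above):

```sql
-- Hypothetical audit table storing old/new row state as hstore.
CREATE TABLE audit_log (
    id         bigserial PRIMARY KEY,
    table_name text NOT NULL,
    old_row    hstore,
    new_row    hstore,
    changed_at timestamptz NOT NULL DEFAULT now()
);

-- Expression index on one attribute pulled out of the hstore...
CREATE INDEX audit_new_status_idx ON audit_log ((new_row -> 'status'));

-- ...or a GIN index on the whole hstore, supporting the
-- containment (@>) and key-existence (?) operators.
CREATE INDEX audit_new_row_gin ON audit_log USING gin (new_row);
```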