Yes - I primarily used spaCy [0] for all the NLP-related tasks (tokenization, etc.), along with empath [1]. The Stanford Parser was only used for constituency parsing, since spaCy doesn't support that. The different feature sets were then derived from that data.
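
To give a rough idea of how the two libraries fit together, here's a minimal sketch of spaCy + empath feature extraction. The specific feature names and the `en_core_web_sm` model are my own illustrative choices, not the exact feature sets from the project:

```python
# Illustrative sketch only: combines spaCy token/POS features with empath's
# lexical-category scores. Assumes `en_core_web_sm` is installed
# (python -m spacy download en_core_web_sm) and empath (pip install empath).
import spacy
from empath import Empath

nlp = spacy.load("en_core_web_sm")
lexicon = Empath()

def extract_features(text):
    doc = nlp(text)
    features = {
        # surface/syntactic features from spaCy
        "n_tokens": len(doc),
        "n_sents": len(list(doc.sents)),
        "avg_token_len": sum(len(t) for t in doc) / max(len(doc), 1),
    }
    # coarse part-of-speech counts
    pos_counts = {}
    for tok in doc:
        pos_counts[tok.pos_] = pos_counts.get(tok.pos_, 0) + 1
    features["pos_counts"] = pos_counts
    # lexical-category features from empath (normalized score per category)
    features["empath"] = lexicon.analyze(text, normalize=True)
    return features

print(extract_features("The quick brown fox jumps over the lazy dog."))
```

Constituency parses from the Stanford Parser would be a separate input feeding into the feature sets; that part isn't shown here.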