I'm a co-founder at EffectCheck and I was working closely with Scott this weekend as he was building this. It wasn't really ready for viewing yet, but okay... :)
Please note that the top graph is currently a mixture of two sets of data. The older points used a less sensitive and improperly calibrated HN comment model, which is why everything drifts around near "Typical." The points from 14:47 onward use the correct model.
For those wondering why HN was so negative from 14:47-20:47, I believe the main topics of conversation were the Bitcoin and Sega debacles. Makes sense that people were really anxious given all that news.
I'd like to know the scientific basis of EffectCheck. The site offers no explanation. Could you point out specific research papers, or link to technical info? Thanks!
"It seems that the point is to introduce their special-sauce black box, with an argument to authority about its methodology. I think the correlations you ask for are where the problems will lie, in that there is a value judgement that is being hidden. If I can put myself out on a limb here, I'd say that that measurement is going to be fundamentally unscientific."
Also, this seems like an important question, if you claim to be based in science:
"How can we falsify your claims? That is, what is a test we can perform that if it went a certain way, would show that your claims are false?"
It occurs to me that human color perception is another "subjective but scientific" field. You might want to research the experiments they performed for e.g. color matching in varying illumination levels, then come up with an experiment with a similar structure.
Specifically, if EffectCheck is accurate, then its output should correlate with how 100 average people would classify, e.g., a sample of 1,000 Twitter messages.
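To make that concrete, here is a minimal sketch (Python; the label lists are made-up placeholders, and the metrics are just one reasonable choice) of the kind of agreement check I mean:

    # Sketch of the proposed check. effectcheck_labels would come from the
    # tool's classifications; human_labels would be the majority vote of
    # ~100 raters per message. Both lists here are made-up placeholders.
    from sklearn.metrics import cohen_kappa_score

    effectcheck_labels = ["anxious", "happy", "neutral", "hostile", "happy"]
    human_labels       = ["anxious", "happy", "anxious", "hostile", "neutral"]

    # Raw agreement plus a chance-corrected measure.
    agreement = sum(a == b for a, b in zip(effectcheck_labels, human_labels)) / len(human_labels)
    kappa = cohen_kappa_score(effectcheck_labels, human_labels)
    print("raw agreement: %.2f, Cohen's kappa: %.2f" % (agreement, kappa))

If the tool is doing what it claims, agreement should come out well above chance; that would at least be a falsifiable starting point.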
So cool! This is what I had started working on for the HNSearch API Contest, but I hadn't gotten far. I'm SO glad someone did this. I feel as though this could be made into a useful and viable product, if it's marketed correctly and made accurate enough.
Do you mean something like a dashboard for a bunch of different social news sites? What would be some killer features that you think would make it into something people would pay for?
Eventually, yes: adding sites like Reddit, even newspapers, and trending topics on Twitter (and then evaluating the mood of tweets containing those keywords).
Also, allowing users to see the popularity of certain trends (traffic, if you will). I'm not familiar with the industry… maybe the product already exists… but it seems incredibly useful!
Wow, I slept through this whole thing. I'll be around now though. If anyone has any requests for different ways to interact with the data, let me know and I'll work on getting it added to the site.
Anxiety and depression definitely go together, in the same way that hostility and happiness are actually similar in terms of the type of emotion and level of intellect involved; confidence and compassion would be at the highest end of that spectrum.
I would also change the colors to group them: anxiety and depression as blues, hostile and happy as reds, and confidence and compassion as greens. Or something like that.
My co-founders [1][2] actually did spend a decent amount of time thinking about this. The emotions are first sorted into negative on the left and positive on the right; within that, the ordering pairs each word with its opposite on the other side:
- Anxiety is the opposite of Confidence
- Hostility is the opposite of Compassion
- Depression is the opposite of Happiness
As for the colors, each one matches the typical psychological association for that emotion. The exception is Happiness, which should be yellow, but yellow doesn't render well on websites.
If you're trying to illustrate position on three spectrums, illustrate three spectrums. Six bars on a graph imply to the reader, however vaguely and despite your intent, that the abscissa means something. But it doesn't; it just confuses the picture. So get rid of that abscissa.
In any case, anxiety and depression are generally a package deal clinically, so I would put them together visually.
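For what it's worth, something like this rough matplotlib sketch (the scores are invented placeholders; this isn't how the site computes anything) is what I mean by three spectrums:

    # Three diverging spectrums instead of six bars. Scores are invented
    # placeholders in [-1, 1]; negative = anxiety/hostility/depression,
    # positive = confidence/compassion/happiness.
    import matplotlib.pyplot as plt

    pairs  = ["Anxiety <-> Confidence", "Hostility <-> Compassion", "Depression <-> Happiness"]
    scores = [-0.4, 0.1, -0.2]

    fig, ax = plt.subplots()
    ax.barh(pairs, scores)
    ax.axvline(0, color="black", linewidth=1)  # neutral midpoint
    ax.set_xlim(-1, 1)
    plt.tight_layout()
    plt.show()

With a shared zero line, the anxiety/depression pairing (or any other grouping) is just a matter of row order.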
Too bad the EffectCheck API is not open for all. Looks like a well-parameterized sentiment analysis tool.
What does this page tell us: http://effectcheck.com/pricing
It could be used for stock analysis, and for measuring the dampening or amplification caused by other firehose-like sources such as Twitter.
It hopefully tells you that we're happy to work with you if you'd like to use sentiment analysis or lexical impact analysis in your company. :)
We are focused on B2B applications of our technology rather than the consumer/API side. However, if you have a cool idea for how you'd like to use EffectCheck, email me [1] and I'll be happy to discuss it with you.
I'll be adding the link to the site once I get home from work today, but I've set up a mailing list for anyone who might be interested in getting an email when new features are pushed out: http://eepurl.com/emtQU
It was neat when it came out years ago... although I'm still not sure how to make any meaningful use of this tool. I suppose it is more useful for content generators.
That's really cool! I had never seen that project before. I compared our colors to the results of the study:
Anxiety: #2288ff vs. #595884 (anxious)
Hostility: #ee0000 vs. #E0192C (angry) -- hostile only had 100 samples
Depression: #3300cc vs. #283152 (depressed)
Confidence: #ee00cc vs. #FF7F00 (confident)
Compassion: #11bb00 vs. #00696F (sorry) -- not sure on this word choice but compassionate wasn't available. Loving was #004E6F
Happiness: #ff8800 vs. #FF7F00 (happy)
So I would say we did well on all except maybe Confidence. However, confident shows up as almost the same color as happy, so we would have to differentiate somehow anyway.
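As a rough sanity check on that eyeball comparison, here's a quick sketch that computes plain RGB distances between the two palettes (RGB Euclidean distance is only a crude proxy for perceptual difference; a space like CIELAB would be more principled):

    # Crude comparison: Euclidean distance in RGB space between our colors
    # and the study's colors for each emotion listed above.
    palette = {
        "Anxiety":    ("#2288ff", "#595884"),
        "Hostility":  ("#ee0000", "#E0192C"),
        "Depression": ("#3300cc", "#283152"),
        "Confidence": ("#ee00cc", "#FF7F00"),
        "Compassion": ("#11bb00", "#00696F"),
        "Happiness":  ("#ff8800", "#FF7F00"),
    }

    def rgb(hex_color):
        h = hex_color.lstrip("#")
        return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

    for name, (ours, study) in palette.items():
        dist = sum((a - b) ** 2 for a, b in zip(rgb(ours), rgb(study))) ** 0.5
        print("%-11s %6.1f" % (name, dist))

On those numbers, Confidence comes out as by far the largest gap, which matches the eyeball impression.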
Can it work farther back in time? It would be interesting to see if there are longer-term trends. Happiness in particular could be indicative of overall satisfaction on HN.
I was really busy leading up to the contest, so I had to cut back on a lot of the features I wanted in order to hit the deadline. I will be working on adding more visualisations and detail as quickly as I can.