It feels pretty hollow to me. If the person wants to guide themselves through some issue, it's possible to do that within the format of the bot, but the bot isn't really assisting, and it's actually harder to follow a thought process through while responding to the bot than it would be without it.
Well... that's kind of critical, but I'm specifically interested in this stuff so I'm apt to view it with a critical eye.
I'm still trying to understand Rogerian therapy... I do think it has particular application here because it's centered on the idea that the healing or change the patient seeks ultimately comes from within that person. The humility of that stance suggests there might be a place for a computer, since it has an opportunity not to try to tell the person who they should be or how they should act, but only to act as a kind of emotional mirror, reflecting the patient's own statements and intentions back to them (hopefully constructively!). Most computer-assisted mental health products right now are, in comparison, very behaviorist.
There's also more to Rogerian therapy, especially empathy, which is perhaps forever out of reach of a computer. PRETENDING to feel empathy is possible, of course... but I believe another principle is honesty, and pretending to feel empathy is not honest.
Anyway, it feels like this could benefit from being more reflective: have the bot repeat back to the user more often. It's okay if it can't come up with new ideas or tell the person what to do, but being able to rephrase statements or pull up meaningful past statements could be genuinely insightful without attempting to generate insight.
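If I were sketching what I mean, it would be something like the old ELIZA trick: swap the pronouns and hand the statement back, or surface something the person said earlier in the conversation, rather than trying to generate advice. A toy Python sketch is below; the function names and the swap table are just made up for illustration, not taken from any actual product.

    import random

    # Toy pronoun swaps for turning a user's statement into a reflection.
    # Purely illustrative; a real system would need far more than this.
    REFLECTIONS = {
        "i": "you", "me": "you", "my": "your", "mine": "yours",
        "am": "are", "i'm": "you're", "i've": "you've",
    }

    def reflect(statement):
        """Rephrase a first-person statement back in the second person."""
        words = statement.lower().rstrip(".!?").split()
        return "You said " + " ".join(REFLECTIONS.get(w, w) for w in words) + "."

    def recall(history):
        """Pull up an earlier statement instead of generating new advice."""
        if len(history) < 2:
            return None
        return 'Earlier you mentioned: "' + random.choice(history[:-1]) + '"'

    history = ["I feel stuck at work", "I'm worried I can't change anything"]
    print(reflect(history[-1]))  # You said you're worried you can't change anything.
    print(recall(history))       # Earlier you mentioned: "I feel stuck at work"

Even something this crude points at the difference I'm after: the bot contributes nothing of its own, it just keeps the person's own words in front of them.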