Why didn't you cherry-pick an image where it covered everyone's face? The example has several people whose faces aren't obscured, or whose dot doesn't fully cover their face.
I'd like to understand the motivation behind your question.
Complaining about the unobscured faces would have opened a conversation about the training data and its limited inclusivity. But your comment focused on the impression a person looking at the result might form, given the software's incomplete feature set. I have trouble imagining the sort of visitor xrd would disappoint with the imperfect image, and would want to retain, who wouldn't notice the gap themselves during testing and complain then.
Usually when showing off a project it makes sense to show it working correctly. The first photo shows that the software doesn't work consistently, which could give someone a bad first impression. The photo isn't in a section about the project's limitations; it's in the section introducing the project.
If your face detection works 80% or 95% of the time, don't try to deceive me into thinking it's 100% accurate, only to leave me disappointed when I go through the effort of downloading, configuring, and testing it and then find out it's worse than the solution I'm trying to replace.
Meaning the two pictures in the readme? The first is an image from Wikipedia, and the second is my face. If there is another image that is concerning, please let me know and I'll remove it and clean the history. I built this because I am uncomfortable sharing images of others without their consent (as happens all the time on social media), and also uncomfortable sharing my own children's faces, because I completely distrust the social media companies in particular.
I think the commenter meant: why did you choose to show a non-perfect example of your software working? It missed a few faces in the black-and-white picture.