Google search doesn’t know the correct answer, but it does know what people usually search for, and it answers that instead. It’s silently adjusting "Mars" to "Moon" before responding.
It is not. No human has been on Mars (yet), and that's the point of the question: it demonstrates that Google's ML approach will silently "correct" and reinterpret the question, then give a wrong answer.
Although it's interesting to note that two presumably human commenters in this thread got that wrong. Maybe the Google results are more human than we would have guessed.
It's more that this shows the search engine isn't much better than a bag-of-words model: it doesn't seem to "know" enough logic/reasoning to parse the sentence and recognize that its premise is false because it says Mars instead of the Moon (a toy sketch of what I mean follows below).
BTW, a big reason for this is that the search-quality folks at Google left the building and were replaced with growth marketers.
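To make the bag-of-words point concrete, here's a minimal sketch (nothing like Google's actual pipeline; the "popular queries" list is invented for illustration). With plain word-count vectors and cosine similarity, the Mars query matches the Moon query best, because the shared filler words outweigh the one token that actually matters.

```python
# Toy sketch, NOT Google's pipeline: a pure bag-of-words matcher, where
# a query is reduced to word counts and compared by cosine similarity.
from collections import Counter
import math

def bow(text):
    # Bag of words: lowercased word counts, order discarded.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

# Invented stand-ins for "what people usually search for".
popular_queries = [
    "when did the first human land on the moon",
    "first human on mars mission date",
    "who was the first human in space",
]

query = bow("when was the first human on mars")
for candidate in sorted(popular_queries,
                        key=lambda c: cosine(query, bow(c)),
                        reverse=True):
    print(f"{cosine(query, bow(candidate)):.2f}  {candidate}")
```

Running this ranks the Moon query first (roughly 0.68 vs. 0.62 for the Mars candidate): word order and meaning are ignored, so "when was the first human on" pulls harder than "mars".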
This narrative doesn't really ring true to me. In any case, if you're saying this result indicates the system is no better than a bag-of-words model, doesn't that imply humans are no better either, considering the errors made by the two commenters in this thread?
To me it just seems like a query that is likely to trip up any imperfect entity, since a "when" question usually presupposes that the event in question actually happened.
Edit: yes, I'm wrong; I did a mental s/mars/moon/ without noticing.