You should clarify the sophomoric philosopher's point of view that you refer to. I've read the DFW essay in question and found it amusing, though I'm out of my depth as a philosopher. With this in mind, I asked Bard for clarification, and it said:
"The consensus among eminent philosophers with respect to David Foster Wallace's "Consider the Lobster" is that it is a thought-provoking and challenging essay that raises important questions about the ethics of eating animals. The essay has been praised for its intelligence, wit, and originality, and it has been cited as an important contribution to the debate about animal rights.
Some philosophers have argued that Wallace's essay provides a strong case for vegetarianism or veganism. They argue that lobsters are sentient creatures who can experience pain and suffering, and that it is therefore wrong to kill them for food. Others have argued that Wallace's essay does not go far enough, and that we should also consider the ethics of eating other animals, such as cows, pigs, and chickens.
Still others have argued that Wallace's essay is not primarily about animal rights, but is instead about the nature of consciousness and the meaning of life. They argue that Wallace's essay raises important questions about what it means to be a conscious being, and about the relationship between humans and other animals.
Whatever one's views on the ethics of eating animals, "Consider the Lobster" is a powerful and thought-provoking essay that deserves to be read and discussed.
Here are some specific examples of how eminent philosophers have responded to "Consider the Lobster":
Martha Nussbaum, a philosopher at the University of Chicago, has praised the essay for its "beautiful and moving" exploration of the "moral status of nonhuman animals."
Peter Singer, a philosopher at Princeton University, has called the essay a "powerful and important" contribution to the debate about animal rights.
Cora Diamond, a philosopher at the University of Virginia, has argued that the essay "challenges us to think more deeply about the nature of consciousness and the meaning of life."
Overall, the consensus among eminent philosophers is that "Consider the Lobster" is a valuable and important work that deserves to be read and discussed."
So if the AI thinks eminent philosophers have reached consensus on DFW's lobster points, who is it that you are referring to with respect to your phenomenological comment, made without any empirical evidence? I can't say that Bard is empirically perfect, but my speculative intuition is that the quoted references are correct, though I haven't taken the time to verify them.
your post would be more convincing if it didn't entirely rely on the output of an AI model, something well known for being unable to distinguish correctness from sounding correct, and for hallucinating fake information and even fake sources
thus, the logical end of the sentence "So if the AI thinks eminent philosopher's have consensus on DFW's lobster points..." is "then we can make no conclusions solely based on that AI output, because of the aforementioned reasons"
I find myself frequently at an inflection point between the tyranny of time and the need for correctness in communication. The role technology plays, and the toll it takes in return, is particularly troubling: humans have been overwhelmed by the broad and deep flood of information coming at them on ever-shorter time-scales since the publication of Infinite Jest nearly three decades ago, which is roughly where the time scales in Peter Pirolli's research start. See the slides linked below.
That's the phenomenological n-of-1 take. For a more empirical treatment, see Peter Pirolli's slide deck here:
The self-reflective point that you make is good, but it hides the fact that what we are missing is a set of meta tools around emergent AI that allow us to build scalable collaborations of human-computer sensemaking teams - the kind Pirolli implies.
At this point, even the ability to run those meaning-making, sense-making, validation, and verification feedback loops in overnight batch mode would be an improvement on where we are, since our accusations of scientism vs. anthropo-robotic models of mental disorders could be rendered moot by agents that could actually resolve complex models of evolutionary epistemology.
I feel like we all want the same thing: Truth. However, some of us are more tolerant of what we have now: prototypes, proxies, and n-of-1, back of the envelope verification, validation, fact-checking, if you will.
Human beings also cannot distinguish between fantasy and reality (on average); it's just that we're all used to our own fantasies and can work around them. In this case, the poster did not understand why something was the case and indicated they were out of their depth: the AI response is meant to serve as a bridge to more useful discussion. It provides a few reference points of view, which gives the OP something to respond to, other than literally every aspect of why they feel something is the case.
It's not meant as an exhaustive source that is correct about everything - we're talking about philosophy, such a source doesn't exist in the first place. Therefore, decrying something on the grounds that it is possibly not correct is the worst kind of response: If that is the bar, nobody may respond.
> Human beings can also not distinguish between fantasy and reality (on average)
this statement is prima facie untrue, and quite ridiculous sounding
it is certainly the case that humans sometimes cannot tell without checking, which is why it's important to verify the output is correct rather than relying on our own sometimes faulty wrongness detection capabilities
the AI output could have been completely made up, hence why it can't be relied upon for factual claims in the way you relied on it
> this statement is prima facie untrue, and quite ridiculous sounding
If it sounds ridiculous to you, I invite you to speak to some actual human beings. They believe so many things that are patently ridiculous (and frequently contradict each other) that it's like stepping into an ocean of lies. I'm honestly surprised that this specific part of my statement is that weird to you.
> it's important to verify the output is correct rather than relying on our own sometimes faulty wrongness detection capabilities
This is a step you must take with both AI and people. Given that this is the case, I do not see a difference in value in terms of the reliability of facts. If anything, you could make a case that human imagination is more valuable (because the AI is just a mathematical reproduction of whatever imagination went into it). That said, I can't think of a reason why that would be the case outside of a human-superiority angle, which is tedious.
> the AI output could have been completely made up
Hence why I called it a springboard for further discussion and not an encyclopedia of absolute fact.
> If it sounds ridiculous to you, I invite you to speak to some actual human beings
sure, I will speak to you
below are 10 propositions, each either fantasy or reality. I invite you to try to determine which is which, then we can find out whether the comment about the average is true:
I'd also hasten to add that Nussbaum and Singer aren't exactly known for doing philosophy for philosophers. They're both public intellectuals with political axes to grind.