You might be interested in experimental philosophy. It's where philosophers survey people to probe how common a given intuition is. There's ongoing debate regarding whether the results pose a problem for certain meta-ethical positions (e.g. moral realism). https://plato.stanford.edu/entries/experimental-moral/#MorJu...
Also of note is Utilitarianism. Utilitarianism is a moral theory holding that we ought to maximize the amount of happiness in the world/universe/multiverse while minimizing the amount of suffering, and that the morality of an action depends only on its consequences. So it's ok to lie/cheat/steal for the greater good.
Often, objections to Utilitarianism come in the form of "consider situation X. In X, maximizing utility requires us to do something horrible. Our intuitions scream not to do it, so utilitarianism is wrong". However, if these intuitions are unreliable, then so are the objections. Utilitarians can then come out and say 'the intuitions required for our theory are much more obvious/reliable/<other positive adjectives>, so our theory is better supported'.
But I don't want to say that Utilitarianism is the be-all and end-all. For instance, there's Kant, who built his moral theory on abstract first principles rather than situational intuitions. I, unfortunately, am not well versed in him, so that's all the detail that I'll go into.
Now, there are a couple of points that you made which I'd like to respond to.
> If I don't agree with 1, your whole argument breaks down.
As an addendum, this is a good thing! It means that this premise, if true, directly supports the conclusion. Arguments with unnecessary premises get confusing. They're bad practice - just like dead code is bad. It's expected that you need to accept all of the premises in order to get to the conclusion.
Oh, and I think that your points above are really a rebuttal of (3.), not (1.). Even if other people have other intuitions, hopefully you still believe that (1.) I have the intuition that everyone ought to avoid causing unnecessary suffering? It's just not clear that (3.) my intuition of 1. provides at least some evidence that 1 may be true.
> <virmundi's examples of questionable intuitions>
Your examples are all great, but I disagree with the point you're trying to make. 100% consensus on moral intuitions isn't necessary. They're clearly a flawed sense that is prone to error. The important question is, rather, do they lead to truth more often than they lead to falsehood? I don't know the answer to that question, but it's the vital one.
> seldom is there a Joker as in Heath Ledger's character. If a person fully believes that others don't count, at all, then the rule breaks down.
I want to take a hit at this one separately too. The existence of s/a hypothetical Joker/very real sociopaths/ is not necessarily proof against moral intuitions. These people might just have an impaired ability to sense moral truths.
And all of this leads us back to whether moral intuitions are generally mostly sort-of good indicators of truth. Here, experimental philosophers could disprove this by showing great enough variance in moral intuitions, or close off this avenue of criticism by showing a high degree of correlation between the moral views of many people.
Otherwise, questioning whether moral intuitions are evidence goes pretty deep into epistemology, the study of how we know what we know. That's a giant can of worms. Great fun, but I've already created a monster with this post so I'll leave it.
TL;DR: It's all really a question of whether or not moral intuitions provide solid evidence for moral claims. Also, I want to see Batman again.