Best Moral Philosophy?
Although I'm hesitant, I think I would assess utilitarianism as the best moral philosophy.
The reason I'm hesitant though is because you could easily say something like this:
"There are 10 violently criminal people and 1 saint. The 10 criminals are on one train and the saint is on the other. You can only save 1 train from exploding." Then it would seem that saving the 10 criminals would be the "utilitarian" choice.
However, I say that choosing the metaphorical saint could be the "true" utilitarian choice because you have to look at the bigger picture. If you save the 10 criminals it would result in more "bad" for more people if they go unchecked, because their bad influence would ripple out and cause harm to a greater number of people.
Now, I'm sure there are plenty of "what if"s like "what if the criminals all changed their ways?" but we have to look at what's most likely. Whether you like it or not, a person's past says a lot about their future behavior. We're creatures of habit to say the least. If someone is a staunch vegetarian their whole life it's not likely that one day without provocation they will suddenly start eating meat.
Also, we must consider how OUR choices affect outcomes. If people are saved based on their numbers alone it says nothing about moral repercussions. It would mean that going with the crowd is always right no matter what the crowd is doing. If we reward people who are good all along even if it defies the crowd it encourages the behavior throughout society.
I don't know if it's possible to measure happiness without something like a survey for each individual to fill out, and people could easily lie. Perhaps instead of measuring the happiness felt we can measure the good deeds done. Of course that's also very subjective and somewhat fuzzy, but it's far more tangible and perceivable by others.
I think it's legit to analyze what good/bad and right/wrong mean.
Imo "good" is utilitarian by its definition. "Good" is happiness. We say that things are "good" because they make us happy. Sure, you can say that sometimes people are shortsighted and I think that just goes to prove that "good" is truly utilitarian. It's not what just makes us happy temporarily, it's what gives us the maximum amount of happiness. Often choices that make us happy in the short term will make us unhappy in the long run, thus making them actually "bad" choices.
Like the veggies vs cookies argument. Cookies may make us happy in the short term but if that's all we ever eat we will likely regret it because we'll have health problems.
Right and wrong may more or less be a matter of efficiency. If you're trying to mow your yard it's "right" to do it with a lawnmower and "wrong" to do it with a spoon. Applied morally, things that are "right" give you more efficient outcomes of "good" or happiness.
- guest_of_logic
- Posts: 1063
- Joined: Fri Jul 18, 2008 10:51 pm
Re: Best Moral Philosophy?
Hello Orenholt,
Good topic; morality is not discussed nearly enough on this forum, IMO.
Disclaimer: I haven't studied moral theories in any depth, and it's completely possible that the criticism I make doesn't apply to any utilitarian theory as actually advocated by moral philosophers. In any case, here is a little bit of critical thought on utilitarianism:
In its concern for aggregate well-being, it's possible that individual suffering gets overlooked. For example, let's say in a simplistic scenario there is a moral choice between two outcomes. The first outcome has an aggregate well-being score of 20 (where a positive score is "happiness", a negative score is "suffering", and the greater the magnitude, the greater the degree of happiness/suffering), whereas the second outcome has an aggregate well-being score of 25. It seems obvious that by at least a naive utilitarian standard, we should choose the second outcome, right? But what if I told you that in the first outcome, among all of the people involved (let's say there are 10 people), there is no suffering whatsoever, and all of the happiness is equally distributed (so, each person has a well-being score of 2, i.e. 20/10), whereas in the second outcome, nine of the people each have a well-being score of 5, while one of them has a well-being score of -20 (so, 9 x 5 - 20 = 25)? In other words, whilst the second scenario seems superficially to be better, in fact it would be outrageous to choose it given that one individual in it is going to be suffering that greatly. As I said, I'm not familiar with actual utilitarian philosophy, so it's possible that in the real world of utilitarian philosophy there are safeguards built in to deal with this sort of choice, but basically my point is that we need to be as concerned about individual suffering as we are about aggregate well-being.
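The arithmetic of the two outcomes can be sketched in a few lines of Python (the score lists and helper names are just illustrations of the hypothetical example above, not anything more):

```python
# Hypothetical well-being scores from the example above:
# positive = happiness, negative = suffering.
outcome_1 = [2] * 10          # ten people, happiness spread evenly
outcome_2 = [5] * 9 + [-20]   # nine people at 5, one person at -20

def aggregate(scores):
    # Naive utilitarian measure: total well-being across everyone.
    return sum(scores)

def worst_off(scores):
    # A crude safeguard: the well-being of the worst-off individual.
    return min(scores)

print(aggregate(outcome_1), worst_off(outcome_1))  # 20 2
print(aggregate(outcome_2), worst_off(outcome_2))  # 25 -20
```

The aggregate alone prefers outcome 2 (25 > 20), while the worst-off measure flags it (-20), which is exactly the tension being described.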
I have done a little bit of private writing on this issue, although I don't have it with me at the moment. Basically, I've suggested some additional principles on top of concern for aggregate well-being. If you're interested, I could post about them when I get access back to my writings.
One interesting thing to me about utilitarianism is that in the ideal (or at least it seems this way to me), it requires omniscience, or at least perfect knowledge of all future counterfactuals: that's the only way to be sure that the choice you make genuinely *will* lead to a better moral outcome. It seems to me, then, that any practical utilitarian philosophy is going to be an approximation of the real (ideal) thing, in which to some extent we rely on general principles rather than specific analysis.
I'm not sure I could label my own moral approach, but I'm generally more partial to a rights-based morality. I think there's definitely a place for utilitarian thought, I just find some of the utilitarian positions of e.g. Peter Singer to be too dismissive of the rights of individuals, e.g. his notion that (and I hope I'm recalling this correctly) it is acceptable in certain circumstances to kill disabled infants due to the negative impact that raising them will have on their family/community. To me, the right of the infant to life trumps those other considerations - I suppose for a similar reason to that which I outlined above, i.e. that we need to consider individual suffering (and/or death) as much as we need to consider aggregate well-being.
In terms of the meaning of good and bad, I agree that good is linked to happiness, although I would use a different term, "well-being", because I think it's broader and more inclusive - "happiness" can have shallow connotations. I also think we need to be careful, because it's not always clear exactly what *does* contribute to our well-being. The prevailing philosophy on this forum is a case in point - the recommendations the house philosophers would make for one's well-being are quite different than the average person's. So, when we assess happiness/well-being, whose definition and recommendations do we use? Do we grant each person their own definition? And what if a person is wrong about what contributes to their own well-being? Do we have the right to force a truer definition upon them?
- Kelly Jones
- Posts: 2665
- Joined: Wed Mar 22, 2006 3:51 pm
- Location: Australia
- Contact:
Re: Best Moral Philosophy?
Orenholt wrote: "Good" is happiness. We say that things are "good" because they make us happy. Sure, you can say that sometimes people are shortsighted and I think that just goes to prove that "good" is truly utilitarian. It's not what just makes us happy temporarily, it's what gives us the maximum amount of happiness.

I hope you can do better than this shallow and mediocre examination of morality. To define morality in the first place requires the use of reason, and an interest in truth. Yet your argument for moral values doesn't refer in any way to either, so how can it be correct?
You go on to write: Applied morally, things that are "right" give you more efficient outcomes of "good" or happiness.

In effect, your definition implies that if being truthful causes you to be unhappy, it's bad.
Isn't it starkly obvious to you that reasoning in that way is amoral, according to its own definition, because it disconnects itself unconsciously from its own fundamental mechanism? Look at what you're asserting: at the very outset, that morality must be based on reason (otherwise you couldn't start to assert anything). Then you completely reject reason, by ignoring it as the fundamental criterion for morality. What a cock-up.
- David Quinn
- Posts: 5708
- Joined: Sun Sep 09, 2001 6:56 am
- Location: Australia
- Contact:
Re: Best Moral Philosophy?
It all comes down to your values. Whatever supports and promotes your values is "good".
- Kelly Jones
- Posts: 2665
- Joined: Wed Mar 22, 2006 3:51 pm
- Location: Australia
- Contact:
Re: Best Moral Philosophy?
I used to agree with Shakespeare's line in Hamlet: "for there is nothing either good or bad, but thinking makes it so." But on closer examination, it's shown to be a henid. If thinking evaluates something as good or bad, then thinking is the function of determining morality. That means, thinking itself equates to morality. So an irrational person is amoral.
It might seem pleasant to believe good or bad is purely arbitrary, and that one has the freedom to say anything is good or bad, but it derives from a basically irrational mindset, disconnected from its thinking (if one can call the artifacts of an irrational mind "thinking"). It is an extension of the internally contradictory postmodernist stance, that anything can be true.
- Kelly Jones
- Posts: 2665
- Joined: Wed Mar 22, 2006 3:51 pm
- Location: Australia
- Contact:
Re: Best Moral Philosophy?
Kelly Jones wrote: That means, thinking itself equates to morality.

In other words, thinking itself is the standard by which anything is evaluated as good or bad. One could say thinking = goodness.
Re: Best Moral Philosophy?
@guest_of_logic
You make a good point about the "amount" of happiness. The same could be said for the "amount" of good deeds done if we use good deeds rather than happiness as a measuring stick. Perhaps the best outcome is the most good deeds per person?
I'd have to think more about that.
I agree that human rights are important. Maybe there should be a baseline rule that no one can be killed without their own consent.
As I said before, I think that good deeds done (or at least attempted) should be the way things are measured in utilitarian terms, rather than happiness.
@Kelly Jones
Isn't it obvious that reason and truth are REQUIRED for anything to function? Do I really need to explicitly state something so clear?
Morality is subjective other than what I've outlined. Everyone has a different idea of what makes them "happy", so I'm going with something more concrete: actions rather than feelings. But feelings must be the basis of ALL values. You can value something logically but only if it makes you happy. I'm in no way saying that truth is "bad". Only foolish people would think that the truth IS bad, because they don't know how to make it work to their own advantage. For example, Bill is sad because he found out that his wife is cheating on him, but what he doesn't realize is that it's better that he knows this, because now he can leave her for a better relationship that's more honest and healthier overall.
-
- Posts: 3771
- Joined: Tue Sep 05, 2006 11:35 am
Re: Best Moral Philosophy?
David Quinn wrote: It all comes down to your values. Whatever supports and promotes your values is "good".

The original question was about the best moral philosophy. Some people's values do not reflect moral choices.
I'd agree that utilitarianism is the best moral philosophy because it seems to me that both morality and utilitarianism are doing that which is objectively best for the greatest amount of sentient beings - limited to sentient beings because the non-sentient do not perceive positive or negative consequences of action.
If two things can be defined the same way, then they are equivalent; therefore, morality = utilitarianism.
- Kelly Jones
- Posts: 2665
- Joined: Wed Mar 22, 2006 3:51 pm
- Location: Australia
- Contact:
Re: Best Moral Philosophy?
Orenholt wrote: feelings must be the basis of ALL values.

How did you evaluate that to be true? By feelings? If so, you've thrown reason out the window. If not, you're contradicting yourself. So the statement is wrong either way.
Re: Best Moral Philosophy?
Kelly Jones wrote:
Orenholt wrote: feelings must be the basis of ALL values.
How did you evaluate that to be true? By feelings? If so, you've thrown reason out the window. If not, you're contradicting yourself. So the statement is wrong either way.

I evaluate it to be true through logic.
Why do we eat food or seek shelter or so on? It's all based on the psychologically hedonistic nature of human beings.
Tell me how you can "value" something without the reason being a positive outcome of some sort.
Even seeking truth is for a positive outcome of being at peace and having understanding.
Wisdom always leads to a greater ability to accumulate happiness/peace/satisfaction.
Seriously, name just ONE thing you value that doesn't bring you ANY good feelings whatsoever, even in theory.
- Kelly Jones
- Posts: 2665
- Joined: Wed Mar 22, 2006 3:51 pm
- Location: Australia
- Contact:
Re: Best Moral Philosophy?
Orenholt wrote:
O: feelings must be the basis of ALL values.
KJ: How did you evaluate that to be true? By feelings? If so, you've thrown reason out the window. If not, you're contradicting yourself. So the statement is wrong either way.
O: I evaluate it to be true through logic.

Then you should say that "logic is the basis of ALL values".
Orenholt wrote: Tell me how you can "value" something without the reason being a positive outcome of some sort.

A positive outcome is simply the goal sought. That is all.
Orenholt wrote: Even seeking truth is for a positive outcome of being at peace and having understanding.

What you're describing is a mind driven by agitation and worry. Instead of seeking truth, they are seeking unconsciousness.
Orenholt wrote: Wisdom always leads to a greater ability to accumulate happiness/peace/satisfaction.

No, I don't regard that as wisdom.
Orenholt wrote: Seriously, name just ONE thing you value that doesn't bring you ANY good feelings whatsoever, even in theory.

Wisdom.
This puts it succinctly:
The common mind imagines a self
Where there is nothing at all,
And from this arise emotional states -
Happiness, suffering, and equanimity.
The six states of being in Samsara,
The happiness of heaven,
The suffering of hell,
Are all false creations, figments of mind.
Likewise the ideas of bad action causing suffering,
Old age, disease and death,
And the idea that virtue leads to happiness,
Are mere ideas, unreal notions.
Like an artist frightened
By the devil he paints,
The sufferer in Samsara
Is terrified by his own imagination.
From the Mahayanavimsaka, by Nagarjuna
Re: Best Moral Philosophy?
Kelly Jones wrote: Then you should say that "logic is the basis of ALL values".

But it's possible (while foolish) to value things without logic. For example, maybe I value getting drunk and having one-night stands with strangers despite the fact that it may have negative consequences. I'm valuing something without using logic to think through the possible outcomes.
Kelly Jones wrote: A positive outcome is simply the goal sought. That is all.

It's impossible to seek out a goal that you know will not make you happy on any level. Everything you do is to make yourself happy/satisfied on some level. Even if you decide "I'm going to prove Orenholt wrong by letting a dog bite me", you're still doing it just to get a sense of satisfaction from supposedly proving me wrong, when in fact you haven't.
Kelly Jones wrote: What you're describing is a mind driven by agitation and worry. Instead of seeking truth, they are seeking unconsciousness.

That's not necessarily the case. Some people believe that they will gain greater happiness through enlightenment than they would through being unconscious.
Kelly Jones wrote: No, I don't regard that as wisdom.

Whether you regard that as wisdom or not, it's still true that if you're wise you'll be able to be happier.
Kelly Jones wrote: Wisdom.

I'm sorry, but it makes no sense to say that you gain nothing from wisdom. Clearly the world would be a better, happier place than it is now if everyone were wise. To deny the goodness that arises from wisdom is silly.
- guest_of_logic
- Posts: 1063
- Joined: Fri Jul 18, 2008 10:51 pm
Re: Best Moral Philosophy?
Hello Orenholt,
I'm not so sure that "most good deeds per person" helps: all this does effectively is to divide the aggregate (and the "good deeds" aggregate is at least correlated with the "happiness" aggregate in that the point of good deeds is to create happiness, at least by utilitarian standards) by a constant (the number of people involved), which does nothing to deal with or counter the problem that I raised of individual suffering sometimes outweighing (from a moral perspective) aggregate well-being. I'm also not sure that a rule that nobody be killed without their consent is good enough either, because it is still possible to suffer greatly without being killed, whilst those around you experience much aggregate pleasure - and I don't want to live in a world where the few suffer greatly for the happiness of the many, nor do I think that such a world is moral.
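The "dividing by a constant" point can be checked with the same hypothetical numbers as the earlier two-outcome example (the lists below are those illustrative scores, nothing more):

```python
# Hypothetical well-being scores: positive = happiness, negative = suffering.
outcome_1 = [2] * 10          # aggregate 20, nobody suffering
outcome_2 = [5] * 9 + [-20]   # aggregate 25, one person at -20

# Dividing each aggregate by the (constant) head count re-scales both
# scores equally, so the ranking between the outcomes is unchanged.
per_person_1 = sum(outcome_1) / len(outcome_1)  # 2.0
per_person_2 = sum(outcome_2) / len(outcome_2)  # 2.5

# Outcome 2 still "wins" on the per-person measure, even though its
# worst-off individual sits at -20.
print(per_person_1, per_person_2, min(outcome_2))  # 2.0 2.5 -20
```

In other words, a per-person average inherits whatever blind spots the aggregate has, which is the objection being made.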
Elizabeth, for the reason I've already given, I don't necessarily agree that utilitarianism is equivalent to morality without a more specific definition of "utility": "best for the greatest amount of sentient beings" is kind of wishy-washy, and might include scenarios that I would object to morally.