In 'Why I Am Not a Utilitarian', Michael Huemer objects that "there are so many counter-examples, and the intuitions about these examples are strong and widespread, it’s hard to see how utilitarianism could be justified overall." But I think it's actually much easier to bring utilitarianism (or something close to it) into reflective equilibrium with common sense intuitions than it would be for any competing deontological view. That's because I think the clash between utilitarianism and intuition is shallow, whereas the intuitive problems with non-consequentialism are deep and irresolvable.

To fully make this case would probably require a book or three. But let's see how far I can get sketching the rough case in a mere blog post.

Firstly, and most importantly, the standard counterexamples to utilitarianism only work if you think our intuitive responses exclusively concern 'wrongness' and not closely related moral properties like viciousness or moral recklessness. They generally start by describing a harmful act, done for the purpose of some greater immediate benefit, but that we would normally expect to have further bad effects in the long term (esp. the erosion of trust in vital social institutions). The case then stipulates that the immediate goal is indeed obtained, with none of the long-run consequences that we would expect. In other words, this typically disastrous act type happened, in this particular instance, to work out for the best. (The organ harvesting case is perhaps the paradigm in this style.) So, the argument goes, consequentialism must endorse it, but doesn't that typically-disastrous act type just seem clearly wrong?

To that objection, the appropriate response seems to me to be something like this: (1) You've described a morally reckless agent, who was almost certainly not warranted in thinking that their particular performance of a typically-disastrous act would avoid being disastrous. Consequentialists can certainly criticize that. (2) If we imagine that somehow the voice of God reassured the agent that no one would ever find out, so no long-run harm would be done, then that changes matters. The salience of the harm done to the first innocent still makes it a bitter pill to swallow. There's a big difference between your typical case of "harvesting organs from the innocent" and the particular case of "harvesting organs from the innocent when you have 100% reliable testimony that this will save the most innocent lives on net, and have no unintended long-run consequences." But when one carefully reflects on the whole situation, vividly imagining the lives of the five innocents who would otherwise die, and cautioning oneself against any unjustifiable status-quo bias, then I ultimately find I have no trouble at all endorsing this particular action, in this very unusual situation.

Amateur philosopher here. The comments below are a bit disorganized, sorry, but I contend there's some good stuff in there.

Virtue ethics already captures intuitions excellently in many contexts, but it requires a lot of judgment to be exercised and is sort of silent in many important contexts as we get more tech at our disposal.

I think part of the issue here is that people put different weights on how much you put on sparsity vs. fit (to make the obvious statistics analogies). There are tradeoffs here. They say "the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience." In some ways, such as the measurement of utility, I think most philosophers feel that no set of basic elements has yet been found (and perhaps never will) that captures intuition well enough to trust it consistently over strong intuition. The question becomes how one might try to work towards building a better set of basic elements to approximately capture most intuitions. The better such a theory we can find, the more we'd be willing to trust it when it goes against stronger and stronger intuitions.

What would a good theory that approximated deontological ethics look like? It would need to use methods from economic theory and philosophical logic (including fuzzy logic) to describe 1) tradeoffs between different values and 2) different kinds of "relativity" that are inherent in deontology. It would need to talk about how much weight you give to the various deontological ideas in different situations (see two sentences back for a list of these).