Editor's note: This is the fourth in a series of Hoover Hog interviews.
____________________
INTRODUCTION
Perhaps it isn't an accurate memory. An entomologist's disinterested assurance would be enough. And a child is surely mistaken, in any event, about the implications. Anthropomorphism is an adaptive trait, and a trap. Insects are machines. Robot vessels. Complex knots of elementary ganglia bound to a dumb instinctive script. So let it all be forgotten. Perhaps until years later when the boy draws his bow and pierces the trunk of a grazing beast. A good shot, to make his father proud. The animal hurries in frantic limping instinctive desperation to a small thicket where it will nest to die. Exsanguination or cardiac response. It doesn't much matter. You simply wait and follow the blood trail. Properly cured, venison makes fine chili.
Sometimes known as "Utlilitarian," an obscure thinker now writes primarily under the pen-name "Alan Dawrst." He is distinguished as a kind of intellectual anomaly -- a Rara avis among cerebrally adventurous dragon chasers. His project is grounded in empiricism, in abstruse math and Bayesian rationality; yet his conclusions seldom reduce to form. Once a staunch atheist, Dawrst converted to Christianity when the logic of Pascal's wager proved inescapable. His worldview is wise to edging transhumanist speculation, yet he remains deeply skeptical of the boundless optimism expressed by Kurzweil's wide-eyed futurephiles. His work considers the prospect of Mutiversal suffering. The problem of hell. The fear-fraught plight of wildlife. And the terrible child-intuited possibility, however remote, of insect sentience.
Dawrst is perhaps tragically preoccupied. He is troubled. He is sincere. Although his "essays on reducing suffering" may provoke weird Borgesian fits on first pass, the problems he identifies entail potentially staggering consequences. Yet he faces all of it with stolid determination. He doesn't cop to futility. He doesn't seek refuge in obscurantist swamps. Utilitarianism is merely one available method, of course. An inexactly conceived philosopher's tool. And if the primacy of suffering should at some point tempt a more dangerous spiraling skein of deduction, the urgency of that event must be postponed, and perhaps denied. The wages of judgment tip the ledger, after all, demanding Christian humility. The work requires prudence.
There are facts and there is math. There are terrible uncertainties at every grasping turn. Bias and confidence serve merely to fog a shrouded sensory order, to blunt the enormity. From the cozy luck-enabled redoubt of our cultivated subjective optimism, dire and vast implications may be forgotten or dismissed. But nothing can erase a brute reality. You don't know what you don't know, and the situation may be worse than your capacity to imagine. Someone is being tortured. A sore-infested pig is encased in a tiny pen. Dragons rend and devour the living flesh of a cow. Whether you blame Darwin or god, there is the unyielding, irrevocable fact of suffering. Easily shelved, until you refuse. Or until you can no longer. Even if it is better, or at least easier, not to be so troubled.
____________________
WORLDS OF SUFFERING: AN INTERVIEW WITH ALAN DAWRST
HOOVER HOG: You approach moral problems from the perspective of utilitarianism. Why? What attracts you to the utilitarian tradition, when there are so many other ways to sort through ethical questions?
ALAN DAWRST: I'm not terribly interested in a lot of aspects of ethics as it's traditionally construed: virtue, rules for good conduct, judgment of actions, etc. I'm more concerned with getting things done. Here's a simple illustration, extended from the example of the drowning child that Peter Singer gives in his essay "Famine, Affluence, and Morality."
Suppose you come across a pond in which 15 children are in the process of drowning. Do you sit and ponder whether it's part of the good life for you to rescue them? Do you ask whether there's a universal moral law commanding you to pull them out? I suppose you could. But my response would be to think, "Wow, if I were in their position, I wouldn't want to drown. Let me see what I can do to help them!"
That simple feeling of empathy seems to me the most natural, basic response to the situation. You can talk about all kinds of abstract moral values and principles of righteousness, but at the end of the day, those don't have anything to do with why I actually care.
In practice, the world really is a big pond with kids drowning all the time: There are billions of people suffering from preventable poverty, disease, and violence, billions of animals enduring dreadful lives on factory farms, and orders of magnitude more animals in the wild that are sick, hungry, or being eaten alive. My response is, "That's terrible! What can I do to make things better?"
Your writing is overwhelmingly concerned with the problem of suffering. Why should people care about suffering, other than their own?
The most I can say is that many people are concerned with the suffering of others, and I'm thankful for that. There's nothing I could say to a paperclip maximizer that would make it start caring about suffering, for example.
Your emphasis on reducing suffering leads you to consider problems that may never occur to most people, such as the possibility of invertebrate sentience and the harm-multiplying potential of "laboratory universes." Why are you drawn to areas that are so widely ignored, even among ethicists?
Well, to some extent, it's because areas that have been ignored can potentially offer high expected returns from being investigated (sort of like undervalued securities). But unlike the stock market, where most investors care a lot about their expected returns, there aren't many people who try to consistently maximize the expected value of their actions even when the probabilities involved become really small and the payoffs really big. So there's not much of an efficient market hypothesis for prevention of suffering.
I think the possibility that insects can suffer is a good example. The scientific jury is still out as to whether invertebrates can feel pain, and in light of the human tendency to be overconfident, I wouldn't assign a probability lower than, say, 0.001 to insect suffering. (My personal probability is closer to 0.2, but I'm not an expert. I know one entomologist whose probability is close to 1.0.) The number of insects in the world is vast (10^18, a billion billion), so the expected amount of insect suffering is still huge. Even if your probability of insect sentience were one in a million, the expected number of suffering insects would still outnumber humans 100 times over.
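To make the multiplication explicit, here is a minimal back-of-the-envelope sketch in Python. The 10^18 insect count and the candidate probabilities come from the answer above; the rough 7-billion human population (circa the interview's date) is our added assumption.

```python
# Back-of-the-envelope expected-value arithmetic for the passage above.
NUM_INSECTS = 1e18  # rough number of insects alive at any time (from the interview)
NUM_HUMANS = 7e9    # rough human population, circa 2008 (our assumption)

for p_sentience in (0.2, 1e-3, 1e-6):
    expected_sufferers = p_sentience * NUM_INSECTS
    ratio = expected_sufferers / NUM_HUMANS
    print(f"P(sentience) = {p_sentience:g}: "
          f"expected sentient insects = {expected_sufferers:.1e} "
          f"(~{ratio:,.0f}x the human population)")
```

Even at the one-in-a-million probability, the expected count works out to about 10^12, roughly 140 times the human population, consistent with the claim above.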
Yet, I'm unaware of any animal-welfare organizations addressing this issue, except for one spoof website and perhaps a few vegans who want to avoid honey for reasons of bee suffering. From a PR perspective, this may be sensible: If a lot of people are still resistant to caring about, say, chicken welfare, they'll be put off altogether by the suggestion that they should care about the feelings of the flies in their house. (I should add that I don't think concern for insect welfare necessarily implies we should avoid killing insects. Indeed, the opposite may be true: If I were a fly, I might prefer being squashed by a human as quickly as possible rather than dying slowly by disease or parasites a few weeks later.)
Still, it's not the case that humans are powerless to prevent insect suffering. To take one example, animal-welfare organizations could pay farmers to replace their current insecticides with other insecticides thought to be faster-acting and less painful. I don't necessarily recommend this as an optimal use of resources, but the fact that this crude proposal competes in cost-effectiveness with, say, distributing literature on factory farming is a proof-of-concept for why insect welfare shouldn't be taken off the table.
Getting back to your question, I think part of the reason for neglect of these issues is scope insensitivity: The fact that preventing 1000 times as much suffering doesn't feel 1000 times as satisfying. There's a natural desire to feel like you're doing something to help, but exactly what that is may not seem as important. Yet, even using basic measures like US$/DALY, the cost-effectiveness of something as tame as different population-based health interventions can vary by several orders of magnitude. There are a few organizations (e.g., GiveWell and, hopefully soon, Giving What We Can) that aim to help donors choose more efficient causes. And of course, cost-effectiveness analysis is what many economists do all the time, though often in crude and misleading terms where costs and benefits have to be translated into dollars. (A QALY in developed countries is valued at, say, $60,000, while interventions in the developing world that cost only tens or hundreds of dollars per QALY go unfunded.)
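A toy comparison makes the orders-of-magnitude point concrete. The $60,000 figure is from the answer above; the developing-world cost is a hypothetical placeholder.

```python
# Toy cost-effectiveness comparison; dollar-per-QALY inputs are illustrative.
interventions = {
    "developed-world valuation": 60_000,           # $/QALY, as cited above
    "hypothetical developing-world program": 100,  # $/QALY, placeholder
}

BUDGET = 1_000_000  # hypothetical donation in dollars
for name, cost_per_qaly in interventions.items():
    print(f"{name}: {BUDGET / cost_per_qaly:,.0f} QALYs per ${BUDGET:,}")
# ~17 QALYs vs. 10,000 QALYs for the same money: a roughly 600-fold gap,
# i.e., several orders of magnitude.
```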
It's somewhat puzzling why economists, who are trained to think in terms of cost-effectiveness, aren't interested in speculative high-risk high-expected-return topics like insect welfare. I guess the reason is just that they have a different objective function.
It's interesting that a lot of utilitarian decision-making principles are widely taught in the business world, in the context of strategic management and corporate finance -- only there, the decision rule says "Maximize expected net present value" rather than, say, "Maximize expected reduction in suffering." (Businesses are also sometimes risk-averse, while altruists should try not to be.)
You are a former atheist who converted to Christianity on account of Pascal's Wager. This is rare, to say the least. Can you describe how it happened?
I like Eliezer Yudkowsky's description of utilitarianism: "Shut up and multiply." It's very much a bullet-biting mindset where, if someone can show that action A clearly has higher expected value than action B, you're obligated to go with action A. As a result of this, my beliefs have been (and remain) highly volatile, subject to potentially dramatic changes as I learn new information or hear new arguments.
Ever since I was about six years old, I had been a materialist atheist, and I regarded religion in the same way as, say, ghosts and UFOs. But as I thought about the situation from a shut-up-and-multiply standpoint, I decided that a simple analysis in terms of factual probability was inappropriate, because of the magnitude of the payoffs involved. In simple terms, any nonzero probability of Christianity (or, more precisely, differential in probability between Christianity and hypotheses according to which I'll be punished eternally iff I convert to Christianity) amounts to infinite expected suffering, which ought to outweigh any finite costs. (In practice, this is impossible on account of akrasia. Christians would just call that the problem of human fallenness, I suppose.)
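Rendered as a toy decision problem, the argument looks like this. This is our formalization of the reasoning just stated, not Dawrst's own code, and it deliberately ignores the many-gods complications discussed later; the $1000 "cost of practice" is an arbitrary placeholder.

```python
# Minimal sketch of the wager's expected-disutility comparison.
# Any nonzero probability times an infinite penalty swamps finite costs.
INF = float("inf")

def expected_disutility(convert: bool, p_hell_if_not: float,
                        finite_cost_of_practice: float) -> float:
    if convert:
        return finite_cost_of_practice  # worldly costs of religious practice
    return p_hell_if_not * INF          # infinite for any p > 0

for p in (0.1, 1e-6, 1e-30):
    pay = expected_disutility(True, p, finite_cost_of_practice=1000.0)
    refuse = expected_disutility(False, p, finite_cost_of_practice=1000.0)
    print(f"p = {p:g}: convert -> {pay}, don't -> {refuse}")
```

However small p is made, the "don't convert" branch remains infinite, which is exactly why the shrinking probability never rescues the refusal.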
What I described, of course, is just Pascal's wager, though Pascal himself didn't mention hell but merely losing out on heaven. It's phrased in fancy decision-theoretic language, but for me, it amounts to nothing more than an intellectualized fear of hell: the intuition that, as one site notes, "Eternity is a very long time to be wrong," and I'm not willing to risk it.
In any event, after about a year of playing around with the wager and trying to come up with rationalizations for why it didn't hold, I bit the bullet and decided to try becoming a Christian. The reasons for Christianity instead of, say, Islam were a combination of factual probability, non-factual Pascalian considerations, and convenience. This decision isn't necessarily set in stone: Given persuasive evidence in favor of another religion (including atheism), I could change my mind. But after almost two years of research into various other possibilities, this hasn't happened.
It seems safe to assume that most philosophical utilitarians are nonbelievers, but you have argued that the demands of infinite decision theory unite utilitarianism with theistically predicated moral reasoning, particularly as expressed in Christian apologetics. How so?
I should clarify that while Pascal's wager is a very natural idea within a utilitarian mindset, the argument itself is egoist, appealing to my own self-interested desire to reduce my chance of hell. (It's theoretically possible that utilitarianism could recommend defying God's will in order to prevent large numbers of other people from going to hell, for instance.)
But it is true that utilitarianism shares some commonalities with Christianity (and religion generally) to the extent that both recognize that an individual's worldly happiness is trivial in comparison to the vast expected value of devoting effort to other things. (Think of the common charge that utilitarianism is too demanding.)
As I mentioned before, in practice, neither utilitarians nor Christians can actually live up to this standard, but they can do better than not trying at all.
In reading your work, it is easy to come away with the sense that your commitment to Christianity derives almost exclusively from intellectual or logical considerations. Your reasoning is informed by surreal numbers and probabilistic inductions of potentially Vast or infinite consequence. Is there a spiritual dimension to your faith? Or are you simply cornered by a rational gambit?
Jerram Barrs made this comment: "I often have people say to me that they are not a religious person, as if they belong to some kind of different category. My response is always to say, 'In the sense that you mean, I am not religious either. I am interested in what is true.'"
I assign a lower subjective probability to Christianity than Barrs does, but I agree with the sentiment: If I were convinced that Christianity had probability zero, or was far less probable than another religion threatening eternal punishment, I would have no interest in it. To quote Paul, "if Christ has not been raised, our preaching is useless and so is your faith" (1 Corinthians 15:14).
Still, it's true that Christianity has spiritual aspects, and I do try to live up to them. It's also emotionally helpful that I can sympathize with, say, Jesus's concern for the poor and outcast, though even there, Jesus made some statements that are tough, such as Mark 14:3-6 -- the other people present said exactly what I would have felt in that situation!
When I mentioned to my wife that I was corresponding with someone who had disavowed atheism because of Pascal's Wager, her response was to say, "Well, I'm glad it finally convinced someone." And it's true: the Wager is the sort of thing people may encounter in Philosophy 101, and never think of again. Since you find the argument compelling, do you have a theory as to why it isn't more persuasive?
Well, one reason is that it's frankly hard to live up to (I'm certainly not able to do so completely myself). Religion is difficult: Even people who assign 100% probability to the existence of their religion's hell sometimes can't avoid doing things that they think God considers sinful, and it's even easier to make up excuses when you're not certain if the religion is true. I do wonder how many more people would agree with Pascal's wager if it just required, say, pushing a button.
In addition, many people are not aiming to minimize expected suffering. Some value other things, such as personal honesty, more highly than avoiding eternal torment. Still others are genuinely convinced that there's more likely a god who would be angry when people try to appease him for selfish reasons than when they ignore him altogether. This is possible, but I at least don't think it describes the portraits of God in many religions, where my impression is that fear of punishment is a perfectly valid reason for obeying God's commands, even if it may not be as desirable as willing discipleship.
There are a number of other reasons given. I think some do have merit, but they often have the feel of arguments mined in order to reach a desired answer, rather than ideas one would naturally come up with in thinking about the situation. Perhaps the best argument is that there are simply better potential sources of massive suffering to be worried about, but I don't understand why one can't take Pascal's wager while continuing to explore those other possibilities. (Admittedly, though, practicing religion does take some time.)
There is a vast literature on the Wager, and most of it is highly critical. Common objections center on the problem of choosing the correct god, or on the alleged conceit that God should reward faith that is determined by fearful devotion rather than goodness. I realize this is an immense subject, but can you provide a sense of where and why you believe the usual criticisms fail?
I don't really see what the "many gods" objection is getting at, other than to point out that religion is a risky business because you could very easily be wrong. Yes, that's an extremely unpleasant state of affairs, but who says that can't be the way things are? The suffering of billions of animals in the wild is awful, but that doesn't make it untrue. We just have to accept it and do the best we can.
Some people talk about the thousands of possible religions as though they were equally good candidates, but even apart from factual-truth considerations, I don't think that's the case. Many religions teach only of a general underworld for the dead, like the early Jewish Sheol or the Greek Hades, where all people end up. A number of others (certain branches of Buddhism, Hinduism, Jainism, Sikhism, Taoism, and Zoroastrianism) do have punitive hells, but they're usually finite and for purgative purposes only. Many of these punishments, moreover, are for general wicked deeds, rather than refusal to follow a particular god. In fact, the only two religions of which I'm aware that really fit into the standard "believe or face eternal punishment" model are Christianity and Islam. (If readers are aware of others, I'd love to hear about them!)
In your view, people (like me) who assign zero or negligible probability to the possibility of there being a creator-god are biased by overconfidence. Can you explain what this means? And also, do you allow that a strong atheist argument could overcome the bias? Or does epistemological uncertainty imply that overconfidence is insoluble?
The overconfidence effect is the observation that people feel more certain about their beliefs than they should be. For instance, the Wikipedia article gives the example of answers that people rate as "99% certain" turning out to be wrong 40% of the time.
In my piece on Pascal's wager, I added this qualification:
it's not always the case that we should account for overconfidence by increasing the probability we assign to an outcome we perceive as unlikely. As Eliezer Yudkowsky notes, there's a need for our probabilities to sum to 1 -- there are simply too many possible hypotheses out there for us to increase the probability we assign to each of them for fear of overconfidence; instead we need to take into account both human overconfidence and the desire-to-dismiss, and also the temptation for humans to make up silly things with huge consequences and claim "but you can't know I'm wrong."
I think Christianity (and other existing religions) deserves higher probability than some random member H of the space of possible hypotheses, if only because the fact that Christianity has even been thought of is nontrivial evidence. Indeed, I'd say that any idea that has ever been imagined, at least in our region of the universe, deserves higher probability, ceteris paribus, than H for the same reason. (I say "at least in our region of the universe" because it's possible that every hypothesis that can be imagined in finite time has been or will be imagined in some universe.)
In view of the last point, I don't think any argument could reduce the probability of Christianity (or any other religion) to zero -- epistemological uncertainty reigns. Of course, there's a point at which even nonzero probabilities have to be ignored due to finite computational resources, but in light of the immense potential consequences, religion for me doesn't fall below that level.
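The sum-to-1 constraint invoked above can be stated compactly. The notation is ours, not the interview's.

```latex
% Over mutually exclusive, exhaustive hypotheses H_1, \ldots, H_N:
\[
  \sum_{i=1}^{N} P(H_i) = 1
  \quad\Longrightarrow\quad
  \min_i P(H_i) \le \frac{1}{N},
\]
% so one cannot inflate every hypothesis "to guard against overconfidence";
% probability mass granted to one hypothesis must be taken from the others.
```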
You mention Eliezer Yudkowsky. It seems that he may have had you in mind when he inconclusively sought to outline a strategy by which an Occam-abiding mind could avoid being dominated by tiny probabilities of vast utilities -- a problem he describes as "Pascal's Mugging." Yudkowsky focuses on dangers that might arise if a Friendly AI, lacking an intuitive understanding of the dominant mainline probabilities that we assign through reason, were to be mathematically pranked into destroying the world (or something) in a scenario where the probabilistic stakes were presented in sufficiently grave and vast computational terms.
Do you have any thoughts on the potential problem that Yudkowsky identifies? And do you think it is relevant to your current reasoning regarding the Wager? When one's epistemic framework allows that supernatural forces may inflict vast or infinite degrees of suffering, how does one avoid being "mugged"?
I don't know if one should avoid being mugged. While it's not obvious to me that Solomonoff induction per se is the ideal way to arrive at one's prior probability distribution -- for example, why should there be a fundamental link between universal Turing machines and epistemology? -- I think something like it is the right way to go. (Unfortunately, Solomonoff induction itself is uncomputable, but we can try to approximate it.) If we had an accurate assessment of the probability distribution over possible outcomes by a Solomonoff-style AI, I think we should just shut up, multiply, and do what the calculation says. If that means handing over $5 to the mugger, so be it.
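A toy rendering of why a complexity penalty alone may not defuse the mugging: weight hypotheses by 2^(-description length), as a crude stand-in for the Solomonoff-style prior mentioned above. The bit counts and the 10^400 figure are invented for illustration; a real Solomonoff prior is uncomputable, as Dawrst notes.

```python
import math

# Toy "Solomonoff-style" prior: penalize a hypothesis by 2^(-description bits).
def log10_prior(description_bits: int) -> float:
    return -description_bits * math.log10(2)

CLAIM_BITS = 1_000            # hypothetical complexity of the mugger's story
LOG10_BEINGS_AT_STAKE = 400   # mugger threatens 10^400 sufferers -- cheap to
                              # write down, astronomically large in value

# Work in log10 space to avoid float overflow on numbers like 10^400.
log10_expected_harm = log10_prior(CLAIM_BITS) + LOG10_BEINGS_AT_STAKE
print(f"log10(expected harm of refusing) ~ {log10_expected_harm:.0f}")  # ~ 99
print("pay the $5?", log10_expected_harm > math.log10(5))               # True
```

The crux is that the threatened quantity can grow far faster than the complexity penalty shrinks, which is why the calculation can still say to hand over the $5.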
As for the mugging, Nick Tarleton suggested a similar scenario in which an AI might devote vast resources to searching for a magic button to generate vast amounts of happiness (or, of more visceral interest for me, prevent vast amounts of suffering). Carl Shulman agreed, noting that there may be ways to appeal to "Dark Lords of the Matrix" who, say, live in a universe with infinite computational power or different laws of physics. I would side with the AI here: Sure, these investigations will almost certainly prove fruitless and wasteful, but the potential consequences are too big to get wrong.
You have devoted considerable thought to the prospect of creating "laboratory universes," which may or may not be possible. Assuming the idea of making a universe -- or an infinity of universes -- from scratch cannot be dismissed as crazy sci-fi speculation, why should this possibility be a cause for worry among those concerned with the problem of suffering?
If the probability of sentient life emerging in a random universe is greater than zero (and if other technical conditions are met), then creating infinitely many universes would almost surely create an infinite amount of suffering. Of course, the same could be said of happiness, but I think there are reasons to suggest that suffering would dominate. Briefly, the probability of intelligent sentient life emerging on a given planet seems considerably smaller than the probability of merely sentient life (at, say, the level of invertebrates or fish). Since the smallest, most numerous animals tend to breed far more offspring than can survive, most of these animals won't even live to maturity. And those that do will often have short lifespans: For instance, I would rather not exist than be born as a fly, live for a few weeks, get caught in a spider's web, and die by venom poisoning. (And even being the spider could be unpleasant at times.)
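The "almost surely" step in that answer can be made precise. The notation is ours, and it assumes the universes arise independently.

```latex
% If each of n independently created universes contains sentient life with
% probability p > 0, then
\[
  P(\text{no universe contains sentient life}) = (1 - p)^{n}
  \xrightarrow{\; n \to \infty \;} 0,
\]
% and by the second Borel--Cantelli lemma (since \sum_n p diverges),
% infinitely many universes contain sentient life almost surely.
```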
You write that suffering far outweighs happiness right now on earth, and you seem confident that a disproportionate burden of this suffering is experienced by animals. This perspective has impelled you to consider the plight of animals not merely in factory farms and vivisection labs -- which is par for the course among prominent utilitarian ethicists -- but in nature, where an average animal's life may be overwhelmingly defined by fear, starvation, and pain. If wild animal suffering is, as you suggest, an "open problem," why do you think it is not taken more seriously among philosophers and animal welfare activists? And if natural animal suffering is a legitimate area of moral concern, what is to be done?
That's a good question -- I'd like to know the answer! The issue probably doesn't cross people's minds very much, and even when it does, they may not register it as a problem. I remember as a kid loving to watch videos of predacious dinosaurs chasing, catching, and biting the flesh of their prey; the videos were just exciting, and I didn't give a second thought to how agonizing the Stegosaurus must have found it.
Some people who do recognize wild-animal suffering argue that cruelty is part of nature, so that's just the way things are, and we should get used to it. (Of course, cancer is entirely natural as well, and most people don't seem quite so willing just to get used to it.)
The absence of interest on the part of animal welfarists is more surprising. Part of it may result from a desire to avoid sounding too extreme -- meat-eaters sometimes use concern for wild animals as a reductio against concern for animals in general -- but fear of sounding extreme doesn't describe a significant fraction of the animal-welfare movement (unfortunately). There's also the mentality that "ethics begins at home": that humans need to get their own moral house in order before worrying about suffering for which they aren't responsible. In some cases, this may stem from a view of morality as being about keeping your own hands clean from blame, rather than, as I see it, trying to fix problems (focusing on the biggest problems first).
But why the silence on the part of utilitarians? There are some concerned with the problem, though I haven't heard much discussion in mainstream philosophical circles, or among welfare economists. Peter Singer argued:
As for wild animals, for practical purposes I am fairly sure, judging from man's past record of attempts to mold nature to his own aims, that we would be more likely to increase the net amount of animal suffering if we interfered with wildlife, than to decrease it. Lions play a role in the ecology of their habitat, and we cannot be sure what the long-term consequences would be if we were to prevent them from killing gazelles. [...]
[However, i]f, in some way, we could be reasonably certain that interfering with wildlife in a particular way would, in the long run, greatly reduce the amount of killing and suffering in the animal world, it would, I think, be right to interfere.
I agree that intervention in nature, at least given our current limited knowledge of ecology, could easily result in massive failures. Still, there seem to be cases in which this argument clearly does not hold, as Tyler Cowen points out in section III.1 of his paper on the subject. And even if we currently lack the ability to make a significant impact, the potential returns from at least researching the topic in preparation for action at some point in the future seem potentially high.
Another subject that you stand virtually alone in taking seriously is the possibility of insect sentience. Among animal behaviorists and entomologists, there seems to be a general consensus that this is impossible, or at least profoundly improbable. Why aren't you convinced? And again, if it should turn out that bees and fruit flies have the capacity to suffer, what is to be done?
There are several entomologists who take insect pain seriously. In her excellent review of the subject, Jane A. Smith explained:
The well-being of invertebrates used for research is being taken increasingly seriously. Wigglesworth (1980), for example, has suggested that for practical purposes it should be assumed that insects feel pain and that they should, therefore, be narcotized in procedures that have the potential to cause pain. Cooper (1990) has identified several practical ways in which the well-being of invertebrates might be promoted. These include:
- providing husbandry conditions that match, as closely as possible, those preferred by the species in the wild;
- assuring high standards of care, provided by staff with an interest in invertebrates;
- avoiding unnecessary or insensitive handling or restraint;
- narcotizing the animals for any invasive or disruptive procedures and during prolonged restraint (some methods of anesthesia are described by Cooper, 1990); and
- where possible, avoiding the use of the more "complex" species.
To this list might be added:
- attempting to kill invertebrates by the most humane methods possible; and
- providing suitable guidance and training for all involved in the care and use of these animals.
I don't deny that there is also a significant body of literature against insect pain, but the question seems to me still unresolved. Besides, there's a big difference between being factually persuaded by a proposition (to the point where one would be willing to say, "I believe that insects do not feel pain") and making the proper decision-theoretic choice. On factual grounds, I could say with 99.98% certainty that I won't get into a car accident the next time I go somewhere. There's far more evidence that I won't have an accident than there is for most things people believe. But do I still wear a seatbelt? Of course!
What is to be done? Well, on a very small scale, people can make sure that when they kill insects they do so as quickly as possible. Injured insects (say, on the sidewalk) can be squashed to be put out of their misery. But while compassionate, these actions address a minute sliver of the expected insect suffering that exists. I remain open to suggestions for larger-scale action. In addition to humane insecticides, some have suggested ecosystem redesign, though given the immense complexity and fragility of biological systems (as noted before), I'm not convinced of its plausibility.
Your work is broadly informed by transhumanism. But you argue that fans of the singularity are biased by undue optimism. Why?
To a large extent, my skepticism about the Singularity is no different than skepticism about any other hypothesis on which there's widespread disagreement: Namely, that lots of intelligent people are doubtful of the feasibility or broad social acceptance of many of the technologies that transhumanists discuss.
In addition, some transhumanists approach their philosophy in the same way that many approach religion: As a source of optimism for the future (including, perhaps, immortality). There's a natural tendency to want to think that these desirable outcomes will happen. However, I don't want to paint with too broad a brush, because I know many hard-nosed transhumanists who are very deliberate in trying to avoid this. Some even agree that Singularity scenarios aren't terribly likely -- it's just that, if they did happen, the consequences would be so enormous that the expected value of considering them is still huge.
While I can't bring myself to assign any probability to the reality of a Christian hell, my brush with your -- and Yudkowsky's -- writings leaves me wondering if a secular version might be a live possibility. I'm imagining a kind of sentient AI program that would create -- or simulate, or embody -- real and vast suffering. It just seems that, if you take a physicalist view of nature -- and of the scientific enterprise -- simulated consciousness has the force of inevitability. And unless human nature changes radically, it seems likely that from the safety of their cybernetic Skinner boxes, at least a few sadistic hobbyists won't be able to resist the temptation to create worlds of interminable suffering. What is your thinking? Is the problem of simulated suffering something that merits exploration?
It's not at all obvious to me that computer-simulated consciousness is possible, but I maintain some probability that it is, and in that case, the possibility of torturing simulations or creating simulated hells could be very real. There's also a possibility of extraterrestrial civilizations that explicitly seek to cause pain to other organisms or, more likely (and less anthropomorphically), that are completely indifferent to whatever qualia they happen to create.
As Nick Bostrom has pointed out, if simulation is possible, we ourselves may very well be in one. In view of the vast amounts of suffering in the world, our simulators could very well be of the indifferent type. Perhaps they were trying to model the evolution of the universe, and the events on earth simply happened to be part of that (though unless one subscribes to something like functionalism, it's not obvious that mathematical simulation of the atoms composing a human brain would actually give rise to consciousness). Or perhaps our simulators were aiming to explore different types of minds to see how they react in various environments.
If we are simulations, it would be interesting to ask why the concept of hell is so widespread among a variety of cultures. Is it possible that our simulator introduced the idea to give people a foretaste of what s/he had in mind following some of their deaths? Even for those who can't bring themselves to assign nonzero probability to conventional religions, this possibility alone may introduce a non-negligible probability of eternal torment. The existence of notions about hell is a fact about the world that needs explaining, and this is one hypothesis that does so.
Whether people buy Christianity or not, I do think it's important to give serious thought to the possibility of hell in general. I like this quote from Pascal's Pensees (199-206):
Let us imagine a number of men in chains and all condemned to death, where some are killed each day in the sight of the others, and those who remain see their own fate in that of their fellows and wait their turn, looking at each other sorrowfully and without hope. It is an image of the condition of men.
A man in a dungeon, ignorant whether his sentence be pronounced and having only one hour to learn it, but this hour enough, if he knew that it is pronounced, to obtain its repeal, would act unnaturally in spending that hour, not in ascertaining his sentence, but in playing piquet. [...]
When I consider the short duration of my life, swallowed up in the eternity before and after, the little space which I fill and even can see, engulfed in the infinite immensity of spaces of which I am ignorant and which know me not, I am frightened and am astonished at being here rather than there; for there is no reason why here rather than there, why now rather than then. [...]
The eternal silence of these infinite spaces frightens me.
Good interview, as always. I like how you ask challenging, probing questions without ever betraying condescension or hostility (unlike the more smug and frankly insufferable atheists out there -- Bill Maher certainly comes to mind) when the subject's worldview differs from your own.
I do in fact find Pascal's wager to be compelling, though more from a moral point of view than a spiritual one, which is probably not how it was intended. If there's a God, then one suspects he wants you to live a good life, since God (if he IS at all), must be good Himself, by definition. I don't see how the wager takes you easily into any form of theology (Christian over Muslim say, or even Catholic over Protestant).
Posted by: Andy Nowicki | October 30, 2008 at 04:00 PM
God is by definition good? I find Alan Dawrst's perspective on God worth listening to precisely because he bites the bullet on the Problem of Evil - i.e., given the suffering in our world, it's clear that if a creator God exists in any form, he is, in HUMAN terms, evil. I take that to be what he means when he says "In view of the vast amounts of suffering in the world, our simulators could very well be of the indifferent type."
Singer I take to be arguing for the point that, whatever ethics we do, it is important that it be understandable for ordinary people. That doesn't mean we just accept all our intuitions and follow our empathy without doing ANY ethics. There are a few situations - ponds of drowning children - where doing long, complicated ethics is clearly not called for, but it's optimistic to suppose that this describes anything but a tiny minority of the moral decisions we are faced with.
Utilitarian calculations, too, must have an end. Even if we alight on a cheap sort of pleasure/pain oriented utilitarianism, with no thought for justice or rights, we must decide the amount of resources to spend calculating the utilities. And in order to do that, we must decide how much time to spend calculating THAT amount. And so on. It must have an end. But just because it must have an end -- we must rely on estimates and intuitions outside of pure utilitarianism to even DO utilitarianism -- doesn't mean that we shouldn't do it at all. Same with all ethics.
Posted by: Sister Y | October 31, 2008 at 12:17 PM
Sister, the "in HUMAN terms" is the rub. I have no proclivity or inclination to defend God; I'm just saying that if God is, then by definition he is the origin and the essence of what is good. From a human standpoint it's well nigh impossible to square the goodness of God with much of what He either allows or causes to happen. But I do think that, hair-splitting and sophistry aside, most of us have a pretty good idea of what it means to make morally and ethically sound choices in life, regardless of our theological orientation, or lack thereof. And that's where I actually find Pascal's wager to be persuasive.
Posted by: Andy Nowicki | October 31, 2008 at 06:52 PM
I'm not sure what the definition of 'good' is, if it's not the one that proceeds from human intuition. To say that God is good, but then to follow that with the idea that His 'good' might seem perfectly evil to us, makes absolutely no sense to me. I'm also having a hard time getting my head around this statement:
",,,since God (if he IS at all), must be good Himself, by definition."
Either you're saying that it's simply impossible for a creator-being to be evil, to which I ask "how do you know, and what standard are you measuring Him against", or you're simply saying 'good equals god' in a purely tautological sense, which I'm not even sure IS a statement, since it contains no information i.e. it's a definition pointing nowhere. It's sort of like telling somebody who's never seen a rock, "You'll know it by it's rock-like features", then when asked to describe 'rock-like features', you answer back with, "You know, like what a rock has'.
Interestingly (for me, at least) I find myself lumping ethics and Pascal's wager together. In my eyes, both concepts are (or at least can be) misrepresentational. When pressed, 'the wager' always deconstructs down to arguments about evidentiary details, which sort of makes the whole thing moot in my eyes. And ethical principles proceed from the personal, emotional milieu of the aggregate, are eventually defined and codified, then presented back to the culture in a faux-objective form. Nothing wrong with that, though I prefer the more straightforward term -- namely... Law.
Just my two cents.
Posted by: jim | November 02, 2008 at 03:26 PM
Sister, you may want to take a look at Toby Ord's BPhil thesis on consequentialism and decision procedures.
http://www.amirrorclear.net/academic/research-topics/ethics/consequentialism.html
Posted by: Pablo Stafforini | November 08, 2008 at 11:58 PM