

Daniel Gilbert, Professor of Psychology, Harvard University

In the not too distant future, we will be able to construct artificial systems that give every appearance of consciousness—systems that act like us in every way. These systems will talk, walk, wink, lie, and appear distressed by close elections. They will swear up and down that they are conscious and they will demand their civil rights. But we will have no way
to know whether their behavior is more than a clever trick—more than the pecking of a pigeon that has been trained to type "I am, I am!"

We take each other's consciousness on faith because we must, but after two thousand years of worrying about this issue, no one has ever devised a definitive test of its existence. Most cognitive scientists believe that consciousness is a phenomenon that emerges from the complex interaction of decidedly nonconscious parts (neurons), but even when we finally understand the nature of that complex interaction, we still won't be able to prove that it produces the phenomenon in question. And yet, I haven't the slightest doubt that everyone I know has an inner life, a subjective experience, a sense of self, that is very much like mine.

What do I believe is true but cannot prove? The answer is: You!

Marc D. Hauser, Psychologist and Biologist, Harvard University; Author, Moral Minds

What makes humans uniquely smart?

Here's my best guess: we alone evolved a simple computational trick with far reaching implications for every aspect of our life, from language and mathematics to art, music and morality. The trick: the capacity to take as input any set of discrete entities and recombine them into an infinite variety of meaningful expressions.

Thus, we take meaningless phonemes and combine them into words, words into phrases, and phrases into Shakespeare. We take meaningless strokes of paint and combine them into shapes, shapes into flowers, and flowers into Matisse's water lilies. And we take meaningless actions and combine them into action sequences, sequences into events, and events into homicide and heroic rescues.
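Hauser's trick lends itself to a toy demonstration: a handful of meaningless units, combined by rule, yields a multiplicatively larger set of expressions, and letting phrases nest inside phrases makes the set unbounded. A minimal sketch (the word lists and the one-rule "grammar" here are invented purely for illustration):

```python
import itertools

# Six units, meaningless on their own...
nouns = ["flowers", "events", "words"]
verbs = ["combine", "become", "express"]

# ...recombined by a single noun-verb-noun rule into distinct expressions.
# The output grows multiplicatively with the inventory; allowing recursion
# (a whole phrase slotting into a noun position) would make it infinite.
expressions = [f"{n1} {v} {n2}"
               for n1, v, n2 in itertools.product(nouns, verbs, nouns)]

print(len(expressions))  # 3 * 3 * 3 = 27
```

The point is not the particular rule but the ratio: six units and one rule already give 27 expressions, and each added unit or rule multiplies rather than adds.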

I'll go one step further: I bet that when we discover life on other planets, although the materials for running the computation may be different, those life forms will create open-ended systems of expression by means of the same trick, thereby giving birth to the process of universal computation.

Nicholas Humphrey, Emeritus Professor of Psychology, London School of Economics; Visiting Professor of Philosophy, New College of the Humanities; Senior Member, Darwin College, Cambridge; Author, Soul Dust

I believe that human consciousness is a conjuring trick, designed to fool us into thinking we are in the presence of an inexplicable mystery. Who is the conjuror and why is s/he doing it? The conjuror is natural selection, and the purpose has been to bolster human self-confidence and self-importance—so as to increase the value we each place on our own and others' lives.

If this is right, it provides a simple explanation for why we, as scientists or laymen, find the "hard problem" of consciousness just so hard. Nature has meant it to be hard. Indeed "mysterian" philosophers—from Colin McGinn to the Pope—who bow down before the apparent miracle and declare that it's impossible in principle to understand how consciousness could arise in a material brain, are responding exactly as Nature hoped they would, with shock and awe.

Can I prove it? It's difficult to prove any adaptationist account of why humans experience things the way they do. But here there is an added catch. The Catch-22 is that, just to the extent that Nature has succeeded in putting consciousness beyond the reach of rational explanation, she must have undermined the very possibility of showing that this is what she's done.

But nothing's perfect. There may be a loophole. While it may seem—and even be—impossible for us to explain how a brain process could have the quality of consciousness, it may not be at all impossible to explain how a brain process could (be designed to) give rise to the impression of having this quality. (Consider: we could never explain why 2 + 2 = 5, but we might relatively easily be able to explain why someone should be under the illusion that 2 + 2 = 5).

Do I want to prove it? That's a difficult one. If the belief that consciousness is a mystery is a source of human hope, there may be a real danger that exposing the trick could send us all to hell.

Howard Gardner, Hobbs Professor of Cognition and Education, Harvard Graduate School of Education; Author, Truth, Beauty, and Goodness Reframed

The Brain Basis of Talent

I believe that human talents are based on distinct patterns of brain connectivity. These patterns can be observed as the individual encounters and ultimately masters an organized activity or domain in his/her culture.

Consider three competing accounts:

#1 Talent is a question of practice. We could all become Mozarts or Einsteins if we persevered.

#2 Talents are fungible. A person who is good in one thing could be good in everything. 

#3 The basis of talents is genetic. While true, this account misleadingly implies that a person with a "musical gene" will necessarily evince her musicianship, just as she evinces her eye color or, less happily, Huntington's disease.

My Account: The most apt analogy is language learning. Nearly all of us can easily master natural languages in the first years of life. We might say that nearly all of us are talented speakers. An analogous process occurs with respect to various talents, with two differences:

1. There is greater genetic variance in the potential to evince talent in areas like music, chess, golf, mathematics, leadership, written (as opposed to oral) language, etc. 

2. Compared to language, the set of relevant activities is more variable within and across cultures. Consider the set of games. A person who masters chess easily in culture 1 would not necessarily master poker or 'go' in culture 2.

As we attempt to master an activity, neural connections of varying degrees of utility or disutility form. Certain of us have nervous systems that are predisposed to develop quickly along the lines needed to master specific activities (chess) or classes of activities (mathematics) that happen to be available in one or more cultures. Accordingly, assuming such exposure, we will appear talented and become experts quickly. The rest of us can still achieve some expertise, but it will take longer, require more effective teaching, and draw on intellectual faculties and brain networks that the talented person does not have to use.

This hypothesis is currently being tested by Ellen Winner and Gottfried Schlaug. These investigators are imaging the brains of young students before they begin music lessons and for several years thereafter. They also are imaging control groups and administering control (non-music) tasks. After several years of music lessons, judges will determine which students have musical "talent." The researchers will document the brains of musically talented children before training, and how these brains develop.

If Account #1 is true, hours of practice will explain all. If #2 is true, those best at music should excel at all activities. If #3 is true, individual brain differences should be observable from the start. If my account is true, the most talented students will be distinguished not by differences observable prior to training but rather by the ways in which their neural connections alter during the first years of training.

George Dyson, Science Historian; Author, Turing’s Cathedral and Darwin Among the Machines

Interspecies coevolution of languages on the Northwest Coast.

During the years I spent kayaking along the coast of British Columbia and Southeast Alaska, I observed that the local raven populations spoke in distinct dialects, corresponding surprisingly closely to the geographic divisions between the indigenous human language groups. Ravens from Kwakiutl, Tsimshian, Haida, or Tlingit territory sounded different, especially in their characteristic "tok" and "tlik."

I believe this correspondence between human language and raven language is more than coincidence, though this would be difficult to prove.

John McWhorter, Professor of Linguistics and Western Civilization, Columbia University; Author, Words on the Move

This year, researching the languages of Indonesia for an upcoming book, I happened to find out about a few very obscure languages spoken on one island that are much simpler than one would expect.

Most languages are much, much more complicated than they need to be. They take on needless baggage over the millennia simply because they can. So, for instance, most languages of Indonesia have a good number of prefixes and/or suffixes. Their grammars often force the speaker to attend to nuances of difference between active and passive much more than European languages do, and so on.

But here were a few languages that had no prefixes or suffixes at all. Nor did they have any tones, of the kind found in many of the world's languages. For one thing, long-established languages with no prefixes, suffixes, or tones are very rare worldwide. But then, where we do find them, they form whole little subfamilies, related variations on one another. Here, though, is a handful of small languages that contrast bizarrely with hundreds of surrounding relatives.

One school of thought on how languages change says that this kind of thing just happens by chance. But my work has been showing me that contrasts like this are due to sociohistory. Saying that naked languages like these are spoken alongside ones as bedecked as Italian is rather like saying that kiwis are flightless just "because," rather than because their environment divested them of the need to fly.

But for months I scratched my head over these languages. Why just them? Why there?

So isn't it interesting that the island these languages are spoken on is none other than Flores, which has had its fifteen minutes of fame this year as the site where skeletons of the "little people" were found. Anthropologists have hypothesized that this was a different species of Homo. While the skeletons date back 13,000 years or more, local legend recalls "little people" living alongside modern humans, ones who had some kind of language of their own and could "repeat back" in modern humans' language.

The legends suggest that the little people only had primitive language abilities, but we can't be sure here: to the untutored layman who hasn't taken any twentieth-century anthropology or linguistics classes, it is easy to suppose that an incomprehensible language is merely babbling.

Now, I can only venture this highly tentatively. But what I "know" but cannot prove this year is this: the reason languages like Keo and Ngada are so strangely streamlined on Flores is that an earlier ancestor of these languages, just as complex as its family members tend to be, was used as a second language by these other people and simplified. Just as our classroom French and Spanish avoids or streamlines a lot of the "hard stuff," people who learn a language as adults usually do not master it entirely.

Specifically, I would hypothesize that the little people were gradually incorporated into modern human society over time—perhaps subordinated in some way—such that modern human children were hearing the little people's rendition of the language as much as a native one.

This kind of process is why, for example, Afrikaans is a slightly simplified version of Dutch. Dutch colonists took on Bushmen as herders and nurses, and their children often heard second-language Dutch as much as their parents. Pretty soon, this new kind of Dutch was everyone's everyday language, and Afrikaans was born.

Much has been made over the parallels between the evolution of languages and the evolution of animals and plants. However, I believe that one important difference is that while animals and plants can evolve towards simplicity as well as complexity depending on conditions, languages do not evolve towards simplicity in any significant, overall sense—unless there is some sociohistorical factor that puts a spoke in the wheel.

So normally, languages are always drifting into being like Russian or Chinese or Navajo. They only become like Keo and Ngada—or Afrikaans, or creole languages like Papiamentu and Haitian, or even, I believe, English—because of the intervention of factors like forced labor and population relocation. Just maybe, we can now add interspecies contact to the list!

Philip W. Anderson, Nobel Laureate; Physicist, Princeton University

Is string theory a futile exercise as physics, as I believe it to be? It is an interesting mathematical specialty and has produced and will produce mathematics useful in other contexts, but it seems no more vital as mathematics than other areas of very abstract or specialized math, and doesn't on that basis justify the incredible amount of effort expended on it.

My belief is based on the fact that string theory is the first science in hundreds of years to be pursued in pre-Baconian fashion, without any adequate experimental guidance. It proposes that Nature is the way we would like it to be rather than the way we see it to be; and it is improbable that Nature thinks the same way we do.

The sad thing is that, as several young would-be theorists have explained to me, it is so highly developed that it is a full-time job just to keep up with it. That means that other avenues are not being explored by the bright, imaginative young people, and that alternative career paths are blocked.

Gino Segre, Professor of Physics & Astronomy, University of Pennsylvania; Author, The Pope of Physics: Enrico Fermi and the Birth of the Atomic Age

The Big Bang, that giant explosion of more than 13 billion years ago, provides the accepted description of our Universe's beginning. We can trace with exquisite precision what happened during the expansion and cooling that followed that cataclysm, but the presence of neutrinos in that earliest phase continues to elude direct experimental confirmation.

Neutrinos, once in thermal equilibrium, were supposedly freed from their bonds to other particles about two seconds after the Big Bang. Since then they should have been roaming undisturbed through intergalactic space, some 200 of them in every cubic centimeter of our Universe, altogether a billion of them for every single atom. Their presence is noted indirectly in the Universe's expansion. However, though they are presumably by far the most numerous type of material particle in existence, not a single one of those primordial neutrinos has ever been detected. It is not for want of trying, but the necessary experiments are almost unimaginably difficult. And yet those neutrinos must be there. If they are not, our whole picture of the early Universe will have to be totally reconfigured.

Wolfgang Pauli's original 1930 proposal of the neutrino's existence was so daring he didn't publish it. Enrico Fermi's brilliant 1934 theory of how neutrinos are produced in nuclear events was rejected for publication by Nature magazine as being too speculative. In the 1950s neutrinos were detected in nuclear reactors and soon afterwards in particle accelerators. Starting in the 1960s, an experimental tour de force revealed their existence in the solar core. Finally, in 1987 a ten-second burst of neutrinos was observed radiating outward from a supernova collapse that had occurred almost 200,000 years ago. When they reached the Earth and were observed, one prominent physicist quipped that extra-solar neutrino astronomy "had gone in ten seconds from science fiction to science fact". These are some of the milestones of 20th century neutrino physics.

In the 21st century we eagerly await another one, the observation of neutrinos produced in the first seconds after the Big Bang. We have been able to identify them, infer their presence, but will we be able to actually see these minute and elusive particles? They must be everywhere around us, even though we still cannot prove it.

Piet Hut, Professor of Astrophysics, Institute for Advanced Study, Princeton

Science, like most human activities, is based on a belief, namely the assumption that nature is understandable.

If we are faced with a puzzling experimental result, we first try harder to understand it with currently available theory, using more clever ways to apply that theory. If that really doesn't work, we try to improve or perhaps even replace the theory. We never conclude that a not-yet understood result is in principle un-understandable.

While some philosophers might draw a different conclusion—see the contribution by Nicholas Humphrey—as a scientist I strongly believe that Nature is understandable. And such a belief can neither be proved nor disproved.

Note: undoubtedly, the notion of what counts as "understandable" will continue to change. What physicists consider to be understandable now is very different from what had been regarded as such one hundred years ago. For example, quantum mechanics tells us that repeating the same experiment will give different results. The discovery of quantum mechanics led us to relax the rigid requirement of a deterministic objective reality to a statistical agreement with a not fully determinable reality. Although at first sight such a restriction might seem to limit our understanding, we in fact have gained a far deeper understanding of matter through the use of quantum mechanics than we could possibly have obtained using only classical mechanics.

Martin Seligman, Professor and Director, Positive Psychology Center, University of Pennsylvania; Author, Flourish

The "rotten-to-the-core" assumption about human nature espoused so widely in the social sciences and the humanities is wrong. This premise has its origins in the religious dogma of original sin and was dragged into the secular twentieth century by Freud, reinforced by two world wars, the Great Depression, the cold war, and genocides too numerous to list. The premise holds that virtue, nobility, meaning, and positive human motivation generally are reducible to, parasitic upon, and compensations for what is really authentic about human nature: selfishness, greed, indifference, corruption and savagery. The only reason that I am sitting in front of this computer typing away rather than running out to rape and kill is that I am "compensated," zipped up, and successfully defending myself against these fundamental underlying impulses.

In spite of its widespread acceptance in the religious and academic world, there is not a shred of evidence, not an iota of data, which compels us to believe that nobility and virtue are somehow derived from negative motivation. On the contrary, I believe that evolution has favored both positive and negative traits, and many niches have selected for morality, co-operation, altruism, and goodness, just as many have also selected for murder, theft, self-seeking, and terrorism.

More plausible than the rotten-to-the-core theory of human nature is the dual-aspect theory that the strengths and the virtues are just as basic to human nature as the negative traits: negative motivation and emotion have been selected for by zero-sum-game survival struggles, while virtue and positive emotion have been selected for by positive-sum-game sexual selection. These two overarching systems sit side by side in our central nervous system, ready to be activated by privation and thwarting, on the one hand, or by abundance and the prospect of success, on the other.

Stephen M. Kosslyn, Founding Dean, Minerva Schools at the Keck Graduate Institute; Author, Wet Mind

Mental processes: An out-of-body existence?

These days, it seems obvious that the mind arises from the brain (not the heart, liver, or some other organ). In fact, I personally have gone so far as to claim that "the mind is what the brain does." But this notion does not preclude an unconventional idea: Your mind may arise not simply from your own brain, but in part from the brains of other people.

Let me explain. This idea rests on three key observations.

The first is that our brains are limited, and so we use crutches to supplement and extend our abilities. For example, try to multiply 756 by 312 in your head. Difficult, right? You would be happier with a pencil and piece of paper—or, better yet, an electronic calculator. These devices serve as prosthetic systems, making up for cognitive deficiencies (just as a wooden leg would make up for a physical deficiency).
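The offloading Kosslyn describes is, of course, exactly what the calculator does; the "prosthetic" carries out in microseconds what working memory struggles to hold:

```python
# The multiplication Kosslyn asks the reader to attempt mentally,
# delegated to a machine: the prosthetic system at work.
product = 756 * 312
print(product)  # 235872
```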

The second observation is that the major prosthetic system we use is other people. We set up what I call "Social Prosthetic Systems" (SPSs), in which we rely on others to extend our reasoning abilities and to help us regulate and constructively employ our emotions. A good marriage may arise in part because two people can serve as effective SPSs for each other.

The third observation is that a key element of serving as an SPS is learning how best to help someone. Others who function as your SPSs adapt to your particular needs, desires and predilections. And the act of learning changes the brain. By becoming your SPS, a person literally lends you part of his or her brain!

In short, parts of other people's brains come to serve as extensions of your own brain. And if the mind is "what the brain does," then your mind in fact arises from the activity of not only your own brain, but those of your SPSs.

There are many implications of these ideas, ranging from reasons why we behave in certain ways toward others to foundations of ethics and even to religion. In fact, one could even argue that when your body dies, part of your mind may survive. But before getting into such dark and dusty corners, it would be nice to have firm footing—to collect evidence that these speculations are in fact worth taking seriously.

Clifford Pickover, Computer Scientist, IBM's T. J. Watson Research Center; Author, The Math Book, The Physics Book, and The Medical Book trilogy

If we believe that consciousness is the result of patterns of neurons in the brain, our thoughts, emotions, and memories could be replicated in moving assemblies of Tinkertoys. The Tinkertoy minds would have to be very big to represent the complexity of our minds, but it nevertheless could be done, in the same way people have made computers out of 10,000 Tinkertoys. In principle, our minds could be hypostatized in patterns of twigs, in the movements of leaves, or in the flocking of birds. The philosopher and mathematician Gottfried Leibniz liked to imagine a machine capable of conscious experiences and perceptions. He said that even if this machine were as big as a mill and we could explore inside, we would find "nothing but pieces which push one against the other and never anything to account for a perception."

If our thoughts and consciousness do not depend on the actual substances in our brains but rather on the structures, patterns, and relationships between parts, then Tinkertoy minds could think. If you could make a copy of your brain with the same structure but using different materials, the copy would think it was you. This seemingly materialistic approach to mind does not diminish the hope of an afterlife, of transcendence, of communion with entities from parallel universes, or even of God. Even Tinkertoy minds can dream, seek salvation and bliss—and pray.
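Pickover's premise, that a computation depends on its pattern of relationships rather than its material, can be sketched in a few lines: the same function realized in two unrelated "substrates" behaves identically. The NAND construction below is a standard logic-design exercise, not anything specific to Pickover:

```python
# Substrate 1: XOR as a bare lookup table.
def xor_table(a, b):
    return {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}[(a, b)]

# Substrate 2: XOR as a network of four NAND gates -- the gates could
# equally be transistors, Tinkertoys, or pecking pigeons.
def nand(a, b):
    return 1 - (a & b)

def xor_gates(a, b):
    c = nand(a, b)
    return nand(nand(a, c), nand(b, c))

# The two "materials" are indistinguishable from outside:
assert all(xor_table(a, b) == xor_gates(a, b)
           for a in (0, 1) for b in (0, 1))
```

Nothing about the lookup table or the gate network privileges silicon; if minds are patterns in this sense, the substrate is negotiable, which is exactly the Leibniz-mill point Pickover is pressing.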

Alison Gopnik, Psychologist, UC Berkeley; Author, The Gardener and the Carpenter

I believe, but cannot prove, that babies and young children are actually more conscious, more vividly aware of their external world and internal life, than adults are. I believe this because there is strong evidence for a functional trade-off with development. Young children are much better than adults at learning new things and flexibly changing what they think about the world. On the other hand, they are much worse at using their knowledge to act in a swift, efficient and automatic way. They can learn three languages at once but they can't tie their shoelaces. 

This trade-off makes sense from an evolutionary perspective. Our species relies more on learning than any other, and has a longer childhood than any other. Human childhood is a protected period in which we are free to learn without being forced to act. There is even some neurological evidence for this. Young children actually have substantially more neural connections than adults—more potential to put different kinds of information together. With experience, some connections are strengthened and many others disappear entirely. As the neuroscientists say, we gain conductive efficiency but lose plasticity.

What does this have to do with consciousness? Consider the experiences we adults associate with these two kinds of functions. When we know how to do something really well and efficiently, we typically lose, or at least, reduce, our conscious awareness of that action. We literally don't see the familiar houses and streets on the well-worn route home, although, of course, in some functional sense we must be visually taking them in. In contrast, as adults when we are faced with the unfamiliar, when we fall in love with someone new, or when we travel to a new place, our consciousness of what is around us and inside us suddenly becomes far more vivid and intense. In fact, we are willing to expend lots of money, and lots of emotional energy, for those few intensely alive days in Paris or Beijing that we will remember long after months of everyday life have vanished.

Similarly, as adults when we need to learn something new, say when we learn to skydive, or work out a new scientific idea, or even deal with a new computer, we become vividly, even painfully, conscious of what we are doing—we need, as we say, to pay attention. As we become expert we need less and less attention, and we experience the actual movements and thoughts and keystrokes less and less. We sometimes say that adults are better at paying attention than children, but really we mean just the opposite. Adults are better at not paying attention. They're better at screening out everything else and restricting their consciousness to a single focus. Again there is a certain amount of brain evidence for this. Some brain areas, like the dorsolateral prefrontal cortex, consistently light up for adults when they are deeply engaged in learning something new. But for more everyday tasks, these areas light up much less. For children, though, the pattern is different—these areas light up even for mundane tasks.

I think that, for babies, every day is first love in Paris. Every wobbly step is skydiving, every game of hide and seek is Einstein in 1905.

The astute reader will note that this is just the opposite of what Dan Dennett believes but cannot prove. And this brings me to a second thing I believe but cannot prove. I believe that the problem of capital-C Consciousness will disappear in psychology just as the problem of Life disappeared in biology. Instead we'll develop much more complex, fine-grained and theoretically driven accounts of the connections between particular types of phenomenological experience and particular functional and neurological phenomena. The vividness and intensity of our attentive awareness, for example, may be completely divorced from our experience of a constant first-person I. Babies may be more conscious in one way and less in the other. The consciousness of pain may be entirely different from the consciousness of red which may be entirely different from the babbling stream of Joyce and Woolf.

Joseph LeDoux, Neuroscientist, New York University; Author, Anxious

For me, this is an easy question. I believe that animals have feelings and other states of consciousness, but neither I, nor anyone else, has been able to prove it. We can't even prove that other people are conscious, much less other animals. In the case of other people, though, we at least can have a little confidence since all people have brains with the same basic configurations. But as soon as we turn to other species and start asking questions about feelings, and consciousness in general, we are in risky territory because the hardware is different.

When a rat is in danger, it does things that many other animals do. That is, it either freezes, runs away or fights back. People pretty much do the same things. Some scientists say that because a rat and a person act the same in similar situations, they have the same kinds of subjective experiences. I don't think we can really say this.

There are two aspects of brain hardware that make it difficult for us to generalize from our personal subjective experiences to the experiences of other animals. One is the fact that the circuits most often associated with human consciousness involve the lateral prefrontal cortex (via its role in working memory and executive control functions). This broad zone is much more highly developed in people than in other primates, and whether it exists at all in non-primates is questionable. So certainly for those aspects of consciousness that depend on the prefrontal cortex, including aspects that allow us to know who we are and to make plans and decisions, there is reason to believe that even other primates might be different than people. The other aspect of the brain that differs dramatically is that humans have natural language. Because so much of human experience is tied up with language, consciousness is often said to depend on language. If so, then most other animals are ruled out of the consciousness game. But even if consciousness doesn't depend on language, language certainly changes consciousness so that whatever consciousness another animal has it is likely to differ from most of our states of consciousness.

For these reasons, I think it is hard to know what consciousness might be like in another animal. If we can't measure it (because it is internal and subjective) and can't use our own experience to frame questions about it (because the hardware that makes it possible is different), it becomes difficult to study.

Most of what I have said applies mainly to the content of conscious experience. But there is another aspect of consciousness that is less problematic scientifically. It is possible to study the processes that make consciousness possible even if we can't study the content of consciousness in other animals. This is exactly what is done in studies of working memory in non-human primates. One approach that has had some success in the area of conscious content in non-human primates has focused on a limited kind of consciousness, visual awareness. But this approach, by Koch and Crick, mainly gets at the neural correlates of consciousness rather than the causal mechanisms. The correlates and the mechanisms may be the same, but they may not be. Interestingly, this approach also emphasizes the importance of the prefrontal cortex in making visual awareness possible.

So what about feelings? My view is that a feeling is what happens when an emotion system, like the fear system, is active in a brain that can be aware of its own activities. That is, what we call "fear" is the mental state that we are in when the activity of the defense system of the brain (or the consequences of its activity, such as bodily responses) is what is occupying working memory. Viewed this way, feelings are strongly tied to those areas of the cortex that are fairly unique to primates and especially well developed in people. When you add natural language to the brain, in addition to getting fairly basic feelings you also get fine gradations due to the ability to use words and grammar to discriminate and categorize states and to attribute them not just to ourselves but to others.

There are other views about feelings. Damasio argues that feelings are due to more primitive activity in body sensing areas of the cortex and brainstem. Panksepp has a similar view, though he focuses more on the brainstem. Because this network has not changed much in the course of human evolution, it could therefore be involved in feelings that are shared across species. I don't object to this on theoretical grounds, but I don't think it can be proven because feelings can't be measured in other animals. Panksepp argues that if it looks like fear in rats and people, it probably feels like fear in both species. But how do you know that rats and people feel the same when they behave the same? A cockroach will escape from danger—does it, too, feel fear as it runs away? I don't think behavioral similarity is sufficient grounds for proving experiential similarity. Neural similarity helps—rats and people have similar brainstems, and a roach doesn't even have a brain. But is the brainstem responsible for feelings? Even if it were proven in people, how would you prove it in a rat?

So now we're back where we started. I think rats and other mammals, and maybe even roaches (who knows?), have feelings. But I don't know how to prove it. And because I have reason to think that their feelings might be different from ours, I prefer to study emotional behavior in rats rather than emotional feelings. I study rats because you can make progress at the neural level, provided that the thing you measure is the same in rats and people. I wouldn't study language and consciousness in rats, so I don't study feelings either, because I don't know that they exist. I may be accused of being short-sighted for this, but I'd rather make progress on something I can study in rats than beat my head against the consciousness wall in these creatures.

There's lots to learn about emotion through rats that can help people with emotional disorders. And there's lots we can learn about feelings from studying humans, especially now that we have powerful functional imaging techniques. I'm not a radical behaviorist. I'm just a practical emotionalist.

Susan Blackmore Psychologist; Author, Consciousness: An Introduction

It is possible to live happily and morally without believing in free will. As Samuel Johnson said "All theory is against the freedom of the will; all experience is for it." With recent developments in neuroscience and theories of consciousness, theory is even more against it than it was in his time, more than 200 years ago. So I long ago set about systematically changing the experience. I now have no feeling of acting with free will, although the feeling took many years to ebb away.

But what happens? People say I'm lying! They say it's impossible and so I must be deluding myself to preserve my theory. And what can I do or say to challenge them? I have no idea—other than to suggest that other people try the exercise, demanding as it is.

When the feeling is gone, decisions just happen with no sense of anyone making them, but then a new question arises—will the decisions be morally acceptable? Here I have made a great leap of faith (or the memes and genes and world have done so). It seems that when people throw out the illusion of an inner self who acts, as many mystics and Buddhist practitioners have done, they generally do behave in ways that we think of as moral or good. So perhaps giving up free will is not as dangerous as it sounds—but this too I cannot prove.

As for giving up the sense of an inner conscious self altogether—this is very much harder. I just keep on seeming to exist. But though I cannot prove it—I think it is true that I don't.

Steven Pinker Johnstone Family Professor, Department of Psychology, Harvard University; Author, The Sense of Style

In 1974, Marvin Minsky wrote that "there is room in the anatomy and genetics of the brain for much more mechanism than anyone today is prepared to propose." Today, many advocates of evolutionary and domain-specific psychology are in fact willing to propose the richness of mechanism that Minsky called for thirty years ago. For example, I believe that the mind is organized into cognitive systems specialized for reasoning about objects, space, numbers, living things, and other minds; that we are equipped with emotions triggered by other people (sympathy, guilt, anger, gratitude) and by the physical world (fear, disgust, awe); that we have different ways of thinking and feeling about people in different kinds of relationships to us (parents, siblings, other kin, friends, spouses, lovers, allies, rivals, enemies); and that we have several peripheral systems for communicating with others (language, gesture, facial expression).

When I say I believe this but cannot prove it, I don't mean that it's a matter of raw faith or even an idiosyncratic hunch. In each case I can provide reasons for my belief, both empirical and theoretical. But I certainly can't prove it, or even demonstrate it in the way that molecular biologists demonstrate their claims, namely in a form so persuasive that skeptics can't reasonably attack it, and a consensus is rapidly achieved. The idea of a richly endowed human nature is still unpersuasive to many reasonable people, who often point to certain aspects of neuroanatomy, genetics, and evolution that appear to speak against it. I believe, but cannot prove, that these objections will be met as the sciences progress.

At the level of neuroanatomy and neurophysiology, critics have pointed to the apparent homogeneity of the cerebral cortex and to the seeming interchangeability of cortical tissue in experiments in which patches of cortex are rewired or transplanted in animals. I believe that the homogeneity is an illusion, owing to the fact that the brain is a system for information processing. Just as all books look the same to someone who does not understand the language in which they are written (since they are all composed of different arrangements of the same alphanumeric characters), and the DVDs of all movies look the same under a microscope, the cortex may look homogeneous to the eye but nonetheless contain different patterns of connectivity and synaptic biases that allow it to compute very different functions. I believe these differences will be revealed in different patterns of gene expression in the developing cortex. I also believe that the apparent interchangeability of cortex occurs only in early stages of sensory systems that happen to have similar computational demands, such as isolating sharp signal transitions in time and space.

At the level of genetics, critics have pointed to the small number of genes in the human genome (now thought to be less than 25,000) and to their similarity to those of other animals. I believe that geneticists will find that there is a large store of information in the noncoding regions of the genome (the so-called junk DNA), whose size, spacing, and composition could have large effects on how genes are expressed. That is, the genes themselves may code largely for the meat and juices of the organism, which are pretty much the same across species, whereas how they are sculpted into brain circuits may depend on a much larger body of genetic information. I also believe that many examples of what we call "the same genes" in different species may differ in tiny ways at the sequence level that have large consequences for how the organism is put together.

And at the level of evolution, critics have pointed to how difficult it is to establish the adaptive function of a psychological trait. I believe this will change as we come to understand the genetic basis of psychological traits in more detail. New techniques in genomic analysis, which look for statistical fingerprints of selection in the genome, will show that many genes involved in cognition and emotion were specifically selected for in the primate, and in many cases the human, lineage.

Neil Gershenfeld Physicist; Director, MIT's Center for Bits and Atoms; Co-author, Designing Reality

What do you believe is true even though you cannot prove it?


The enterprise that employs me, seeking to understand and apply insight into how the world works, is ultimately based on the belief that this is a good thing to do. But it's something of a leap of faith to believe that that will leave the world a better place—the evidence to date is mixed for technical advances monotonically mapping onto human advances.

Naturally, this question has a technical spin for me. My current passion is the creation of tools for personal fabrication based on additive digital assembly, so that the uses of advanced technologies can be defined by their users. It's still no more than an assumption that that will lead to more good things than bad things being made, but, like the accumulated experience that democracy works better than monarchy, I have more faith in a future based on widespread access to the means for invention than one based on technocracy.

Keith Devlin Mathematician; Executive Director, H-STAR Institute, Stanford; Author, The Man of Numbers: Fibonacci's Arithmetic Revolution

Before we can answer this question we need to agree what we mean by proof. (This is one of the reasons why it's good to have mathematicians around. We like to begin by giving precise definitions of what we are going to talk about, a pedantic tendency that sometimes drives our physicist and engineering colleagues crazy.) For instance, following Descartes, I can prove to myself that I exist, but I can't prove it to anyone else. Even to those who know me well there is always the possibility, however remote, that I am merely a figment of their imagination. If it's rock solid certainty you want from a proof, there's almost nothing beyond our own existence (whatever that means and whatever we exist as) that we can prove to ourselves, and nothing at all we can prove to anyone else.

Mathematical proof is generally regarded as the most certain form of proof there is, and in the days when Euclid was writing his great geometry text, the Elements, that was surely true in an ideal sense. But many of the proofs of geometric theorems Euclid gave were subsequently found to be incorrect—David Hilbert corrected many of them in the late nineteenth century, after centuries of mathematicians had believed them and passed them on to their students—so even in the case of a ten-line proof in geometry it can be hard to tell right from wrong.

When you look at some of the proofs that have been developed in the last fifty years or so, using incredibly complicated reasoning that can stretch into hundreds of pages or more, certainty is even harder to maintain. Most mathematicians (including me) believe that Andrew Wiles proved Fermat's Last Theorem in 1994, but did he really? (I believe it because the experts in that branch of mathematics tell me they do.)

In late 2002, the Russian mathematician Grigori Perelman posted on the Internet what he claimed was an outline for a proof of the Poincaré Conjecture, a famous, century-old problem of the branch of mathematics known as topology. After examining the argument for two years now, mathematicians are still unsure whether it is right or not. (They think it "probably is.")

Or consider Thomas Hales, who has been waiting for six years to hear if the mathematical community accepts his 1998 proof of astronomer Johannes Kepler's 360-year-old conjecture that the most efficient way to pack equal-sized spheres (such as cannonballs on a ship, which is how the question arose) is to stack them in the familiar pyramid-like fashion that greengrocers use to stack oranges on a counter. After examining Hales' argument (part of which was carried out by computer) for five years, in spring of 2003 a panel of world experts declared that, whereas they had not found any irreparable error in the proof, they were still not sure it was correct.
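The density that Kepler's conjecture asserts is optimal—the fraction of space filled by the greengrocer's stacking—is π/√18, roughly 74%. As a small illustrative sketch (not part of Hales' proof, which is vastly more involved), that figure can be derived from the face-centered-cubic unit cell, which contains four spheres of radius a/(2√2) for cell side a:

```python
from math import pi, sqrt

# Face-centered-cubic (the "greengrocer") packing: a cubic unit cell of
# side a contains the equivalent of 4 spheres, each of radius a/(2*sqrt(2)),
# since spheres touch along the face diagonal.
a = 1.0
r = a / (2 * sqrt(2))
spheres_volume = 4 * (4 / 3) * pi * r**3  # four spheres per unit cell
density = spheres_volume / a**3

print(f"packing density = {density:.5f}")  # pi/sqrt(18) = 0.74048...
```

The computed value matches the closed form π/√18 ≈ 0.74048; Hales' achievement was showing that no packing whatsoever can do better.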

With the idea of proof so shaky—in practice—even in mathematics, answering this year's Edge question becomes a tricky business. The best we can do is come up with something that we believe but cannot prove to our own satisfaction. Others will accept or reject what we say depending on how much credence they give us as a scientist, philosopher, or whatever, generally basing that decision on our scientific reputation and record of previous work. At times it can be hard to avoid the whole thing degenerating into a slanging match. For instance, I happen to believe, firmly, that staples of popular science books and breathless TV specials such as ESP and morphic resonance are complete nonsense, but I can't prove they are false. (Nor, despite their repeated claims to the contrary, have the proponents of those crackpot theories proved they are true, or even worth serious study, and if they want the scientific community to take them seriously then the onus is very much on them to make a strong case, which they have so far failed to do.)

Once you recognize that proof is, in practical terms, an unachievable ideal, even the old mathematician's standby of Gödel's Incompleteness Theorem (which on first blush would allow me to answer the Edge question with a statement of my belief that arithmetic is free of internal contradictions) is no longer available. Gödel's theorem showed that you cannot prove an axiomatically based theory like arithmetic is free of contradiction within that theory itself. But that doesn't mean you can't prove it in some larger, richer theory. In fact, in the standard axiomatic set theory, you can prove arithmetic is free of contradictions. And personally, I buy that proof. For me, as a living, human mathematician, the consistency of arithmetic has been proved—to my complete satisfaction.

So to answer the Edge question, you have to take a common sense approach to proof—in this case proof being, I suppose, an argument that would convince the intelligent, professionally skeptical, trained expert in the appropriate field. In that spirit, I could give any number of specific mathematical problems that I believe are true but cannot prove, starting with the famous Riemann Hypothesis. But I think I can be of more use by using my mathematician's perspective to point out the uncertainties in the idea of proof. Which I believe (but cannot prove) I have.

Janna Levin Professor of Physics and Astronomy, Barnard College of Columbia University; Author, Black Hole Blues and Other Songs from Outer Space

I believe there is an external reality and you are not all figments of my imagination. My friend asks me through the steam he blows off the surface of his coffee, how I can trust the laws of physics back to the origins of the universe. I ask him how he can trust the laws of physics down to his cup of coffee. He shows every confidence that the scalding liquid will not spontaneously defy gravity and fly up in his eyes. He lives with this confidence born of his empirical experience of the world. His experiments with gravity, heat, and light began in childhood when he palpated the world to test its materials. Now he has a refined and well-developed theory of physics, whether expressed in equations or not.

I simultaneously believe more and less than he does. It is rational to believe what all of my empirical and logical tests of the world confirm—that there is a reality that exists independent of me. That the coffee will not fly upwards. But it is a belief nonetheless. Once I've gone that far, why stop at the perimeter of mundane experience? Just as we can test the temperature of a hot beverage with a tongue, or a thermometer, we can test the temperature of the primordial light left over from the big bang. One is no less real than the other simply because it is remarkable.

But how do I really know? If I measure the temperature of boiling water, all I really know is that mercury climbs a glass tube. Not even that, all I really know is that I see mercury climb a glass tube. But maybe the image in my mind's eye isn't real. Maybe nothing is real, not the mercury, not the glass, not the coffee, not my friend. They are all products of a florid imagination. There is no external reality, just me. Einstein? My creation. Picasso? My mind's forgery. But this solipsism is ugly and arrogant. How can I know that mathematics and the laws of physics can be reasoned down to the moment of creation of time, space, the entire universe? In the very same way that my friend believes in the reality of the second double cappuccino he orders. In formulating our beliefs, we are honest and critical and able to admit when we are wrong—and these are the cornerstones of truth.

When I leave the café, I believe the room of couches and tables is still on the block at 122nd Street, that it is still full of people, and that they haven't evaporated when my attention drifts away. But if I am wrong and there is no external reality, then not only is this essay my invention, but so is the web, edge.org, all of its participants and their ingenious ideas. And if you are reading this, I have created you too. Then again, if I am wrong and there is no external reality, maybe it is me who is a figment of your imagination and the cosmos outside your door is your magnificent creation.

Lawrence M. Krauss Theoretical Physicist; Foundation Professor, School of Earth and Space Exploration and Physics Department, ASU; Author, The Greatest Story Ever Told . . . So Far

I believe our universe is not unique. As science has evolved, our place within the universe has continued to diminish in significance.

First it was felt that the Earth was the center of the universe, then that our Sun was the center, and so on. Ultimately we now realize that we are located at the edge of a random galaxy that is itself located nowhere special in a large, potentially infinite universe full of other galaxies. Moreover, we now know that even the stars and visible galaxies themselves are but an insignificant bit of visible pollution in a universe that is otherwise dominated by 'stuff' that doesn't shine.

Dark matter dominates the masses of galaxies and clusters by a factor of 10 compared to normal matter. And now we have discovered that even matter itself is almost insignificant. Instead empty space itself contains more than twice as much energy as that associated with all matter, including dark matter, in the universe. Further, as we ponder the origin of our universe, and the nature of the strange dark energy that dominates it, every plausible theory that I know of suggests that the Big Bang that created our visible universe was not unique. There are likely to be a large, and possibly infinite number of other universes out there, some of which may be experiencing Big Bangs at the current moment, and some of which may have already collapsed inward into Big Crunches. From a philosophical perspective this may be satisfying to some, who find a universe with a definite beginning but no definite end dissatisfying. In this case, in the 'metaverse', or 'multiverse' things may seem much more uniform in time.

At every instant there may be many universes being born, and others dying. But philosophy aside, the existence of many different causally disconnected universes—regions with which we will never ever be able to have direct communication, and thus which will forever be out of reach of direct empirical verification—may have significant impacts on our understanding of our own universe. Their existence may help explain why our own universe has certain otherwise unexpected features, because in a metaverse with a possibly infinite number of different universes, which may themselves vary in their fundamental features, it could be that life like our own would evolve only in universes with a special set of characteristics.

Whether or not this anthropic type of argument is necessary to understand our universe—and I personally hope it isn't—I nevertheless find it satisfying to think that it is likely that not only are we not located in a particularly special place in our universe, but that our universe itself may be relatively insignificant on a larger cosmic scale. It represents perhaps the ultimate Copernican Revolution.

Leonard Susskind Felix Bloch Professor in Theoretical Physics, Stanford; Author, The Cosmic Landscape; The Black Hole Wars

Conversation With a Slow Student

Student: Hi Prof. I've got a problem. I decided to do a little probability experiment—you know, coin flipping—and check some of the stuff you taught us. But it didn't work.

Professor: Well I'm glad to hear that you're interested. What did you do?

Student: I flipped this coin 1,000 times. You remember, you taught us that the probability to flip heads is one half. I figured that meant that if I flip 1,000 times I ought to get 500 heads. But it didn't work. I got 513. What's wrong?

Professor: Yeah, but you forgot about the margin of error. If you flip a certain number of times then the margin of error is about the square root of the number of flips. For 1,000 flips the margin of error is about 30. So you were within the margin of error.

Student: Ah, now I get it. Every time I flip 1,000 times I will always get something between 470 and 530 heads. Every single time! Wow, now that's a fact I can count on.

Professor: No, no! What it means is that you will probably get between 470 and 530.

Student: You mean I could get 200 heads? Or 850 heads? Or even all heads?

Professor: Probably not.

Student: Maybe the problem is that I didn't make enough flips. Should I go home and try it 1,000,000 times? Will it work better?

Professor: Probably.

Student: Aw come on Prof. Tell me something I can trust. You keep telling me what probably means by giving me more probablies. Tell me what probability means without using the word probably.

Professor: Hmmm. Well how about this: It means I would be surprised if the answer were outside the margin of error.

Student: My god! You mean all that stuff you taught us about statistical mechanics and quantum mechanics and mathematical probability: all it means is that you'd personally be surprised if it didn't work?

Professor: Well, uh...

If I were to flip a coin a million times I'd be damn sure I wasn't going to get all heads. I'm not a betting man but I'd be so sure that I'd bet my life or my soul. I'd even go the whole way and bet a year's salary. I'm absolutely certain the laws of large numbers—probability theory—will work and protect me. All of science is based on it. But, I can't prove it and I don't really know why it works. That may be the reason why Einstein said, "God doesn't play dice." It probably is.