
alisongopnik.com


Alison Gopnik, The Wall Street Journal Columns

Mind & Matter, now once per month

Click on the title for a version (or on the date for The Wall
Street Journal link)*

How Early Do Cultural Differences Start? (11 Jul 2019)

The Explosive Evolution of Consciousness (5 Jun 2019)

[no May 2019 column]

Psychedelics as a Path to Social Learning (25 Apr 2019)

What AI Is Still Far From Figuring Out (20 Mar 2019)

Young Children Make Good Scientists (14 Feb 2019)

A Generational Divide in the Uncanny Valley (10 Jan 2019)

2018

For Gorillas, Being a Good Dad Is Sexy (30 Nov 2018)

The Cognitive Advantages of Growing Older (2 Nov 2018)


Imaginary Worlds of Childhood (20 Sep 2018)

Like Us, Whales May Be Smart Because They’re Social (16 Aug 2018)

For Babies, Life May Be a Trip (18 Jul 2018)

Who’s Most Afraid to Die? A Surprise (6 Jun 2018)

Curiosity Is a New Power in Artificial Intelligence (4 May 2018)

[no April 2018 column]

Grandparents: The Storytellers Who Bind Us (29 Mar 2018)

Are Babies Able to See What Others Feel? (22 Feb 2018)

What Teenagers Gain from Fine-Tuned Social Radar (18 Jan 2018)

2017

The Smart Butterfly's Guide to Reproduction (6 Dec 2017)

The Power of Pretending: What Would a Hero Do? (1 Nov 2017)

The Potential of Young Intellect, Rich or Poor (29 Sep 2017)

Do Men and Women Have Different Brains? (25 Aug 2017)

Whales Have Complex Culture, Too (3 Aug 2017)

How to Get Old Brains to Think Like Young Ones (7 Jul 2017)

What the Blind See (and Don’t) When Given Sight (8 Jun 2017)

How Much Do Toddlers Learn From Play? (11 May 2017)

The Science of ‘I Was Just Following Orders’ (12 Apr 2017)

How Much Screen Time Is Safe for Teens? (17 Mar 2017)

When Children Beat Adults at Seeing the World (16 Feb 2017)

Flying High: Research Unveils Birds’ Learning Power (18 Jan 2017)

2016

When Awe-Struck, We Feel Both Smaller and Larger (22 Dec 2016)

The Brain Machinery Behind Daydreaming (23 Nov 2016)

Babies Show a Clear Bias--To Learn New Things (26 Oct 2016)

Our Need to Make and Enforce Rules Starts Very Young (28 Sep 2016)

Should We Let Toddlers Play with Saws and Knives? (31 Aug 2016)

Want Babies to Learn from Video? Try Interactive (3 Aug 2016)

A Small Fix in Mind-Set Can Keep Students in School (16 Jun 2016)

Aliens Rate Earth: Skip the Primates, Come for the Crows (18 May 2016)

The Psychopath, the Altruist and the Rest of Us (21 Apr 2016)

Young Mice, Like Children, Can Grow Up Too Fast (23 Mar 2016)

How Babies Know That Allies Can Mean Power (25 Feb 2016)

To Console a Vole: A Rodent Cares for Others (26 Jan 2016)

Science Is Stepping Up the Pace of Innovation (1 Jan 2016)

2015

Giving Thanks for the Innovation That Saves Babies (25 Nov 2015)

Who Was That Ghost? Science’s Reassuring Reply (28 Oct 2015)

Is Our Identity in Intellect, Memory or Moral Character? (9 Sep 2015)

Babies Make Predictions, Too (12 Aug 2015)

Aggression in Children Makes Sense - Sometimes (16 July 2015)

Smarter Every Year? Mystery of the Rising IQs (27 May 2015)

Brains, Schools and a Vicious Cycle of Poverty (13 May 2015)

The Mystery of Loyalty, in Life and on ‘The Americans’ (1 May 2015)

How 1-Year-Olds Figure Out the World (15 Apr 2015)

How Children Develop the Idea of Free Will (1 Apr 2015)

How We Learn to Be Afraid of the Right Things (18 Mar 2015)

Learning From King Lear: The Saving Grace of Low Status (4 Mar 2015)

The Smartest Questions to Ask About Intelligence (18 Feb 2015)

The Dangers of Believing that Talent Is Innate (4 Feb 2015)

What a Child Can Teach a Smart Computer (22 Jan 2015)

Why Digital-Movie Effects Still Can’t Do a Human Face (8 Jan 2015)

2014

DNA and the Randomness of Genetic Problems (25 Nov 2014 - out of order)

How Children Get the Christmas Spirit (24 Dec 2014)

Who Wins When Smart Crows and Kids Match Wits? (10 Dec 2014)

How Humans Learn to Communicate with Their Eyes (19 Nov 2014)

A More Supportive World Can Work Wonders for the Aged (5 Nov 2014)

What Sends Teens Toward Triumph or Tribulation (22 Oct 2014)

Campfires Helped Inspire Community Culture (8 Oct 2014)

Poverty’s Vicious Cycle Can Affect Our Genes (24 Sept 2014)

Humans Naturally Follow Crowd Behavior (12 Sept 2014)

Even Children Get More Outraged at ‘Them’ Than at ‘Us’ (27 Aug 2014)

In Life, Who Wins, the Fox or the Hedgehog? (15 Aug 2014)

Do We Know What We See? (31 July 2014)

Why Is It So Hard for Us to Do Nothing? (18 July 2014)

A Toddler’s Soufflés Aren’t Just Child’s Play (3 July 2014)

For Poor Kids, New Proof That Early Help Is Key (13 June 2014)

Rice, Wheat and the Values They Sow (30 May 2014)

What Made Us Human? Perhaps Adorable Babies (16 May 2014)

Grandmothers: The Behind-the-Scenes Key to Human Culture? (2 May 2014)

See Jane Evolve: Picture Books Explain Darwin (18 Apr 2014)

Scientists Study Why Stories Exist (4 Apr 2014)

The Kid Who Wouldn’t Let Go of ‘The Device’ (21 Mar 2014)

Why You’re Not as Clever as a 4-Year-Old (7 Mar 2014)

Are Schools Asking to Drug Kids for Better Test Scores? (21 Feb 2014)

The Psychedelic Road to Other Conscious States (7 Feb 2014)

Time to Retire the Simplicity of Nature vs. Nurture (24 Jan 2014)

The Surprising Probability Gurus Wearing Diapers (10 Jan 2014)

2013

What Children Really Think About Magic (28 Dec 2013)

Trial and Error in Toddlers and Scientists (14 Dec 2013)

Gratitude for the Cosmic Miracle of A Newborn Child (29 Nov 2013)

The Brain’s Crowdsourcing Software (16 Nov 2013)

World Series Recap: May Baseball’s Irrational Heart Keep On Beating (2 Nov 2013)

Drugged-out Mice Offer Insight into the Growing Brain (4 Oct 2013)

Poverty Can Trump a Winning Hand of Genes (20 Sep 2013)

Is It Possible to Reason about Having a Child? (7 Sep 2013)

Even Young Children Adopt Arbitrary Rituals (24 Aug 2013)

The Gorilla Lurking in Our Consciousness (9 Aug 2013)

Does Evolution Want Us to Be Unhappy? (27 Jul 2013)

How to Get Children to Eat Veggies (13 Jul 2013)

What Makes Some Children More Resilient? (29 Jun 2013)

Wordsworth, The Child Psychologist (15 Jun 2013)

Zazes, Flurps and the Moral World of Kids (31 May 2013)

How Early Do We Learn Racial ‘Us and Them’? (18 May 2013)

How the Brain Really Works (4 May 2013)

Culture Begets Marriage - Gay or Straight (21 Apr 2013)

[sneak peek]

For Innovation, Dodge the Prefrontal Police (5 Apr 2013)

Sleeping Like a Baby, Learning at Warp Speed (22 Mar 2013)

Why Are Our Kids Useless? Because We’re Smart (8 Mar 2013)

HOW EARLY DO CULTURAL DIFFERENCES START?

Do our culture and language shape the way we think? A
new paper in the Proceedings of the National Academy of
Sciences, by Caren Walker at the University of California at
San Diego, Alex Carstensen at Stanford and their
colleagues, tried to answer this ancient question. The
researchers discovered that very young Chinese and
American toddlers start out thinking about the world in
similar ways. But by the time they are 3 years old, they are
already showing differences based on their cultures.

Dr. Walker’s research took off from earlier work that she
and I did together at the University of California at Berkeley.
We wanted to know whether children could understand
abstract relationships such as “same” and “different.” We
showed children of various ages a machine that lights up
when you put a block of a certain color and shape on it.
Even toddlers can easily figure out that a green block
makes the machine go while a blue block doesn’t.

But what if the children saw that two objects that were the
same—say, two red square blocks—made the machine light
up, while two objects that were different didn’t? We showed
children this pattern and asked them to make the machine
light up, giving them a choice between a tray with two new
identical objects—say, two blue round blocks—or another
tray with two different objects.

At 18 months old, toddlers had no trouble figuring out that
the relationship between the blocks was the important
thing: They put the two similar objects on the machine. But
much to our surprise, older children did worse. Three-year-
olds had a hard time figuring out that the relationship
between the blocks was what mattered. The 3-year-olds
had actually learned that the individual objects were more
important than the relationships between them.

But these were all American children. Dr. Walker and her
colleagues repeated the machine experiment with children
in China and found a different result. The Chinese toddlers,
like the toddlers in the U.S., were really good at learning the
relationships; but so were the 3-year-olds. Unlike the
American children, they hadn’t developed a bias toward
objects.

In fact, when they saw an ambiguous pattern, which could
either be due to something about the individual objects or
something about the relationships between them, the
Chinese preschoolers actually preferred to focus on the
relationships. The American children focused on the
objects.

The toddlers in both cultures seemed to be equally open to
different ways of thinking. But by age 3, something about
their everyday experiences had already pushed them to
focus on different aspects of the world. Language could be
part of the answer: English emphasizes nouns much more
than Chinese does, which might affect the way speakers of
each language think about objects.

Of course, individuals and relationships are both important
in the social and physical worlds. And cultural conditioning
isn’t absolute: American adults can reason about
relationships, just as Chinese adults can reason about
objects. But the differences in focus and attention, in what
seems obvious and what seems unusual, may play out in
all sorts of subtle differences in the way we think, reason
and act. And those differences may start to emerge when
we are very young children.

THE EXPLOSIVE EVOLUTION OF CONSCIOUSNESS

Where does consciousness come from? When and how did
it evolve? The one person I’m sure is conscious is myself,
of course, and I’m willing to believe that my fellow human
beings, and familiar animals like cats and dogs, are
conscious too. But what about bumblebees and worms? Or
clams, oak trees and rocks?

Some philosophers identify consciousness with the
complex, reflective, self-conscious experiences that we
have when, say, we are sitting in an armchair and thinking
about consciousness. As a result, they argue that even
babies and animals aren’t really conscious. At the other
end of the spectrum, some philosophers have argued for
“pan-psychism,” the idea that consciousness is everywhere,
even in atoms.

Recently, however, a number of biologists and philosophers
have argued that consciousness was born from a specific
event in our evolutionary history: the Cambrian explosion. A
new book, “The Evolution of the Sensitive Soul” by the
Israeli biologists Simona Ginsburg and Eva Jablonka,
makes an extended case for this idea.

For around 100 million years, from about 635 to 542 million
years ago, the first large multicellular organisms emerged
on Earth. Biologists call this period the Ediacaran
Garden—a time when, around the globe, a rich variety of
strange creatures spent their lives attached to the ocean
floor, where they fed, reproduced and died without doing
very much in between. There were a few tiny slugs and
worms toward the end of this period, but most of the
creatures, such as the flat, frond-like, quilted Dickinsonia,
were unlike any plants or animals living today.

Then, quite suddenly by geological standards, most of
these creatures disappeared. Between 530 and 520 million
years ago, they were replaced by a remarkable proliferation
of animals who lived quite differently. These animals
started to move, to have brains and eyes, to seek out prey
and avoid predators. Some of the creatures in the fossil
record seem fantastic—like Anomalocaris, a three-foot-long
insectlike predator, and Opabinia, with its five eyes and
trunk-like proboscis ending in a grasping claw. But they
included the ancestors of all current species of animals,
from insects, crustaceans and mollusks to the earliest
vertebrates, the creatures who eventually turned into us.


PSYCHEDELICS AS A PATH TO SOCIAL LEARNING

How do psychedelic drugs work? And can psychedelic
experiences teach you something? People often say that
these experiences are important, revelatory, life-changing.
But how exactly does adding a chemical to your brain
affect your mind?

The renaissance of scientific psychedelic research may
help to answer these questions. A new study in the journal
Nature by Gul Dolen at Johns Hopkins University and her
colleagues explored how MDMA works in mice. MDMA,
also known as Ecstasy, is the illegal and sometimes very
dangerous “love drug” that fueled raves in the 1980s and is
still around today. Recent research, though, suggests that
MDMA may be effective in treating PTSD and anxiety, and
the FDA has approved further studies to explore these
possibilities. The new study shows exactly how MDMA
influences the brain, at least in mice: It restores early
openness to experience, especially social experience, and
so makes it easier for the mice to learn from new social
information.

In both mice and humans, different parts of the brain are
open to different kinds of information at different times.
Neuroscientists talk about “plasticity”—the ability of the
brain to change as a result of new experiences. Our brains
are more plastic in childhood and then become more rigid
as we age. In humans, the visual system has a “sensitive
period” in the first few years when it can be rewired by
experience—that is why it’s so important to correct babies’
vision problems. There is a different sensitive period for
language: Language learning gets noticeably harder at
puberty.

Similarly, Dr. Dolen found that there was a sensitive period
for social learning in mice. The mice spent time with other
mice in one colored cage and spent time alone in a
different-colored cage. The young mice would learn to
move toward the color that was associated with the social
experience, and this learning reached a peak in
adolescence (“Hey, let’s hit the red club where the cool kids
hang out!”). Normally, the adult mice stopped learning this
connection (“Who cares? I’d rather just stay home and
watch TV”).

The researchers showed that this was because the younger
mice had more plasticity in the nucleus accumbens, a part
of the brain that is involved in motivation and learning. But
after a dose of MDMA, the adult mice were able to learn
once more, and they continued to learn the link for several
weeks afterward.

Unlike other psychedelics, MDMA makes people feel
especially close to the other people around them. (Ravers
make “cuddle puddles,” a whole group of people locked in a
collective embrace.) The new study suggests that this has
something to do with the particular chemical profile of the
drug. The social plasticity effect depended on a
combination of two different “neurotransmitters”:
serotonin, which seems to be involved in plasticity and
psychedelic effects in general, and oxytocin, the “tend and
befriend” chemical that is particularly involved in social
closeness and trust. So MDMA seems to work by making
the brain more open in general but especially more open to
being with and learning about others.

This study, like an increasing number of others, suggests
that psychedelic chemicals can make the brain more open
to learning and change. What gets learned or changed,
though, depends on the particular chemical, the particular
input that reaches the brain, and the particular information
that reaches the mind.

A psychedelic experience might be just entertaining, or
even terrifying or destructive—certainly not something for
casual experimentation. But in the right therapeutic setting,
it might actually be revelatory.

WHAT AI IS STILL FAR FROM FIGURING OUT

Everybody’s talking about artificial intelligence. Some
people even argue that AI will lead, quite literally, to either
immortality or the end of the world. Neither of those
possibilities seems terribly likely, at least in the near future.
But there is still a remarkable amount of debate about just
what AI can do and what it means for all of us human
intelligences.

A new book called “Possible Minds: 25 Ways of Looking at
AI,” edited by John Brockman, includes a range of big-
picture essays about what AI can do and what it might
mean for the future. The authors include people who are
working in the trenches of computer science, like Anca
Dragan, who designs new kinds of AI-directed robots, and
Rodney Brooks, who invented the Roomba, a robot vacuum
cleaner. But it also includes philosophers like Daniel
Dennett, psychologists like Steven Pinker and even art
experts like the famous curator Hans Ulrich Obrist.

I wrote a chapter about why AI still can’t solve problems
that every 4-year-old can easily master. Although DeepMind’s AlphaZero can beat a grandmaster at computer
chess, it would still bomb at Attie Chess—the version of the
game played by my 3-year-old grandson Atticus. In Attie
Chess, you throw all of the pieces into the wastebasket,
pick each one up, try to put them on the board and then
throw them all in the wastebasket again. This apparently
simple physical task is remarkably challenging even for the
most sophisticated robots.

But reading through all the chapters, I began to sense that
there’s a more profound way in which human intelligence is
different from artificial intelligence, and there’s another
reason why Attie Chess may be important.

The trick behind the recent advances in AI is that a human
specifies a particular objective for the machine. It might be
winning a chess game or distinguishing between pictures
of cats and dogs on the internet. But it might also be
something more important, like judging whether a work of
art deserves to be in a museum or a defendant deserves to
be in prison.

The basic technique is to give the computer millions of
examples of games, images or previous judgments and to
provide feedback. Which moves led to a high score? Which
pictures did people label as dogs? What did the curators or
judges decide in particular cases? The computer can then
use machine learning techniques to try to figure out how to
achieve the same objectives. In fact, machines have gotten
better and better at learning how to win games or match
human judgments. They often detect subtle statistical cues
in the data that humans can’t even understand.
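To make that concrete, here is a deliberately tiny, hypothetical sketch in Python of the kind of training loop described above: the machine sees many labeled examples, gets feedback on its guesses and nudges its internal weights until its judgments match the humans’. The data, features and numbers are all invented for illustration; real systems use vastly larger models and datasets.

import numpy as np

# Toy stand-in for "millions of examples": each row is a feature vector
# (say, crude image statistics) and each label is a human judgment
# (1 = "dog", 0 = "cat"). Everything here is made up for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
hidden_rule = np.array([1.5, -2.0, 0.5])    # the pattern the "humans" follow
y = (X @ hidden_rule + rng.normal(scale=0.5, size=200) > 0).astype(float)

# Logistic-regression weights, adjusted from feedback on each guess.
w = np.zeros(3)
learning_rate = 0.1
for _ in range(500):
    guesses = 1 / (1 + np.exp(-(X @ w)))                  # machine's current judgments
    w -= learning_rate * X.T @ (guesses - y) / len(y)     # move the weights toward the labels

agreement = ((1 / (1 + np.exp(-(X @ w))) > 0.5) == y).mean()
print(f"agreement with human labels: {agreement:.0%}")

Run on this toy data, the weights end up approximating the hidden rule, which is all that "matching human judgments" means here; the objective itself is still supplied from outside.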

But people also can decide to change their objectives. A
great judge can argue that slavery should be outlawed or
that homosexuality should no longer be illegal. A great
curator can make the case for an unprecedented new kind
of art, like Cubism or Abstract Expressionism, that is very
different from anything in the past. We invent brand new
games and play them in new ways. In fact, when children
play, they practice setting themselves new objectives, even
when, as in Attie Chess, those goals look pretty silly from
the adult perspective.

Indeed, the point of each new generation is to create new
objectives—new games, new categories and new
judgments. And yet, somehow, in a way that we don’t
understand at all, we don’t merely slide into relativism. We
can decide what is worth doing in a way that AI can’t.

Any new technology, from fire to Facebook, from internal
combustion to the internet, brings unforeseen dangers and
unintended consequences. Regulating and controlling
those technologies is one of the great tasks of each
generation, and there are no guarantees of success. In that
regard, we have more to fear from natural human stupidity
than artificial intelligence. But, so far at least, we are the
only creatures who can decide not only what we want but
whether we should want it.

YOUNG CHILDREN MAKE GOOD SCIENTISTS

We all know that it’s hard to get people to change their
minds, even when they should. Studies show that when
people see evidence that goes against their deeply
ingrained beliefs, they often just dig in more firmly: Climate
change deniers and anti-vaxxers are good examples. But
why? Are we just naturally resistant to new facts? Or are
our rational abilities distorted by biases and prejudices?

Of course, sometimes it can be perfectly rational to resist
changing your beliefs. It all depends on how much
evidence you have for those beliefs in the first place, and
how strongly you believe them. You shouldn’t overturn the
periodic table every time a high-school student blunders in
a chemistry lab and produces a weird result. In statistics,
new methods increasingly give us a precise way of
calculating the balance between old beliefs and new
evidence.
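For readers who like to see the arithmetic, here is a small, hypothetical Python sketch of that kind of calculation: a learner starts undecided between two hypotheses ("the color matters" vs. "the shape matters") and updates its belief with Bayes’ rule as each demonstration comes in. The prior and likelihood numbers are invented for illustration and are not taken from the study described below.

# Two competing beliefs about what makes the machine light up.
prior = {"color rule": 0.5, "shape rule": 0.5}      # start undecided

# Assumed likelihoods: a demonstration consistent with a rule is very
# probable under that rule and improbable under the rival rule.
likelihood = {"color rule": {"color demo": 0.9, "shape demo": 0.1},
              "shape rule": {"color demo": 0.1, "shape demo": 0.9}}

def update(belief, observation):
    # One step of Bayes' rule: posterior is proportional to prior times likelihood.
    unnormalized = {h: belief[h] * likelihood[h][observation] for h in belief}
    total = sum(unnormalized.values())
    return {h: value / total for h, value in unnormalized.items()}

# One color demonstration followed by four shape demonstrations,
# the pattern that led children to switch beliefs.
belief = dict(prior)
for observation in ["color demo"] + ["shape demo"] * 4:
    belief = update(belief, observation)
print(belief)   # the shape rule now dominates (roughly 99% to 1%)

With more evidence for the old belief, the same arithmetic keeps it in place, which is the balance described above.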

Over the past 15 years, my lab and others have shown that,
to a surprising extent, even very young children reason in
this way. The conventional wisdom is that young children
are irrational. They might stubbornly cling to their beliefs,
no matter how much evidence they get to the contrary, or
they might be irrationally prone to change their minds—
flitting from one idea to the next regardless of the facts.

In a new study in the journal Child Development, my
student Katie Kimura and I tested whether, on the contrary,
children can actually change their beliefs rationally. We
showed 4-year-old children a group of machines that lit up
when you put blocks on them. Each machine had a plaque
on the front with a colored shape on it. First, children saw
that the machine would light up if you put a block on it that
was the same color as the plaque, no matter what shape it
was. A red block would activate a red machine, a blue block
would make a blue machine go and so on. Children were
actually quite good at learning this color rule: If you showed
them a new yellow machine, they would choose a yellow
block to make it go.

But then, without telling the children, we changed the rule
so that the shape rather than the color made the machine
go. Some children saw the machine work on the color rule
four times and then saw one example of the shape rule.
They held on to their first belief and stubbornly continued
to pick the block with the same color as the machine.

But other children saw the opposite pattern—just one
example of the color rule, followed by four examples of the
shape rule. Those children rationally switched to the shape
rule: If the plaque showed a red square, they would choose
a blue square rather than a red circle to make the machine
go. In other words, the children acted like good scientists. If
there was more evidence for their current belief they held
on to it, but if there was more evidence against it then they
switched.

Of course, the children in this study had some advantages
that adults don’t. They could see the evidence with their
own eyes and they trusted the experimenter. Most of our
scientific evidence about how the world works—evidence
about climate change or vaccinations, for example—comes
to us in a much more roundabout way and depends on a
long chain of testimony and trust.

Applying our natural scientific reasoning abilities in these
contexts is more challenging, but there are hopeful signs. A
new paper in PNAS by Gordon Pennycook and David Rand
at Yale shows that ordinary people are surprisingly good at
rating how trustworthy sources of information are,
regardless of ideology. The authors suggest that social
media algorithms could incorporate these trust ratings,
which would help us to use our inborn rationality to make
better decisions in a complicated world.

A GENERATIONAL DIVIDE IN THE UNCANNY VALLEY

Over the holidays, my family confronted a profound
generational divide. My grandchildren became obsessed
with director Robert Zemeckis’s 2009 animated film “A
Christmas Carol,” playing it over and over. But the grown-
ups objected that the very realistic, almost-but-not-quite-
human figures just seemed creepy.

The movie is a classic example of the phenomenon known
as “the uncanny valley.” Up to a point, we prefer animated
characters who look like people, but when those characters
look too much like actual humans, they become weird and
unsettling. In a 2012 paper in the journal Cognition, Kurt
Gray of the University of North Carolina and Dan Wegner of
Harvard demonstrated the uncanny valley systematically.
They showed people images of three robots—one that
didn’t look human at all, one that looked like a cartoon and
one that was more realistic. Most people preferred the
cartoony robot and thought the realistic one was strange.

But where does the uncanny valley come from? And why
didn’t it bother my grandchildren in “A Christmas Carol”?
Some researchers have suggested that the phenomenon is
rooted in an innate tendency to avoid humans who are
abnormal in some way. But the uncanny valley might also
reflect our ideas about minds and brains. A realistic robot
looks as if it might have a mind, even though it isn’t a
human mind, and that is unsettling. In the Gray study, the
more people thought that the robot had thoughts and
feelings, the creepier it seemed.

Kimberly Brink and Henry Wellman of the University of
Michigan, along with Gray, designed a study to determine
whether children experience the uncanny valley. In a 2017
paper in the journal Child Development, they showed 240
children, ages 3 to 18, the same three robots that Gray
showed to adults. They asked the children how weird the
robots were and whether they could think and feel for
themselves. Surprisingly, until the children hit age 9, they
didn’t see anything creepy about the realistic robots. Like
my grandchildren, they were unperturbed by the almost
human.

The development of the uncanny valley in children tracked
their developing ideas about robots and minds. Younger
children had no trouble with the idea that the realistic robot
had a mind. In fact, in other studies, young children are
quite sympathetic to robots—they’ll try to comfort one that
has fallen or been injured. But the older children had a more
complicated view. They began to feel that robots weren’t
the sort of things that should have minds, and this
contributed to their sense that the realistic robot was
creepy.

The uncanny valley turns out to be something we develop,
not something we’re born with. But this raises a possibility
that is itself uncanny. Right now, robots are very far from
actually having minds; it is remarkably difficult to get them
to do even simple things. But suppose a new generation of
children grows up with robots that actually do have minds,
or at least act as if they do. This study suggests that those
children may never experience an uncanny valley at all. In
fact, it is possible that the young children in the study are
already being influenced by the increasingly sophisticated
machines around them. My grandchildren regularly talk to
Alexa and make a point of saying “please” if she doesn’t
answer right away.

Long before there were robots, people feared the almost
human, from the medieval golem to Frankenstein’s
monster. Perhaps today’s children will lead the way in
broadening our sympathies to all sentient beings, even
artificial ones. I hope so, but I don’t think they’ll ever get me
to like the strange creatures in that Christmas movie.

FOR GORILLAS, BEING A GOOD DAD IS SEXY

He was tall and rugged, with piercing blue eyes, blond hair
and a magnificent jawline. And what was that slung across
his chest? A holster for his Walther PPK? When I saw what
the actor Daniel Craig—aka James Bond—was actually
toting, my heart skipped a beat. It was an elegant, high-tech
baby carrier, so that he could snuggle his baby daughter.

When a paparazzo recently snapped this photo of Mr. Craig,
an online kerfuffle broke out after one obtuse commentator
accused him of being “emasculated.” Now science has
come to Mr. Craig’s defense. A new study of gorillas in
Nature Scientific Reports, led by Stacy Rosenbaum and
colleagues at Northwestern University and the Dian Fossey
Fund, suggests that taking care of babies makes you
sexy—at least, if you’re a male gorilla.

The study began with a counterintuitive observation: Even
silverback gorillas, those stereotypically fearsome and
powerful apes, turn out to love babies. Adult male gorillas
consistently groom and even cuddle with little ones. And
the gorillas don’t care only about their own offspring;
they’re equally willing to hang out with other males’ babies.

The researchers analyzed the records of 23 male gorillas
that were part of a group living in the mountains of
Rwanda. From 2003 to 2004, observers recorded how
much time each male spent in close contact with an infant.
By 2014, about 100 babies had been born in the group, and
the researchers used DNA, collected from the gorillas’
feces, to work out how many babies each male had
fathered. Even when they controlled for other factors like
age and status, there turned out to be a strong correlation
between caring for children and sexual success. The males
who were most attentive to infants sired five times more
children than the least attentive. This suggests that
females may have been preferentially selecting the males
who cared for babies.

These results tell us something interesting about gorillas,
but they may also help answer a crucial puzzle about
human evolution. Human babies are much more helpless,
for a much longer time, than those of other species. That
long childhood is connected to our exceptionally large
brains and capacity for learning. It also means that we have
developed a much wider range of caregivers for those
helpless babies. In particular, human fathers help take care
of infants and they “pair bond” with mothers.

We take this for granted, but human fathers are actually
much more monogamous, and invest more in their babies,
than almost any other mammal, including our closest great
ape relatives. (Only 5% of mammal species exhibit pair-
bonding.) On the other hand, humans aren’t as exclusively
monogamous as some other animals—some birds, for
example. And human fathers are optional or voluntary
caregivers, as circumstances dictate. They don’t always
care for their babies, but when they do, they are just as
effective and invested as mothers.

Our male primate ancestors must have evolved from the
typical indifferent and promiscuous mammalian father into
a committed human dad. The gorillas may suggest an
evolutionary path that allowed this transformation to take
place. And a crucial part of that path may be that men have
a fondness for babies in general, whether or not they are
biologically related.

Mr. Craig was on to something: You don’t really need dry
martinis and Aston Martins to appeal to women. A nice
Baby Bjorn will do.

THE COGNITIVE ADVANTAGES OF GROWING OLDER

If, like me, you’re on the wrong side of sixty, you’ve probably
noticed those increasingly frequent and sinister “senior
moments.” What was I looking for when I came into the
kitchen? Did I already take out the trash? What’s old what’s-
his-name’s name again?

One possible reaction to aging is resignation: You’re just
past your expiration date. You may have heard that
centuries ago the average life expectancy was only around
40 years. So you might think that modern medicine and
nutrition are keeping us going past our evolutionary limit.
No wonder the machine starts to break down.

In fact, recent research suggests a very different picture.
The shorter average life expectancy of the past mainly
reflects the fact that many more children died young. If you
made it past childhood, however, you might well live into
your 60s or beyond. In today’s hunter-gatherer cultures,
whose way of life is closer to that of our prehistoric
ancestors, it’s fairly common for people to live into their
70s. That is in striking contrast to our closest primate
relatives, chimpanzees, who very rarely live past their 50s.

There seem to be uniquely human genetic adaptations that
keep us going into old age and help to guard against
cognitive decline. This suggests that the later decades of
our lives are there for a reason. Human beings are uniquely
cultural animals; we crucially depend on the discoveries of
earlier generations. And older people are well suited to
passing on their accumulated knowledge and wisdom to
the next generation.

Michael Gurven, an anthropologist at the University of
California, Santa Barbara, and his colleagues have been
studying aging among the Tsimane, a group in the Bolivian
Amazon. The Tsimane live in a way that is more like the
way we all lived in the past, through hunting, gathering and
small-scale farming of local foods, with relatively little
schooling or contact with markets and cities. Many
Tsimane are in their 60s or 70s, and some even make it to
their 80s.

In a 2017 paper in the journal Developmental Psychology,
Prof. Gurven and colleagues gave over 900 Tsimane people
a battery of cognitive tasks. Older members of the group
had a lot of trouble doing things like remembering a list of
new words. But the researchers also asked their subjects
to quickly name as many different kinds of fish or plants as
they could. This ability improved as the Tsimane got older,
peaking around age 40 and staying high even in old age.

Research on Western urban societies has produced similar
findings. This suggests that our cognitive strengths and
weaknesses change as we age, rather than just undergoing
a general decline. Things like short-term memory and
processing speed—what’s called “fluid intelligence”—peak
in our 20s and decline precipitously in older age. But
“crystallized intelligence”—how much we actually know,
and how well we can access that knowledge—improves up
to middle age, and then declines much more slowly, if at all.

So when I forget what happened yesterday but can tell my
grandchildren and students vivid stories about what
happened 40 years ago, I may not be falling apart after all.
Instead, I may be doing just what evolution intended.

IMAGINARY WORLDS OF CHILDHOOD

In 19th-century England, the Brontë children created
Gondal, an imaginary kingdom full of melodrama and
intrigue. Emily and Charlotte Brontë grew up to write the
great novels “Wuthering Heights” and “Jane Eyre.” The
fictional land of Narnia, chronicled by C.S. Lewis in a series
of classic 20th-century novels, grew out of Boxen, an
imaginary kingdom that Lewis shared with his brother
when they were children. And when the novelist Anne Perry
was growing up in New Zealand in the 1950s, she and
another girl created an imaginary kingdom called Borovnia
as part of an obsessive friendship that ended in murder—
the film “Heavenly Creatures” tells the story.

But what about Abixia? Abixia is an island nation on the
planet Rooark, with its own currency (the iinter, divided into
12 skilches), flag and national anthem. It’s inhabited by cat-
humans who wear flannel shirts and revere Swiss army
knives—the detailed description could go on for pages. And
it was created by a pair of perfectly ordinary Oregon 10-
year-olds.

Abixia is a “paracosm,” an extremely detailed and extensive
imaginary world with its own geography and history. The
psychologist Marjorie Taylor at the University of Oregon
and her colleagues discovered Abixia, and many other
worlds like it, by talking to children. Most of what we know
about paracosms comes from writers who described the
worlds they created when they were children. But in a paper
forthcoming in the journal Child Development, Prof. Taylor
shows that paracosms aren’t just the province of budding
novelists. Instead, they are a surprisingly common part of
childhood.

Prof. Taylor asked 169 children, ages eight to 12, whether
they had an imaginary world and what it was like. They
found that about 17 percent of the children had created
their own complicated universe. Often a group of children
would jointly create a world and maintain it, sometimes for
years, like the Brontë sisters or the Lewis brothers. And
grown-ups were not invited in.

Prof. Taylor also tried to find out what made the paracosm
creators special. They didn’t score any higher than other
children in terms of IQ, vocabulary, creativity or memory.
Interestingly, they scored worse on a test that measured
their ability to inhibit irrelevant thoughts. Focusing on the
stern and earnest real world may keep us from wandering
off into possible ones.

But the paracosm creators were better at telling stories,
and they were more likely to report that they also had an
imaginary companion. In earlier research, Prof. Taylor
found that around 66% of preschoolers have imaginary
companions; many paracosms began with older children
finding a home for their preschool imaginary friends.

Children with paracosms, like children with imaginary
companions, weren’t neurotic loners either, as popular
stereotypes might suggest. In fact, if anything, they were
more socially skillful than other children.

Why do imaginary worlds start to show up when children
are eight to 12 years old? Even when 10-year-olds don’t
create paracosms, they seem to have a special affinity for
them—think of all the young “Harry Potter” fanatics. And as
Prof. Taylor points out, paracosms seem to be linked to all
the private clubhouses, hidden rituals and secret societies
of middle childhood.

Prof. Taylor showed that preschoolers who create
imaginary friends are particularly good at understanding
other people’s minds—they are expert at everyday
psychology. For older children, the agenda seems to shift
to what we might call everyday sociology or geography.
Children may create alternative societies and countries in
their play as a way of learning how to navigate real ones in
adult life.

Of course, most of us leave those imaginary worlds behind
when we grow up—the magic portals close. The mystery
that remains is how great writers keep the doors open for
us all.

LIKE US, WHALES MAY BE SMART BECAUSE THEY'RE SOCIAL

In recent weeks, an orca in the Pacific Northwest carried
her dead calf around with her for 17 days. It looked
remarkably like grief. Indeed, there is evidence that many
cetaceans—that is, whales, dolphins and porpoises—have
strong and complicated family and social ties. Some
species hunt cooperatively; others practice cooperative
child care, taking care of one another’s babies.

The orcas, in particular, develop cultural traditions. Some
groups hunt only seals, others eat only salmon. What’s
more, they are one of very few species with menopausal
grandmothers. Elderly orca females live well past their
fertility and pass on valuable information and traditions to
their children and grandchildren. Other cetaceans have
cultural traditions too: Humpback whales learn their
complex songs from other whales as they pass through the
breeding grounds in the southern Pacific Ocean.

We also know that cetaceans have large and complex
brains, even relative to their large bodies. Is there a
connection? Is high intelligence the result of social and
cultural complexity? This question is the focus of an
important recent study by researchers Kieran Fox, Michael
Muthukrishna and Suzanne Shultz, published last October
in the journal Nature Ecology and Evolution.

Their findings may shed light on human beings as well.
How and why did we get to be so smart? After all, in a
relatively short time, humans developed much larger brains
than their primate relatives, as well as powerful social and
cultural skills. We cooperate with each other—at least most
of the time—and our grandmothers, like grandmother orcas,
pass on knowledge from one generation to the next. Did we
become so smart because we are so social?

Humans evolved millions of years ago, so without a time
machine, it’s hard to find out what actually happened. A
clever alternate approach is to look at the cetaceans.
These animals are very different from us, and their
evolutionary history diverged from ours 95 million years
ago. But if there is an intrinsic relationship between
intelligence and social life, it should show up in whales and
dolphins as well as in humans.

Dr. Fox and colleagues compiled an extensive database,
recording as much information as they could find about 90
different species of cetaceans. They then looked at
whether there was a relationship between the social lives
of these animals and their brain size. They discovered that
species living in midsized social groups, with between two
and 50 members, had the largest brains, followed by
animals who lived in very large pods with hundreds of
animals. Solitary animals had the smallest brains. The
study also found a strong correlation between brain size
and social repertoire: Species who cooperated, cared for
each other’s young and passed on cultural traditions had
larger brains than those who did not.

Which came first, social complexity or larger brains? Dr.
Fox and colleagues conducted sophisticated statistical
analyses that suggested there was a feedback loop
between intelligence and social behavior. Living in a group
allowed for more complex social lives that rewarded bigger
brains. Animals who excelled at social interaction could
obtain more resources, which allowed them to develop yet
bigger brains. This kind of feedback loop might also
account for the explosively fast evolution of human beings.

Of course, intelligence is a relative term. The orcas’
cognitive sophistication and social abilities haven’t
preserved them from the ravages of environmental change.
The orca grieving her dead baby was, sadly, all too typical
of her endangered population. It remains to be seen
whether our human brains can do any better.

FOR BABIES, LIFE MAY BE A TRIP

What is it like to be a baby? Very young children can’t tell us
what their experiences are like, and none of us can
remember the beginnings of our lives. So it would seem
that we have no way of understanding baby consciousness,
or even of knowing if babies are conscious at all.

But some fascinating new neuroscience research is
changing that. It turns out that when adults dream or have
psychedelic experiences, their brains are functioning more
like children’s brains. It appears that the experience of
babies and young children is more like dreaming or tripping
than like our usual grown-up consciousness.

As we get older, the brain’s synapses—the connections
between neurons—start to change. The young brain is very
“plastic,” as neuroscientists say: Between birth and about
age 5, the brain easily makes new connections. A
preschooler’s brain has many more synapses than an adult
brain. Then comes a kind of tipping point. Some
connections, especially the ones that are used a lot,
become longer, stronger and more efficient. But many
other connections disappear—they are “pruned.”

What’s more, different areas of the brain are active in
children and adults. Parts of the back of the brain are
responsible for things like visual processing and
perception. These areas mature quite early and are active
even in infancy. By contrast, areas at the very front of the
brain, in the prefrontal cortex, aren’t completely mature
until after adolescence. The prefrontal cortex is the
executive office of the brain, responsible for focus, control
and long-term planning.

Like most adults, I spend most of my waking hours thinking
about getting things done. Scientists have discovered that
when we experience the world in this way, the brain sends
out signals along the established, stable, efficient networks
that we develop as adults. The prefrontal areas are
especially active and have a strong influence on the rest of
the brain. In short, when we are thinking like grown-ups, our
brains look very grown-up too.

But recently, neuroscientists have started to explore other
states of consciousness. In research published in Nature in
2017, Giulio Tononi of the University of Wisconsin and
colleagues looked at what happens when we dream. They
measured brain activity as people slept, waking them up at
regular intervals to ask whether they had been dreaming.
Then the scientists looked at what the brain had been
doing just before the sleepers woke up. When people
reported dreaming, parts of the back of the brain were
much more active—like the areas that are active in babies.
The prefrontal area, on the other hand, shuts down during
sleep.

A number of recent studies also explore the brain activity
that accompanies psychedelic experiences. A study
published last month in the journal Cell by David Olson of
the University of California, Davis, and colleagues looked at
how mind-altering chemicals affect synapses in rats. They
found that a wide range of psychedelic chemicals made
the brain more plastic, leading brain cells to grow more
connections. It’s as if the cells went back to their malleable,
infantile state.

In other words, the brains of dreamers and trippers looked
more like those of young children than those of focused,
hard-working adults. In a way, this makes sense. When you
have a dream or a psychedelic experience, it’s hard to focus
your attention or control your thoughts—which is why
reporting these experiences is notoriously difficult. At the
same time, when you have a vivid nightmare or a mind-
expanding experience, you certainly feel more conscious
than you are in boring, everyday life.

In the same way, an infant’s consciousness may be less
focused and controlled than an adult’s but more vivid and
immediate, combining perception, memory and
imagination. Being a baby may be both stranger and more
intense than we think.

WHO'S MOST AFRAID TO DIE? A SURPRISE

Why am I afraid to die? Maybe it’s the “I” in that sentence. It
seems that I have a single constant self—the same “I” who
peered out from my crib is now startled to see my aging
face in the mirror 60 years later. It’s my inner observer, chief
executive officer and autobiographer. It’s terrifying to think
that this “I” will just disappear.

But what if this “I” doesn’t actually exist? For more than
2,000 years, Buddhist philosophers have argued that the
self is an illusion, and many contemporary philosophers
and psychologists agree. Buddhists say this realization
should make us fear death less. The person I am now will
be replaced by the person I am in five years, anyway, so
why worry if she vanishes for good?

A recent paper in the journal Cognitive Science has an
unusual combination of authors. A philosopher, a scholar
of Buddhism, a social psychologist and a practicing
Tibetan Buddhist tried to find out whether believing in
Buddhism really does change how you feel about your
self—and about death.

The philosopher Shaun Nichols of the University of Arizona
and his fellow authors studied Christian and nonreligious
Americans, Hindus and both everyday Tibetan Buddhists
and Tibetan Buddhist monks. Among other questions, the
researchers asked participants about their sense of self—
for example, how strongly they believed they would be the
same five years from now. Religious and nonreligious
Americans had the strongest sense of self, and the
Buddhists, especially the monks, had the least.

In previous work, Prof. Nichols and other colleagues
showed that changing your sense of self really could make
you act differently. A weaker sense of self made you more
likely to be generous to others. The researchers in the new
study predicted that the Buddhists would be less frightened
of death.

The results were very surprising. Most participants
reported about the same degree of fear, whether or not
they believed in an afterlife. But the monks said that they
were much more afraid of death than any other group did.

Why would this be? The Buddhist scholars themselves say
that merely knowing there is no self isn’t enough to get rid
of the feeling that the self is there. Neuroscience supports
this idea. Our sense of self, and the capacities like
autobiographical memory and long-term planning that go
with it, activates something called the default mode
network—a set of connected brain areas. Long-term
meditators have a less-active default mode network, but it
takes them years to break down the idea of the self, and
the monks in this study weren’t expert meditators.

Another factor in explaining why these monks were more
afraid of death might be that they were trained to think
constantly about mortality. The Buddha, perhaps
apocryphally, once said that his followers should think
about death with every breath. Maybe just ignoring death is
a better strategy.

There may be one more explanation for the results. Our
children and loved ones are an extension of who we are.
Their survival after we die is a profound consolation, even
for atheists. Monks give up those intimate attachments.

I once advised a young man at Google headquarters who
worried about mortality. He agreed that a wife and children
might help, but even finding a girlfriend was a lot of work.
He wanted a more efficient tech solution—like not dying.
But maybe the best way of conquering both death and the
self is to love somebody else.

CURIOSITY IS A NEW POWER IN ARTIFICIAL INTELLIGENCE

Suddenly, computers can do things that seemed
impossible not so many years ago, from mastering the
game of Go and acing Atari games to translating text and
recognizing images. The secret is that these programs
learn from experience. The great artificial-intelligence
boom depends on learning, and children are the best
learners in the universe. So computer scientists are
starting to look to children for inspiration.

Everybody knows that young children are insatiably curious,
but I and other researchers in the field of cognitive
development, such as Laura Schulz at the Massachusetts
Institute of Technology, are beginning to show just how
that curiosity works. Taking off from these studies, the
computer scientists Deepak Pathak and Pulkit Agrawal
have worked with others at my school, the University of
California, Berkeley, to demonstrate that curiosity can help
computers to learn, too.

One of the most common ways that machines learn is
through reinforcement. The computer keeps track of when
a particular action leads to a reward—like a higher score in
a videogame or a winning position in Go. The machine tries
to repeat rewarding sequences of actions and to avoid
less-rewarding ones.

This technique still has trouble, however, with even simple
videogames such as Super Mario Bros.—a game that
children can master easily. One problem is that before you
can score, you need to figure out the basics of how Super
Mario works—the players jump over columns and hop over
walls. Simply trying to maximize your score won’t help you
learn these general principles. Instead, you have to go out
and explore the Super Mario universe.

Another problem with reinforcement learning is that
programs can get stuck trying the same successful
strategy over and over, instead of risking something new.
Most of the time, a new strategy won’t work better, but
occasionally it will turn out to be much more effective than
the tried-and-true one. You also need to explore to find that
out.

The same holds for real life, of course. When I get a new
smartphone, I use something like reinforcement learning: I
try to get it to do specific things that I’ve done many times
before, like call someone up. (How old school is that!) If the
call gets made, I stop there. When I give the phone to my
4-year-old granddaughter, she wildly swipes and pokes until
she has discovered functions that I didn’t even suspect
were there. But how can you build that kind of curiosity into
a computer?

Drs. Pathak and Agrawal have designed a program to use
curiosity in mastering videogames. It has two crucial
features to do just that. First, instead of just getting
rewards for a higher score, it’s also rewarded for being
wrong. The program tries to predict what the screen will
look like shortly after it makes a new move. If the
prediction is right, the program won’t make that move
again—it’s the same old same old. But if the prediction is
wrong, the program will make the move again, trying to get
more information. The machine is always driven to try new
things and explore possibilities.
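As a rough illustration of that idea, and not the researchers’ actual system, here is a short Python sketch in which the agent’s reward is its own prediction error: actions whose outcomes it already predicts well earn little, while surprising actions earn more and so keep getting chosen. The toy "world," the forward model and all the numbers are invented for this example.

import numpy as np

rng = np.random.default_rng(1)

# Toy world: each action produces a noisy outcome standing in for
# "what the screen looks like shortly after the move."
def world(action):
    effects = {"jump": 5.0, "run": 1.0, "wait": 0.0}
    return effects[action] + rng.normal(scale=0.1)

# Forward model: the agent's running guess of what each action will do.
predictions = {"jump": 0.0, "run": 0.0, "wait": 0.0}
counts = {action: 0 for action in predictions}

def curiosity_bonus(action, outcome):
    # Intrinsic reward = how wrong the prediction was.
    error = abs(outcome - predictions[action])
    # Then update the forward model toward what actually happened.
    counts[action] += 1
    predictions[action] += (outcome - predictions[action]) / counts[action]
    return error

# The agent keeps choosing whichever action it still predicts worst,
# so it explores everything before settling down.
recent_error = {action: 1.0 for action in predictions}   # optimistic start
for step in range(30):
    action = max(recent_error, key=recent_error.get)
    recent_error[action] = curiosity_bonus(action, world(action))

print({action: round(error, 3) for action, error in recent_error.items()})
# Errors shrink for every action: once a move is predictable, it stops being interesting.

In this toy run the curiosity reward dries up as soon as everything becomes predictable, which is why a fuller system also keeps the ordinary game score as a second source of reward.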

Another feature of the new program is focus. It doesn’t pay
attention to every unexpected pixel anywhere on the
screen. Instead, it concentrates on the parts of the screen
that can influence its actions, like the columns or walls in
Mario’s path. Again, this is a lot like a child trying out every
new action she can think of with a toy and taking note of
what happens, even as she ignores mysterious things
happening in the grown-up world. The new program does
much better than the standard reinforcement-learning
algorithms.

Super Mario is still a very limited world compared with the
rich, unexpected, unpredictable real world that every 4-year-
old has to master. But if artificial intelligence is really going
to compete with natural intelligence, more childlike
insatiable curiosity may help.

GRANDPARENTS: THE STORYTELLERS WHO BIND US


‘Grandmom, I love Mommy most, of course, but you do tell
the best stories—especially Odysseus and the Cyclops.”
This authentic, if somewhat mixed, review from my
grandson may capture a profound fact about human
nature. A new study by Michael Gurven and colleagues
suggests that grandparents really may be designed to pass
on the great stories to their grandchildren.

One of the great puzzles of human evolution is why we
have such a distinctive “life history.” We have much longer
childhoods than any other primate, and we also live much
longer, well past the age when we can fully pull our weight.
While people in earlier generations had a shorter life
expectancy overall, partly because many died in childhood,
some humans have always lived into their 60s and 70s.
Researchers find it especially puzzling that female humans
have always lived well past menopause. Our closest
primate relatives die in their 50s.

Perhaps, some anthropologists speculate, grandparents
evolved to provide another source of food and care for all
those helpless children. I’ve written in these pages about
what the anthropologist Kristen Hawkes of the University of
Utah has called “the grandmother hypothesis.” Prof.
Hawkes found that in forager cultures, also known as
hunter-gatherer societies, the food that grandmothers
produce makes all the difference to their grandchildren’s
survival.

In contrast, Dr. Gurven and his colleagues focus more on
how human beings pass on information from one
generation to another. Before there was writing, human
storytelling was one of the most important kinds of cultural
transmission. Could grandparents have adapted to help
that process along?

Dr. Gurven’s team, writing earlier this year in the journal
Evolution and Human Behavior, studied the Tsimane in
Amazonia, a community in the Amazon River basin who live
as our ancestors once did. The Tsimane, more than 10,000
strong, gather and garden, hunt and fish, without much
involvement in the market economy. And they have a rich
tradition of stories and songs. They have myths about
Dojity and Micha, creators of the Earth, with the timeless
themes of murder, adultery and revenge. They also sing
melancholy songs about rejected love (the blues may be a
universal part of human nature).

During studies of the Tsimane spread over a number of
years, Dr. Gurven and his colleagues conducted interviews
to find out who told the most stories and sang the most
songs, who was considered the best in each category and
who the audience was for these performances. The
grandparents, people from age 60 to 80, most frequently
came out on top. While only 5% of Tsimane aged 15 to 29
told stories, 44% of those aged 60 to 80 did. And the elders’
most devoted audiences were their much younger kin.
When the researchers asked the Tsimane where they had heard
their stories, 84% of the stories came from older relatives
other than parents, particularly grandparents.

This preference for grandparents may be tied to the


anthropological concept of “alternate generations.” Parents
may be more likely to pass on the practical skills of using a
machete or avoiding a jaguar, while their own parents pass
on the big picture of how a community understands the
world and itself. Other studies have found that relations
between grandparents and grandchildren tend to be more
egalitarian than the “I told you not to do that” relationship
between so many parents and children.

Grandparents may play a less significant cultural role in a


complex, mobile modern society. Modern pop stars and TV
showrunners are more likely to be millennial than
menopausal. But when they get the chance, grandmas and
grandpas still do what they’ve done across the ages—
turning the attention of children to the very important
business of telling stories and singing songs.

ARE BABIES ABLE TO SEE WHAT OTHERS FEEL?

When adults look out at other people, we have what


psychologists and philosophers call a “theory of mind”—
that is, we think that the people around us have feelings,
emotions and beliefs just as we do. And we somehow
manage to read complex mental states in their sounds and
movements.

But what do babies see when they look out at other people?
They know so much less than we do. It’s not hard to
imagine that, as we coo and mug for them, they only see
strange bags of skin stuffed into clothes, with two restless
dots at the top and a hole underneath that opens and
closes.

Our sophisticated grown-up understanding of other people


develops through a long process of learning and
experience. But babies may have more of a head start than
we imagine. A new study by Andrew Meltzoff and his
colleagues at the University of Washington, published in
January in the journal Developmental Science, finds that
our connection to others starts very early.

Dr. Meltzoff has spent many years studying the way that
babies imitate the expressions and actions of other people.
Imitation suggests that babies do indeed connect their own
internal feelings to the behavior of others. In the new study,
the experimenters looked at how this ability is reflected in
babies’ brains.
Studies with adults have shown that some brain areas
activate in the same way when I do something as when I
see someone else do the same thing. But, of course, adults
have spent many years watching other people and
experiencing their own feelings. What about babies?

The trouble is that studying babies’ brains is really hard.


The typical adult studies use an FMRI (functional magnetic
resonance imaging) machine: Participants have to lie
perfectly still in a very noisy metal tube. Some studies have
used electroencephalography to measure babies’ brain
waves, but EEG only tells you when a baby’s brain is active.
It doesn’t say where that brain activity is taking place.

Dr. Meltzoff and colleagues have been pioneers in using a


new technique called magnetoencephalography with very
young babies. Babies sit in a contraption that’s like a cross
between a car seat and an old-fashioned helmet hairdryer.
The MEG machine measures very faint and subtle
magnetic signals that come from the brain, using
algorithms to correct for a wriggling baby’s movements.

In this study, the experimenters used MEG with 71 very


young babies—only seven months old. They recorded
signals from a part of the brain called the “somatosensory”
cortex. In adults, this brain area is a kind of map of your
body. Sensations in different body parts activate different
“somatosensory” areas, which correspond to the
arrangement of the body: Hand areas are near arm areas,
leg areas are near foot areas.

One group of babies felt a small puff of air on their hand or


their foot. The brain activation pattern for hands and feet
turned out to be different, just as it is for grown-ups. Then
the experimenters showed other babies a video of an adult
hand or foot that was touched by a rod. Surprisingly, seeing
a touch on someone else’s hand activated the same brain
area as feeling a touch on their own hand, though more
faintly. The same was true for feet.

These tiny babies already seemed to connect their own


bodily feelings to the feelings of others. These may be the
first steps toward a fully-fledged theory of mind. Even
babies, it turns out, don’t just see bags of skin. We seem to
be born with the ability to link our own minds and the
minds of others.

WHAT TEENAGERS GAIN FROM FINE-TUNED SOCIAL RADAR

Figuring out why teenagers act the way they do is a


challenge for everybody, scientists as well as parents. For a
long time, society and science focused on adolescents’
problems, not on their strengths. There are good reasons
for this: Teens are at high risk for accidents, suicide, drug
use and overall trouble. The general perception was that
the teenage brain is defective in some way, limited by a
relatively undeveloped prefrontal cortex or overwhelmed by
hormones.

In the past few years, however, some scientists have begun


to think differently. They see adolescence as a time of both
risk and unusual capacities. It turns out that teens are
better than both younger children and adults at solving
some kinds of problems.

Teenagers seem to be especially adept at processing


social information, particularly social information about
their peers. From an evolutionary perspective, this makes a
lot of sense. Adolescence is, after all, when you leave the
shelter of your parents for the big world outside. A 2017
study by Maya Rosen and Katie McLaughlin of the
University of Washington and their colleagues, published in
the journal Developmental Science, is an important
contribution to this new way of thinking about teens.

Young children don’t have to be terribly sensitive to the way


that their parents feel, or even to how other children feel, in
order to survive. But for teenagers, figuring out other
people’s emotions becomes a crucial and challenging task.
Did he just smile because he likes me or because he’s
making fun of me? Did she just look away because she’s
shy or because she’s angry?

The researchers studied 54 children and teenagers, ranging


from 8 to 19 years old. They showed participants a series
of faces expressing different emotions. After one face
appeared, a new face would show up with either the same
expression or a different one. The participants had to say,
over the course of 100 trials, whether the new face’s
expression matched the old one or was different.

While this was going on, the researchers also used an FMRI
scanner to measure the participants’ brain activation. They
focused on “the salience network”—brain areas that light up
when something is important or relevant, particularly in the
social realm.

Early adolescents, aged 14 or so, showed more brain


activation when they saw an emotion mismatch than did
either younger children or young adults.

Is this sensitivity a good thing or bad? The researchers also


gave the participants a questionnaire to measure social
anxiety and social problems. They asked the children and
adolescents how well different sentences described them.
They would say, for example, whether “I’m lonely” or “I don’t
like to be with people I don’t know well” was a good
description of how they felt. In other studies, these self-
rating tests have turned out to be a robust measure of
social anxiety and adjustment in the real world.
The researchers found that the participants whose brains
reacted more strongly to the emotional expressions also
reported fewer social problems and less anxiety—they were less
likely to say that they were lonely or avoided strangers.
Having a brain that pays special attention to other people’s
emotions allows you to understand and deal with those
emotions better, and so to improve your social life.

The brains of young teenagers were especially likely to


react to emotion in this way. And, of course, this period of
transition to adult life is especially challenging socially. The
young adolescents’ increased sensitivity appears to be an
advantage in figuring out their place in the world. Rather
than being defective, their brains functioned in a way that
helped them deal with the special challenges of teenage
life.

THE SMART BUTTERFLY'S GUIDE TO REPRODUCTION

We humans have an exceptionally long childhood, generally


bear just one child at a time and work hard to take care of
our children. Is this related to our equally distinctive large
brains and high intelligence? Biologists say that, by and
large, the smarter species of primates and even birds
mature later, have fewer babies, and invest more in those
babies than do the dimmer species.

“Intelligence” is defined, of course, from a human


perspective. Plenty of animals thrive and adapt without a
large brain or developed learning abilities.

But how far in the animal kingdom does this relationship


between learning and life history extend? Butterflies are
about as different from humans as could be—laying
hundreds of eggs, living for just a few weeks and
possessing brains no bigger than the tip of a pen. Even by
insect standards, they’re not very bright. A bug-loving
biology teacher I know perpetually complains that foolish
humans prefer pretty but vapid butterflies to her brilliant
pet cockroaches.

But entomologist Emilie Snell-Rood at the University of


Minnesota and colleagues have found a similar
relationship between learning and life history in butterflies.
insects that are smarter have a longer period of immaturity
and fewer babies. The research suggests that these
humble creatures, which have existed for roughly 50 million
years, can teach us something about how to adapt to a
quickly changing world.

Climate change or habitat loss drives some animals to


extinction. But others alter the development of their bodies
or behavior to suit a changing environment,
demonstrating what scientists call “developmental
plasticity.” Dr. Snell-Rood wants to understand these fast
adaptations, especially those caused by human influence.
She has shown, for example, how road-salting has altered
the development of curbside butterflies.

Learning is a particularly powerful kind of plasticity.


Cabbage white butterflies, the nemesis of the veggie
gardener, flit from kale to cabbage to chard, in search of
the best host for their eggs after they hatch and larvae start
munching. In a 2009 paper in the American Naturalist, Dr.
Snell-Rood found that all the bugs start out with a strong
innate bias toward green plants such as kale. But some
adventurous and intelligent butterflies may accidentally
land on a nutritious red cabbage and learn that the red
leaves are good hosts, too. The next day those smart
insects will be more likely to seek out red, not just green,
plants.

In a 2011 paper in the journal Behavioral Ecology, Dr.
Snell-Rood showed that the butterflies who were better learners
also took longer to reach reproductive maturity, and they
produced fewer eggs overall. When she gave the insects a
hormone that sped up their development, so that they grew
up more quickly, they were worse at learning.

In a paper in the journal Animal Behaviour published this


year, Dr. Snell-Rood looked at another kind of butterfly
intelligence. The experimenters presented cabbage whites
with a choice between leaves that had been grown with
more or less fertilizer, and leaves that either did or did not
have a dead, carefully posed cabbage white pinned to
them.

Some of the insects laid eggs all over the place. But some
preferred the leaves that were especially nutritious. What’s
more, these same butterflies avoided leaves that were
occupied by other butterflies, where the eggs would face
more competition. The choosier butterflies, like the good
learners, produced fewer eggs overall. There was a trade-
off between simply producing more young and taking the
time and care to make sure those young survived.

In genetic selection, an organism produces many kinds of


offspring, and only the well-adapted survive. But once you
have a brain that can learn, even a butterfly brain, you can
adapt to a changing environment in a single generation.
That will ensure more reproductive success in the long run.

THE POWER OF PRETENDING: WHAT WOULD A HERO DO?

Sometime or other, almost all of us secretly worry that


we’re just impostors—bumbling children masquerading as
competent adults. Some of us may deal with challenges by
pretending to be a fictional hero instead of our
unimpressive selves. I vividly remember how channeling
Jane Austen’s Elizabeth Bennet got me through the
awkwardness of teen courtship. But can you really fake it
till you make it?

Two recent studies—by Rachel White of the University of


Pennsylvania, Stephanie Carlson of the University of
Minnesota and colleagues—describe what they call “The
Batman Effect.” Children who pretend that they are Batman
(or Dora the Explorer or other heroic figures) do better on
measures of self-control and persistence.

In the first study, published in 2015 in the journal


Developmental Science, the experimenters gave 48 5-year-
olds increasingly challenging problems that required them
to use their skills of control and self-inhibition. For
example, researchers might ask a child to sort cards
according to their color and then suddenly switch to sorting
them by shape. Between the ages of 3 and 7, children
gradually get better at these tasks.

The experimenter told some of the children to pretend to be


powerful fictional characters as they completed these
tasks. The children even put on costume props (like
Batman’s cape or Dora’s backpack) to help the pretending
along. The experimenter said, “Now, you’re Batman! In this
game, I want you to ask yourself, ‘Where does Batman think
the card should go?’ ” The pretenders did substantially
better than the children who tried to solve the task as
themselves.

In the second study, published last December in the journal


Child Development, the experimenters set up a task so
fiendish it might have come from the mind of the Joker.
They gave 4- and 6-year-old children a boring and
somewhat irritating task on the computer—pressing a
button when cheese appeared on the screen and
not pressing it when a cat appeared. The children also
received a tablet device with an interesting game on it.
The children were told that it was important for them to
finish the task on the computer but that, since it was so
boring, they could take a break when they wanted to play
the game on the tablet instead.

The children then received different kinds of instructions.


One group was told to reflect and ask themselves, “Am I
working hard?” Other children got to dress up like a favorite
heroic character—Batman, Dora and others. They were then
told to ask themselves, “Is Batman [or whoever they were
playing] working hard?” Once again, pretending helped even
the younger children to succeed. They spent more time on
the task and less on the distracting tablet.

Last month at a conference of the Cognitive Development


Society in Portland, Ore., Dr. Carlson discussed another
twist on this experiment. Was it just the distraction of
pretending that helped the children or something about
playing powerful heroes? She and her colleagues tried to
find out by having children pretend to be “Batman on a
terrible day.” They donned tattered capes and saw pictures
of a discouraged superhero—and they did worse on the
task than other children.

These studies provide a more complex picture of how
self-control and will power work. We tend to think of these
capacities as if they were intrinsic, as if some people just
have more control than others. But our attitude, what
psychologists call our “mind-set,” may be as important as
our abilities.

There is a longstanding mystery about why young children


pretend so much and what benefits such play provides.
The function of adult pretending—in fiction or drama—is
equally mysterious. In the musical “The King and I,” Oscar
Hammerstein II wrote that by whistling a happy tune, “when
I fool the people I fear, I fool myself as well.” Pretending, for
children and adults, may give us a chance to become the
people we want to be.

THE POTENTIAL OF YOUNG INTELLECT, RICH OR POOR

Inequality starts early. In 2015, 23% of American children


under 3 grew up in poverty, according to the Census
Bureau. By the time children reach first grade, there are
already big gaps, based on parents’ income, in academic
skills like reading and writing. The comparisons look even
starker when you contrast middle-class U.S. children and
children in developing countries like Peru.

Can schooling reverse these gaps, or are they doomed to


grow as the children get older? Scientists like me usually
study preschoolers in venues like university preschools and
science museums. The children are mostly privileged, with
parents who have given them every advantage and are
increasingly set on giving instruction to even the youngest
children. So how can we reliably test whether certain skills
are the birthright of all children, rich or poor?

My psychology lab at the University of California, Berkeley


has been trying to provide at least partial answers, and my
colleagues and I published some of the results in the Aug.
23 edition of the journal Child Development. Our earlier
research has found that young children are remarkably
good at learning. For example, they can figure out cause-
and-effect relationships, one of the foundations of
scientific thinking.

How can we ask 4-year-olds about cause and effect? We


use what we call “the blicket detector”—a machine that
lights up when you put some combinations of different-
shaped blocks on it but not others. The subjects
themselves don’t handle the blocks or the machine; an
experimenter demonstrates it for them, using combinations
of one and two blocks.

In the training phase of the experiment, some of the young


children saw a machine that worked in a straightforward
way—some individual blocks made it go and others didn’t.
The rest of the children observed a machine that worked in
a more unusual way—only a combination of two specific
blocks made it go. We also used the demonstration to train
two groups of adults.

Could the participants, children and adults alike, use the


training data to figure out how a new set of blocks worked?
The very young children did. If the training blocks worked
the unusual way, they thought that the new blocks would
also work that way, and they used that assumption to
determine which specific blocks caused the machine to
light up. But most of the adults didn’t get it—they stuck with
the obvious idea that only one block was needed to make
the machine run.
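
One way to see how such a judgment could work is as Bayesian hypothesis comparison. The sketch below is an illustration only, not the lab's actual model: the three blocks, the training demonstrations, and the simplifying rule that conjunctive hypotheses involve at least two blocks are all assumptions made up for the example.

# An illustrative Bayesian sketch: does the machine follow a "single blocks
# work" (disjunctive) rule or an "only a specific pair works" (conjunctive) rule?
from itertools import chain, combinations

blocks = ("A", "B", "C")
# Made-up training demonstrations: (blocks placed on the machine, did it light up?)
training = [(("A",), False), (("B",), False), (("A", "B"), True), (("A", "C"), False)]

def powerset(xs):
    return chain.from_iterable(combinations(xs, r) for r in range(1, len(xs) + 1))

def predicts(form, special, placed):
    placed = set(placed)
    if form == "disjunctive":          # any single special block activates the machine
        return bool(placed & special)
    # conjunctive: the machine lights up only when the whole special set is present
    return len(special) >= 2 and special <= placed

# A hypothesis is a rule form plus a candidate set of special blocks.
hypotheses = [(form, frozenset(s)) for form in ("disjunctive", "conjunctive")
              for s in powerset(blocks)]
posterior = {h: 1.0 if all(predicts(h[0], h[1], placed) == lit
                           for placed, lit in training) else 0.0
             for h in hypotheses}
total = sum(posterior.values()) or 1.0
posterior = {h: p / total for h, p in posterior.items() if p}
print(posterior)   # only the conjunctive {A, B} hypothesis survives this evidence

Having inferred that this machine follows a "pairs only" rule, a learner can carry the same rule form over to a brand-new set of blocks, which is roughly the leap the 4-year-olds made and most adults did not.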

In the Child Development study of 290 children, we set out


to see what less-privileged children would do. We tested
4-year-old Americans in preschools for low-income children
run by the federal Head Start program, which also focuses
on health, nutrition and parent involvement. These children
did worse than middle-class children on vocabulary tests
and “executive function”—the ability to plan and focus. But
the poorer children were just as good as their wealthier
counterparts at finding the creative answer to the cause-
and-effect problems.

Then, in Peru, we studied 4-year-olds in schools serving


families who mostly have come from the countryside and
settled in the outskirts of Lima, and who have average
earnings of less than $12,000 a year. These children also
did surprisingly well. They solved even the most difficult
tasks as well as the middle-class U.S. children (and did
better than adults in Peru or the U.S.).

Though the children we tested weren’t from wealthy


families, their parents did care enough to get them into
preschool. We didn’t look at how children with less social
support would do. But the results suggest that you don’t
need middle-class enrichment to be smart. All children may
be born with the ability to think like creative scientists. We
need to make sure that those abilities are nurtured, not
neglected.

DO MEN AND WOMEN HAVE DIFFERENT BRAINS?

A few weeks ago, James Damore lost his job at Google


after circulating a memo asserting, among other things,
that there are major personality and behavioral differences,
on average, between the sexes, based on biology. People
often think that we can distinguish between biological and
cultural causes of behavior and then debate which is more
important. An emerging scientific consensus suggests that
both sides of this debate are misguided.

Everything about our minds is both biological and cultural,


the result of complex, varied, multidirectional, cascading
interactions among genes and environments that we are
only just beginning to understand. For scientists, the key
question is exactly how differences in behavior, to the
extent that they exist, emerge and what they imply about
future behavior.

Daphna Joel of Tel Aviv University, a leading researcher


studying sex differences in the brain, summarizes new
research on these questions in recent articles (with
colleagues) published in Trends in Cognitive Sciences and
Philosophical Transactions of the Royal Society B. The
picture she draws is very different from our everyday
notions of how sex differences work.
When we think about brain and behavioral differences, we
tend to use the model of physical sex differences, such as
breasts and beards. These physical differences result from
complex interactions among genes, hormones and
environments, but in general, the specific features are
correlated: There is a male body type and a female one.
Breasts and uteruses go together, and they rarely go with
beards. Once we’re past puberty, changes in the
environment, by and large, will have little effect on the
distribution of those features.

Scientists have found that, on average, men and women do


differ with respect to some brain and behavioral features,
so it’s tempting to think that those differences are like the
physical differences—that there is a typical “male” or
“female” brain, and then some brains in the middle. As it
turns out, however, brains don’t follow this model.

Dr. Joel reports on research that she and her colleagues


did on more than a thousand brains, looking at data on
both structure and function. On average, Brain Area 1 may
be larger in men, while Brain Area 2 is larger in women. But
any individual man or woman may have a large Area 1 and
a small Area 2 or vice versa. There is little correlation
among the features. Brain differences have what they call a
“mosaic” pattern, and each of us is a mashup of different
“male” and “female” features. In fact, Dr. Joel found that
almost half the brains in the sample had a very “male”
version of one feature together with a very “female” version
of another.
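
A small simulation helps show why modest average differences plus weak correlations produce mosaics rather than two types. The group shift, the number of features and the "outer thirds" cutoff below are all invented for illustration; they are not Dr. Joel's data or her exact definition of "male-end" and "female-end" features.

# A toy simulation: small average group differences on weakly correlated
# features produce "mosaic" individuals rather than two distinct types.
import numpy as np

rng = np.random.default_rng(2)
n_people, n_features = 1000, 10
group = rng.integers(0, 2, size=n_people)          # two hypothetical groups
shift = 0.4                                        # small average group difference
features = rng.normal(size=(n_people, n_features)) + shift * group[:, None]

# Call a score "high-end" or "low-end" if it falls in the outer thirds of the
# pooled distribution, a rough stand-in for the paper's feature categories.
lo, hi = np.quantile(features, [1 / 3, 2 / 3])
has_high = (features > hi).any(axis=1)
has_low = (features < lo).any(axis=1)

mosaic = has_high & has_low
print(f"{100 * mosaic.mean():.0f}% of simulated people mix features from both extremes")

Because the features vary almost independently within a person, knowing that one feature sits at one extreme says very little about where the rest will fall, which is the pattern Dr. Joel reports in real brains.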

Dr. Joel and colleagues also looked at 25 behaviors that


differ on average between men and women—such as
playing videogames, scrapbooking, being interested in
cosmetics and closely following sports—across 5,000
people. The scientists found a similar mosaic pattern of
behaviors: Again, the participants didn’t fall into two clear
types.

In addition, they reviewed various studies showing that


average sex differences can quickly be altered or even
reversed by changes in the environment. In rats, for
example, females typically have less-dense receptors in the
dorsal hippocampus, which is involved in memory, than do
males. But after both sexes experienced a few weeks of
mild stress, the pattern reversed: Now the males had less-
dense receptors than the females. The reasons for this
aren’t entirely clear yet. But if we see a particular sex
pattern in one environment—a standard 21st-century
university classroom or company, for instance—we have no
reason to believe that we will see the same sex pattern in a
different environment.

And, of course, one of the distinctive features of humans is


that we create new environments all the time. It’s
impossible to know beforehand just how an unprecedented
new environment will reshape our traits or just which
combinations of traits will turn out to be useful. In
evolution, as in policy, diversity is a good way to deal with
change.

WHALES HAVE COMPLEX CULTURE, TOO

How does a new song go viral, replacing the outmoded hits


of a few years ago? How are favorite dishes passed on
through the generations, from grandmother to grandchild?
Two new papers in the Proceedings of the National
Academy of Sciences examine the remarkable and
distinctive ability to transmit culture. The studies describe
some of the most culturally sophisticated beings on Earth.

Or, to be more precise, at sea. Whales and other cetaceans,


such as dolphins and porpoises, turn out to have more
complex cultural abilities than any other animal except us.

For a long time, people thought that culture was uniquely


human. But new studies show that a wide range of animals,
from birds to bees to chimpanzees, can pass on
information and behaviors to others. Whales have
especially impressive kinds of culture, which we are only
just beginning to understand, thanks to the phenomenal
efforts of cetacean specialists. (As a whale researcher
once said to me with a sigh, “Just imagine if each of your
research participants was the size of a 30-ton truck.”)

One of the new studies, by Ellen Garland of the University of


St. Andrews in Scotland and her colleagues, looked at
humpback whale songs. Only males sing them, especially
in the breeding grounds, which suggests that music is the
food of love for cetaceans, too—though the exact function
of the songs is still obscure.

The songs, which can last for as long as a half-hour, have a


complicated structure, much like human language or
music. They are made up of larger themes constructed
from shorter phrases, and they have the whale equivalent
of rhythm and rhyme. Perhaps that’s why we humans find
them so compelling and beautiful.

The songs also change as they are passed on, like human
songs. All the male whales in a group sing the same song,
but every few years the songs are completely transformed.
Researchers have trailed the whales across the Pacific,
recording their songs as they go. The whales learn the new
songs from other groups of whales when they mingle in the
feeding grounds. But how?

The current paper looked at an unusual set of whales that


produced rare hybrid songs—a sort of mashup of songs
from different groups. Hybrids showed up as the whales
transitioned from one song to the next. The hybrids
suggested that the whales weren’t just memorizing the
songs as a single unit. They were taking the songs apart
and putting them back together, creating variations using
the song structure.

The other paper, by Hal Whitehead of Dalhousie University


in Halifax, Nova Scotia, looked at a different kind of cultural
transmission in another species, the killer whale. The
humpback songs spread horizontally, passing from one
virile young thing to the next, like teenage fashions. But the
real power of culture comes when caregivers can pass on
discoveries to the next generation. That sort of vertical
transmission is what gives human beings their edge.

Killer whales stay with their mothers for as long as the


mothers live, and mothers pass on eating traditions. In the
same patch of ocean, you will find some whales that only
eat salmon and other whales that only eat mammals, and
these preferences are passed on from mother to child.

Even grandmothers may play a role. Besides humans, killer


whales are the only mammal whose females live well past
menopause. Those old females help to ensure the survival
of their offspring, and they might help to pass on a
preference for herring or shark to their grandchildren, too.
(That may be more useful than my grandchildren’s
legacy—a taste for Montreal smoked meat and bad Borscht
Belt jokes.)

Dr. Whitehead argues that these cultural traditions may


even lead to physical changes. As different groups of
whales become isolated from each other, the salmon
eaters in one group and the mammal eaters in another,
there appears to be a genetic shift affecting things such as
their digestive abilities. The pattern should sound familiar:
It’s how the cultural innovation of dairy farming led to the
selection of genes for lactose-tolerance in humans. Even in
whales, culture and nature are inextricably entwined.

HOW TO GET OLD BRAINS TO THINK LIKE YOUNG ONES

As the saying goes, old dogs—or mice or monkeys or


people—can’t learn new tricks. But why? Neuroscientists
have started to unravel the brain changes that are
responsible. And as a new paper in the journal Science
shows, they can even use these research findings to
reverse the process. Old mice, at least, really can go back
to learning like young ones.

The new study builds on classic work done by Michael


Merzenich at the University of California, San Francisco,
and colleagues. In the early 2000s they recorded the
electrical activity in brain cells and discovered that young
animals’ brains would change systematically when they
repeatedly heard something new. For instance, if a baby
monkey heard a new sound pattern many times, her
neurons (brain cells) would adjust to respond more to that
sound pattern. Older monkeys’ neurons didn’t change in the
same way.

At least part of the reason for this lies in neurotransmitters,


chemicals that help to connect one neuron to another.
Young animals have high levels of “cholinergic”
neurotransmitters that make the brain more plastic, easier
to change. Older animals start to produce inhibitory
chemicals that counteract the effect of the cholinergic
ones. They actively keep the brain from changing.

So an adult brain not only loses its flexibility but


suppresses it. This process may reflect the different
agendas of adults and children. Children explore; adults
exploit.
From an evolutionary perspective, childhood is an
adaptation designed to let animals learn. Baby animals get
a protected time when all they have to do is learn, without
worrying about actually making things happen or getting
things done. Adults are more focused on using what they
already know to act effectively and quickly. Inhibitory
chemicals may help in this process. Nature often works
according to the maxim, “If it ain’t broke, don’t fix it.” In this
case, there’s no need to change a brain that’s already
working well.

In the new research, Jay Blundon and colleagues at St.


Jude Children’s Research Hospital in Memphis, Tenn., tried
to restore early-learning abilities to adult mice. As in the
earlier experiments, they exposed the mice to a new sound
and recorded whether their neurons changed in response.
But this time the researchers tried making the adult mice
more flexible by keeping the inhibitory brain chemicals
from influencing the neurons.

In some studies, they actually changed the mouse genes


so that the animals no longer produced the inhibitors in the
same way. In others, they injected other chemicals that
counteracted the inhibitors. (Caffeine seems to work in this
way, by counteracting inhibitory neurotransmitters. That’s
why coffee makes us more alert and helps us to learn.)

In all of these cases in the St. Jude study, the adult brains
started to look like the baby brains. When the researchers
exposed the altered adult mice to a new sound, their
neurons responded differently, like babies’ neurons. The
mice got better at discriminating among the sounds, too.
The researchers also reversed the process, by getting
young brains to produce the inhibitory chemicals—and the
baby mice started acting like the adults.

The researchers suggest that these results may help in


some disorders that come with aging. But should we all try
to have childlike brains, perpetually sensitive to anything
new? Maybe not, or at least not all of the time. There may
be a tension between learning and acting, and adult brain
chemistry may help us to focus and ignore distractions.

Babies of any species are surrounded by a new world, and


their brain chemistry reflects that. Being a baby is like
discovering love on your first visit to Paris after three
double espressos. It’s a great way to be in some ways, but
you might wake up crying at 3 in the morning. There is
something to be said for grown-up stability and serenity.

WHAT THE BLIND SEE (AND DON'T) WHEN GIVEN SIGHT

In September 1678, a brilliant young Irish scientist named


William Molyneux married the beautiful Lucy Domville. By
November she had fallen ill and become blind, and the
doctors could do nothing for her. Molyneux reacted by
devoting himself to the study of vision.

He also studied vision because he wanted to resolve some


big philosophical issues: What kinds of knowledge are we
born with? What is learned? And does that learning have to
happen at certain stages in our lives? In 1688 he asked the
philosopher John Locke: Suppose someone who was born
blind suddenly regained their sight. What would they
understand about the visual world?

In the 17th century, Molyneux’s question was science


fiction. Locke and his peers enthusiastically debated and
speculated about the answer, but there was no way to
actually restore a blind baby’s sight. That’s no longer true
today. Some kinds of congenital blindness, such as
congenital cataracts, can be cured.

More than 300 years after Molyneux, another brilliant young


scientist, Pawan Sinha of the Massachusetts Institute of
Technology, has begun to find answers to his predecessor’s
questions. Dr. Sinha has produced a substantial body of
research, culminating in a paper last month in the
Proceedings of the National Academy of Sciences.

Like Molyneux, he was moved by both philosophical


questions and human tragedy. When he was growing up,
Dr. Sinha saw blind children begging on the streets of New
Delhi. So in 2005 he helped to start Project Prakash, from
the Sanskrit word for light. Prakash gives medical attention
to blind children and teenagers in rural India. To date, the
project has helped to treat more than 1,400 children,
restoring sight to many.

Project Prakash has also given scientists a chance to


answer Molyneux’s questions: to discover what we know
about the visual world when we’re born, what we learn and
when we have to learn it.

Dr. Sinha and his colleagues discovered that some abilities


that might seem to be learned show up as soon as children
can see. For example, consider the classic Ponzo visual
illusion. When you see two equal horizontal lines drawn on
top of a perspective drawing of receding railway ties, the
top line will look much longer than the bottom one. You
might have thought that illusion depends on learning about
distance and perspective, but the newly sighted children
immediately see the lines the same way.

On the other hand, some basic visual abilities depend more


on experience at a critical time. When congenital cataracts
are treated very early, children tend to develop fairly good
visual acuity—the ability to see fine detail. Children who are
treated much later don’t tend to develop the same level of
acuity, even after they have had a lot of visual experience.
In the most recent study, Dr. Sinha and colleagues looked
at our ability to tell the difference between faces and other
objects. People are very sensitive to faces; special brain
areas are dedicated to face perception, and babies can
discriminate pictures of faces from other pictures when
they are only a few weeks old.

The researchers studied five Indian children who were part


of the Prakash project, aged 9 to 17, born blind but given
sight. At first they couldn’t distinguish faces from similar
pictures. But over the next few months they learned the
skill and eventually they did as well as sighted children. So
face detection had a different profile from both visual
illusions and visual acuity—it wasn’t there right away, but it
could be learned relatively quickly.

The moral of the story is that the right answer about nature
versus nurture is…it’s complicated. And that sometimes, at
least, searching for the truth can go hand-in-hand with
making the world a better place.

HOW MUCH DO TODDLERS LEARN FROM PLAY?

Any preschool teacher will tell you that young children learn
through play, and some of the best known preschool
programs make play central, too. One of the most famous
approaches began after World War II around the northern
Italian city of Reggio Emilia and developed into a world-
wide movement. The Reggio Emilia programs, as well as
other model preschools like the Child Study Centers at
Berkeley and Yale, encourage young children to freely
explore a rich environment with the encouragement and
help of attentive adults.

The long-term benefits of early childhood education are


increasingly clear, and more states and countries are
starting preschool programs. But the people who make
decisions about today’s preschool curricula often have
more experience with elementary schools. As the early-
childhood education researcher Erika Christakis details in
her book “The Importance of Being Little,” the result is more
pressure to make preschools like schools for older
students, with more school work and less free play.

Is play really that important?

An April study in the journal Developmental Psychology by


Zi Sim and Fei Xu of the University of California, Berkeley, is
an elegant example of a new wave of play research. The
researchers showed a group of 32 children, aged 2 and 3,
three different machines and blocks of varying shapes and
colors. The researchers showed the children that putting
some blocks, but not others, on the machines would make
them play music.

For half the children, the machines worked on a color rule—


red blocks made the red machine go, for instance, no
matter what shape they were. For the other children, the
devices worked on a shape rule, so triangular blocks, say,
made the triangle-shaped machine go.

Both sets of children then encountered a new machine,


orange and L-shaped, and a new set of blocks. The toddlers
trained with the color rule correctly used the orange block,
while those trained with the shape rule chose the L-shaped
block.

Next, the experimenter showed a different set of 32


toddlers the blocks and the machines and demonstrated
that one block made one machine play music, without any
instruction about the color or shape rules. Then she said,
“Oh no! I just remembered that I have some work to do.
While I’m doing my work, you can play with some of my
toys!” The experimenter moved to a table where she
pretended to be absorbed by work. Five minutes later she
came back.

As you might expect, the toddlers had spent those five


minutes getting into things—trying different blocks on the
machines and seeing what happened. Then the
experimenter gave the children the test with the orange
L-shaped machine. Had they taught themselves the rules?
Yes, the toddlers had learned the abstract color or shape
rules equally well just by playing on their own.

It’s difficult to systematically study something as


unpredictable as play. Telling children in a lab to play
seems to turn play into work. But clever studies like the one
in Developmental Psychology are starting to show
scientifically that children really do learn through play.

The inspirational sayings about play you find on the


internet—“play is the work of childhood” or “play is the best
form of research,” for example—aren’t just truisms. They
may actually be truths.

THE SCIENCE OF 'I WAS JUST FOLLOWING ORDERS'

There is no more chilling wartime phrase than “I was just


following orders.” Surely, most of us think, someone who
obeys a command to commit a crime is still acting
purposely, and following orders isn’t a sufficient excuse.
New studies help to explain how seemingly good people
come to do terrible things in these circumstances: When
obeying someone else, they do indeed often feel that they
aren’t acting intentionally.

Patrick Haggard, a neuroscientist at University College


London, has been engaged for years in studying our
feelings of agency and intention. But how can you measure
them objectively? Asking people to report such an elusive
sensation is problematic. Dr. Haggard found another way.
In 2002 he discovered that intentional action has a
distinctive but subtle signature: It warps your sense of
time.

People can usually perceive the interval between two


events quite precisely, down to milliseconds. But when you
act intentionally to make something happen—say, you
press a button to make a sound play—your sense of time is
distorted. You think that the sound follows your action
more quickly than it actually does—a phenomenon called
“intentional binding.” Your sense of agency somehow pulls
the action and the effect together.

This doesn’t happen if someone else presses your finger to


the button or if electrical stimulation makes your finger
press down involuntarily. And this distinctive time signature
comes with a distinctive neural signature too.

More recent studies show that following instructions can at


times look more like passive, involuntary movement than
like willed intentional action. In the journal Psychological
Science last month, Peter Lush of the University of Sussex,
together with colleagues including Dr. Haggard, examined
hypnosis. Hypnosis is puzzling because people produce
complicated and surely intentional actions—for example,
imitating a chicken—but insist that they were involuntary.

The researchers hypnotized people and then suggested


that they press a button making a sound. The hypnotized
people didn’t show the characteristic time-distortion
signature of agency. They reported the time interval
between the action and the sound accurately, as if
someone else had pressed their finger down. Hypnosis
really did make the actions look less intentional.

In another study, Dr. Haggard and colleagues took off from


the famous Milgram experiments of the 1960s. Social
psychologist Stanley Milgram discovered that ordinary
people were willing to administer painful shocks to
someone else simply because the experimenter told them
to. In Dr. Haggard’s version, reported in the journal Current
Biology last year, volunteers did the experiment in pairs. If
they pressed a button, a sound would play, the other person
would get a brief but painful shock and they themselves
would get about $20; each “victim” later got a chance to
shock the aggressor.

Sometimes the participants were free to choose whether or


not to press the button, and they shocked the other person
about half the time. At other times the experimenter told
the participants what to do.

In the free-choice trials, the participants showed the usual


“intentional binding” time distortion: They experienced the
task as free agents. Their brain activity, recorded by an
electroencephalogram, looked intentional too.

But when the experimenter told participants to shock the


other person, they did not show the signature of intention,
either in their time perception or in their brain responses.
They looked like people who had been hypnotized or whose
finger was moved for them, not like people who had set out
to move their finger themselves. Following orders was
apparently enough to remove the feeling of free will.

These studies leave some big questions. When people


follow orders, do they really lose their agency or does it just
feel that way? Is there a difference? Most of all, what can
we do to ensure that this very human phenomenon doesn’t
lead to more horrific inhumanity in the future?

HOW MUCH SCREEN TIME IS SAFE FOR TEENS?


Almost every time I give a talk, somebody asks me how
technology affects children. What starts out as a question
almost always turns into an emphatic statement that
technology is irreparably harming children’s development.
The idea that computers, smartphones and videogames
gravely harm young people runs deep in our culture.

I always give these worried parents the same answer: We


won’t know for sure until we do careful, rigorous, long-term
research. But the evidence that we already have, as
opposed to anecdote and speculation, is reassuring.

A new paper in the journal Psychological Science by two


British researchers, Andrew K. Przybylski at the University
of Oxford and Netta Weinstein at the University of Cardiff,
adds to the increasing number of studies suggesting that
fears about technology are overblown. It reports on a study
of more than 120,000 15-year-olds from the national pupil
database of the U.K.’s Department for Education.

Anything you do in excess can be harmful. Things like


smoking are harmful at any dose and become more so as
doses rise. With smoking the advice is simple: Don’t. It’s
different with eating, where to avoid harm we need to
follow the Goldilocks principle: not too much or too little,
but just right.

Researchers designed the new study to find out whether


digital screen use was more like smoking or eating. They
also set up the study to figure out what “too much” or “too
little” might be, and how big an effect screen time has
overall.

The questionnaire asked the participants to say how often


they had experienced 14 positive mental states, like
thinking clearly, solving problems well and feeling
optimistic, energetic or close to other people. The
questions covered the preceding two weeks, and replies
were on a five-point scale ranging from none of the time to
all of the time. The study used the teenagers’ own
reports—a limitation—but their individual responses
seemed to cohere, suggesting that they were generally
accurate. Teens who reported getting less sleep, for
instance, had lower overall scores for their mental well-
being.

The teens independently reported how many hours they


spent each day playing videogames, watching
entertainment, using the computer for other reasons and
using a smartphone screen. The researchers recorded the
teens’ gender and ethnicity and used their postal codes to
estimate their socioeconomic status.

The researchers then looked at the relationship between


screen time and the answers to the questionnaire. After
controlling for factors like gender, ethnicity and class, they
found a Goldilocks effect. Up to a certain point screen time
either positively correlated with mental well-being or
showed no relationship to it. This fits with other studies
showing that teens use computers and phones as a way to
connect with other people.

The tipping point varied with how the children used their
screens and whether they did it on weekdays or weekends,
but the acceptable amount was fairly substantial—about
1½ to two hours a day of smartphone and computer use
and three to four hours a day of videogames and
entertainment. The teenagers often reported doing several
of these things at once, but they clearly spent a fair amount
of time in front of screens. When their screen time went
beyond these bounds, it negatively correlated with mental
well-being.

What’s more, screen time accounted for less than 1


percentage point of the variability in mental well-being.
That’s less than a third as much as other factors like eating
breakfast or getting enough sleep.
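
The "Goldilocks" shape the researchers describe is, statistically, an inverted U. The snippet below is only a schematic with invented numbers, not the study's actual analysis (which also controlled for gender, ethnicity and class): it fits well-being as a quadratic function of screen hours and reads off the tipping point and the share of variability involved.

# A schematic sketch with made-up data: detect an inverted-U ("Goldilocks")
# relationship by fitting a quadratic and locating its peak.
import numpy as np

rng = np.random.default_rng(1)
hours = rng.uniform(0, 8, size=5000)          # hypothetical daily screen hours
# Hypothetical well-being scores: rise gently, peak, then decline, plus a lot of noise.
wellbeing = 47 + 0.3 * hours - 0.08 * hours**2 + rng.normal(scale=6, size=hours.size)

c2, c1, c0 = np.polyfit(hours, wellbeing, deg=2)   # wellbeing ~ c2*h^2 + c1*h + c0
tipping_point = -c1 / (2 * c2)                     # vertex of the fitted parabola

predicted = np.polyval([c2, c1, c0], hours)
r_squared = 1 - np.var(wellbeing - predicted) / np.var(wellbeing)

print(f"fitted curve peaks near {tipping_point:.1f} hours a day")
print(f"screen time accounts for about {100 * r_squared:.1f}% of the variability")

A negative quadratic coefficient with a peak inside the observed range is what distinguishes an "eating-like" dose response from a "smoking-like" one, where more is simply worse.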

There are still lots of reasons for worrying about your


teenager, of course. They’re at special risk for car
accidents, suicide and gun violence. But, with a little
common sense, screen time may not be among those
perils.

Appeared in the March 18, 2017, print edition as 'The Goldilocks Solution for Teens and Their Screens.'

WHEN CHILDREN BEAT ADULTS AT SEEING THE WORLD

A few years ago, in my book “The Philosophical Baby,” I


speculated that children might actually be more conscious,
or at least more aware of their surroundings, than adults.
Lots of research shows that we adults have a narrow
“spotlight” of attention. We vividly experience the things
that we focus on but are remarkably oblivious to everything
else. There’s even a term for it: “inattentional blindness.” I
thought that children’s consciousness might be more like a
“lantern,” illuminating everything around it.

When the book came out, I got many fascinating letters


about how children see more than adults. A store detective
described how he would perch on an upper balcony
surveying the shop floor. The grown-ups, including the
shoplifters, were so focused on what they were doing that
they never noticed him. But the little children, trailing
behind their oblivious parents, would glance up and wave.

Of course, anecdotes and impressions aren’t scientific


proof. But a new paper in press in the journal Psychological
Science suggests that the store detective and I just might
have been right.
One of the most dramatic examples of the adult spotlight is
“change blindness.” You can show people a picture,
interrupt it with a blank screen, and then show them the
same picture with a change in the background. Even when
you’re looking hard for the change, it’s remarkably difficult
to see, although once someone points it out, it seems
obvious. You can see the same thing outside the lab. Movie
directors have to worry about “continuity” problems in their
films because it’s so hard for them to notice when
something in the background has changed between takes.

To study this problem, Daniel Plebanek and Vladimir


Sloutsky at Ohio State University tested how much children
and adults notice about objects and how good they are at
detecting changes. The experimenters showed a series of
images of green and red shapes to 34 children, age 4 and 5,
and 35 adults. The researchers asked the participants to
pay attention to the red shapes and to ignore the green
ones. In the second part of the experiment, they showed
another set of images of red and green shapes to
participants and asked: Had the shapes remained the same
or were they different?

Adults were better than children at noticing when the red


shapes had changed. That’s not surprising: Adults are
better at focusing their attention and learning as a result.
But the children beat the adults when it came to the green
shapes. They had learned more about the unattended
objects than the adults and noticed when the green shapes
changed. In other words, the adults only seemed to learn
about the object in their attentional spotlight, but the
children learned about the background, too.

We often say that young children are bad at paying


attention. But what we really mean is that they’re bad at not
paying attention, that they don’t screen out the world as
grown-ups do. Children learn as much as they can about
the world around them, even if it means that they get
distracted by the distant airplane in the sky or the speck of
paper on the floor when you’re trying to get them out the
door to preschool.

Grown-ups instead focus and act effectively and swiftly,


even if it means ignoring their surroundings. Children
explore, adults exploit. There is a moral here for adults, too.
We are often so focused on our immediate goals that we
miss unexpected developments and opportunities.
Sometimes by focusing less, we can actually see more.

So if you want to expand your consciousness, you can try


psychedelic drugs, mysticism or meditation. Or you can
just go for a walk with a 4-year-old.

FLYING HIGH: RESEARCH UNVEILS BIRDS' LEARNING POWER

Shortly after I arrived at Oxford as a graduate student,


intrigued by the new culture around me, I faced a very
English whodunit. I had figured out how to get glass bottles
of milk delivered to my doorstep—one of the exotic old-
fashioned grace-notes of English domestic life. But every
morning I would discover little holes drilled through the foil
lids of the bottles.

Waking up early one day to solve the mystery, I saw a pretty


little English songbird, the Great Tit, using its beak to steal
my cream. It turned out that around 1920, Great Tits had
learned how to drill for cream, and for the next 50 years the
technique spread throughout England. (Why the dairies
never got around to changing the tops remains an English
mystery.)

How did these birds learn to steal? Could one bird have
taught the others, like an ornithological version of the Artful
Dodger in “Oliver Twist”? Until very recently, biologists
would have assumed that each bird independently
discovered the cream-pinching trick. Cultural innovation
and transmission were the preserve of humans, or at least
primates.

But new studies show just how many kinds of animals can
learn from others—a topic that came up at the recent
colloquium “The Extension of Biology Through Culture,”
organized by the National Academy of Sciences in Irvine,
Calif. There we heard remarkable papers that showed the
extent of cultural learning in animals ranging from
humpback whales to honeybees.

At the colloquium, Lucy Aplin of Oxford University reported


on experiments, published in the journal Nature in 2015,
with the very birds who stole my cream. Dr. Aplin and her
colleagues studied the Great Tits in Wytham Woods near
Oxford. Biologists there fitted out hundreds of birds, 90% of
the population, with transponder tags, like bird bar codes,
that let them track the birds’ movements.

Dr. Aplin showed the birds a feeder with a door painted half
blue and half red. The birds lived in separate groups in
different parts of the wood. Two birds from one group
learned that when they pushed the blue side of the feeder
from left to right, they got a worm. Another two birds from
another group learned the opposite technique; they only got
the worm when they pushed the red side from right to left.
Then the researchers released the birds back into the wild
and scattered feeders throughout the area. The feeders
would work with either technique.

The researchers tracked which birds visited the feeders


and at what time, as well as which technique they used.
The wild birds rapidly learned by watching those trained in
the lab. The blue-group birds pushed the blue side, while
the red group pushed the red. And new birds who visited a
feeder imitated the birds at that site, even though they
could easily have learned that the other technique worked,
too.

Then the researchers used a social-network analysis to


track just which birds liked to hang out with which other
birds. Like people on Facebook, birds were much more
likely to learn from other birds in their social network than
from birds they spent less time with. Also like humans,
young birds were more likely to adopt the new techniques
than older ones.

Most remarkably, the traditions continued into the next


year. Great Tits don’t live very long, and only about 40% of
the birds survive to the next season. But though the birds
had gone, their discoveries lived on. The next generation of
the blue group continued to use the blue technique.

We often assume that only animals who are closely related


to us will share our cognitive abilities. The new research
suggests that very different species can evolve impressive
learning skills that suit their particular environmental niche.
Great Tits—like honeybees, humpbacks and humans—are
sophisticated foragers who learn to adapt to new
environments. The young American graduate student and
the young Great Tit at her door both learned to become
masters of the British bottle.

WHEN AWE-STRUCK, WE FEEL BOTH SMALLER AND LARGER

I took my grandchildren this week to see “The Nutcracker.”


At the crucial moment in the ballet, when the Christmas
tree magically expands, my 3-year-old granddaughter, her
head tilted up, eyes wide, let out an impressive,
irrepressible “Ohhhh!”

The image of that enchanted tree captures everything


marvelous about the holiday, for believers and secular
people alike. The emotion that it evokes makes braving the
city traffic and crowds worthwhile.

What the children, and their grandmother, felt was awe—


that special sense of the vastness of nature, the universe,
the cosmos, and our own insignificance in comparison.
Awe can be inspired by a magnificent tree or by Handel’s
“Hallelujah Chorus” or by Christmas Eve mass in the Notre-
Dame de Paris cathedral.

But why does this emotion mean so much to us? Dacher


Keltner, a psychologist who teaches (as I do) at the
University of California, Berkeley, has been studying awe for
15 years. He and his research colleagues think that the
emotion is as universal as happiness or anger and that it
occurs everywhere with the same astonished gasp. In one
study Prof. Keltner participated in, villagers in the
Himalayan kingdom of Bhutan who listened to a brief
recording of American voices immediately recognized the
sound of awe.

Prof. Keltner’s earlier research has also shown that awe is
good for us and for society. When people experience awe—
looking up at a majestic sequoia, for example—they
become more altruistic and cooperative. They are less
preoccupied by the trials of daily life.

Why does awe have this effect? A new study, by Prof.
Keltner, Yang Bai and their colleagues, conditionally
accepted in the Journal of Personality and Social
Psychology, shows how awe works its magic.

Awe’s most visible psychological effect is to shrink our
egos, our sense of our own importance. Ego may seem
very abstract, but in the new study the researchers found a
simple and reliable way to measure it. The team showed
their subjects seven circles of increasing size and asked
them to pick the one that corresponded to their sense of
themselves. Those who reported feeling more important or
more entitled selected a bigger circle; they had bigger egos.

The researchers asked 83 participants from the U.S. and 88
from China to keep a diary of their emotions. It turned out
that, on days when they reported feeling awe, they selected
smaller circles to describe themselves.

Then the team arranged for more than a thousand tourists
from many countries to do the circle test either at the
famously awe-inspiring Yosemite National Park or at
Fisherman’s Wharf on San Francisco’s waterfront, a popular
but hardly awesome spot. Only Yosemite made
participants from all cultures feel smaller.

Next, the researchers created awe in the lab, showing
people awe-inspiring or funny video clips. Again, only the
awe clips shrank the circles. The experimenters also asked
people to draw circles representing themselves and the
people close to them—with the distance between circles
indicating how close they felt to others. Feelings of awe
elicited more and closer circles; the awe-struck participants
felt more social connection to others.

The team also asked people to draw a ladder and represent
where they belonged on it—a reliable measure of status.
Awe had no effect on where people placed themselves on
this ladder—unlike an emotion such as shame, which takes
people down a notch in their own eyes. Awe makes us feel
less egotistical, but at the same time it expands our sense
of well-being rather than diminishing it.
The classic awe-inspiring stimuli in these studies remind
people of the vastness of nature: tall evergreens or
majestic Yosemite waterfalls. But even very small stimuli
can have the same effect. Another image of this season, a
newborn child, transcends any particular faith, or lack of
faith, and inspires awe in us all.

THE BRAIN MACHINERY BEHIND DAYDREAMING

Like most people, I sometimes have a hard time
concentrating. I open the file with my unwritten column and
my mind stubbornly refuses to stay on track. We all know
that our minds wander. But it’s actually quite peculiar. Why
can’t I get my own mind to do what I want? What
subversive force is leading it astray?

A new paper in Nature Reviews Neuroscience by Kalina
Christoff of the University of British Columbia and
colleagues (including the philosopher Zachary Irving, who
is a postdoctoral fellow in my lab) reviews 20 years of
neuroscience research. The authors try to explain how our
brains make—and profit from—our wandering thoughts.

When neuroscientists first began to use imaging
technology, they noticed something odd: A distinctive brain
network lighted up while the subjects waited for the
experiment to begin. The scientists called it “the default
network.” It activated when people were daydreaming,
woolgathering, recollecting and imagining the future. Some
studies suggest that we spend nearly 50% of our
waking lives in this kind of “task-unrelated thought”
—almost as much time as we spend on tasks.

Different parts of the brain interact to bring about various
types of mind-wandering, the paper suggests. One part of
the default network, associated with the memory areas in
the medial temporal lobes, seems to spontaneously
generate new thoughts, ideas and memories in a random
way pretty much all the time. It’s the spring at the source of
that stream of consciousness.

When you dream, other parts of the brain shut down, but
this area is particularly active. Neuroscientists have
recorded in detail rats’ pattern of brain-cell activity during
the REM (rapid eye movement) sleep that accompanies
dreaming. Rat brains, as they dream, replay and recombine
the brain activity that happened during the day. This
random remix helps them (and us) learn and think in new
ways.

Other brain areas constrain and modify mind-wandering—
such as parts of our prefrontal cortex, the control center of
the brain. In my case, this control system may try to pull my
attention back to external goals like writing my column. Or
it may shape my wandering mind toward internal goals like
planning what I’ll make for dinner.

Dr. Christoff and colleagues suggest that creative thought
involves a special interaction between these control
systems and mind-wandering. In this activity, the control
system holds a particular problem in mind but permits the
brain to wander enough to put together old ideas in new
ways and find creative solutions.

At other times, the article’s authors argue, fear can capture
and control our wandering mind. For example, subcortical
emotional parts of the brain, like the amygdala, are
designed to quickly detect threats. They alert the rest of the
brain, including the default network. Then, instead of
turning to the task at hand or roaming freely, our mind
travels only to the most terrible, frightening futures. Fear
hijacks our daydreams.

Anxiety disorders can exaggerate this process. A therapist
once pointed out to me that, although it was certainly true
that the worst might happen, my incessant worry meant
that I was already choosing to live in that terrible future in
my own head, even before it actually happened.

From an evolutionary point of view, it makes sense that
potential threats can capture our minds—we hardly ever
know in advance which fears will turn out to be justified.
But the irony of anxiety is that fear can rob us of just the
sort of imaginative freedom that could actually create a
better future.

BABIES SHOW A CLEAR BIAS--TO LEARN NEW THINGS

Why do we like people like us? We take it for granted that
grown-ups favor the “in-group” they belong to and that only
the hard work of moral education can overcome that
preference. There may well be good evolutionary reasons
for this. But is it a scientific fact that we innately favor our
own?

A study in 2007, published in the Proceedings of the
National Academy of Sciences by Katherine Kinzler and her
colleagues, suggested that even babies might prefer their
own group. The authors found that 10-month-olds
preferred to look at people who spoke the same language
they did. In more recent studies, researchers have found
that babies also preferred to imitate someone who spoke
the same language. So our preference for people in our
own group might seem to be part of human nature.

But a new study in the same journal by Katarina Begus of
Birkbeck, University of London and her colleagues
suggests a more complicated view of humanity. The
researchers started out exploring the origins of curiosity.
When grown-ups think that they are about to learn
something new, their brains exhibit a pattern of activity
called a theta wave. The researchers fitted out 45 11-
month-old babies with little caps covered with electrodes
to record brain activity. The researchers wanted to see if
the babies would also produce theta waves when they
thought that they might learn something new.

The babies saw two very similar-looking people interact
with a familiar toy like a rubber duck. One experimenter
pointed at the toy and said, “That’s a duck.” The other just
pointed at the object and instead of naming it made a
noise: She said “oooh” in an uninformative way.

Then the babies saw one of the experimenters pick up an
unfamiliar gadget. You would expect that the person who
told you the name of the duck could also tell you about this
new thing. And, sure enough, when the babies saw the
informative experimenter, their brains produced theta
waves, as if they expected to learn something. On the other
hand, you might expect that the experimenter who didn’t
tell you anything about the duck would also be unlikely to
help you learn more about the new object. Indeed, the
babies didn’t produce theta waves when they saw this
uninformative person.

This experiment suggested that the babies in the earlier
2007 study might have been motivated by curiosity rather
than by bias. Perhaps they preferred someone who spoke
their own language because they thought that person could
teach them the most.

So to test this idea, the experimenters changed things a
little. In the first study, one experimenter named the object,
and the other didn’t. In the new study, one experimenter
said “That’s a duck” in English—the babies’ native language
—while the other said, “Mira el pato,” describing the duck in
Spanish—an unfamiliar language. Sure enough, their brains
produced theta waves only when they saw the English
speaker pick up the new object. The babies responded as if
the person who spoke the same language would also tell
them more about the new thing.

So 11-month-olds already are surprisingly sensitive to new
information. Babies leap at the chance to learn something
new—and can figure out who is likely to teach them. The
babies did prefer the person in their own group, but that
may have reflected curiosity, not bias. They thought that
someone who spoke the same language could tell them
the most about the world around them.

There is no guarantee that our biological reflexes will
coincide with the demands of morality. We may indeed
have to use reason and knowledge to overcome inborn
favoritism toward our own group. But the encouraging
message of the new study is that the desire to know—that
keystone of human civilization—may form a deeper part of
our nature than mistrust and discrimination.

OUR NEED TO MAKE AND ENFORCE RULES STARTS VERY YOUNG

Hundreds of social conventions govern our lives: Forks go
on the left, red means stop and don’t, for heaven’s sake,
blow bubbles in your milk. Such rules may sometimes
seem trivial or arbitrary. But our ability to construct them
and our expectation that everyone should follow them are
core mechanisms of human culture, law and morality.
Rules help turn a gang of individuals into a functioning
community.

When do children understand and appreciate rules?
Philosophers like Thomas Hobbes and Jean-Jacques
Rousseau, as well as modern psychologists, have assumed
that learning to follow social rules is really hard. For a
pessimist like the 17th-century Hobbes, rules and
conventions are all that keeps us from barbarous anarchy,
and we learn them only through rigorous reward and
punishment. For a romantic like the 18th-century Rousseau,
children are naturally innocent and moral, and their
development can only be distorted by the pressures of
social convention.

For parents facing the milk-bubble issue, it can seem
impossibly difficult to get your children to obey the rules
(though you may wonder why you are making such a fuss
about bubbles). But you also may notice that even a 3-year-
old can get quite indignant about the faux pas of other
3-year-olds.

In a clever 2008 study, the psychologists Hannes Rakoczy,
Felix Warneken and Michael Tomasello showed
systematically how sensitive very young children are to
rules. The experimenter told 3-year-olds, “This is my game,
I’m going to dax” and then showed them an arbitrary action
performed on an object, like pushing a block across a
board with a stick until it fell into a gutter. Next, the
experimenter “accidentally” performed another arbitrary
action, like lifting the board until the block fell in the gutter,
and said, “Oh no!”

Then Max the puppet appeared and said, “I’m going to ‘dax.’
” He either “daxed” the “right” way with the stick or did it the
“wrong” way by lifting the board, tipping it and dropping the
block in the gutter directly. When the puppet did the wrong
thing, violating the rules of the game, the children reacted
by indignantly protesting and saying things like, “No, not
like that!” or “Use the stick!”

Dr. Tomasello, working with Michael Schmidt and other
colleagues, has taken these studies a step farther. The new
research, which appeared this year in the journal
Psychological Science, showed that young children are apt
to see social rules and conventions even when they aren’t
really there.

The children interpreted a single arbitrary action as if it
were a binding social convention. This time the
experimenter did not act as if the action were part of a
game or say that he was “daxing.” He just did something
arbitrary—for example, carefully using the flat piece at the
end of a long stick to push a toy forward. Then the puppet
either did the same thing or put the whole stick on top of
the toy, pushing it forward. Even though the puppet reached
the same goal using the second method, children protested
and tried to get him to do things the experimenter’s way.

In a control condition, the experimenter performed the
same action but gave the appearance of doing it by
accident, fiddling with the stick without looking at the toy
and saying “oops” when the stick hit the toy and pushed it
forward. Now the children no longer protested when the
puppet acted differently. They did not deduce a rule from
the experimenter’s apparently accidental action, and they
did not think that the puppet should do the same thing.

The Schmidt group argues that children are “promiscuously
normative.” They interpret actions as conventions and rules
even when that isn’t necessarily the case. Rules dominate
our lives, from birth to death. Hobbes and Rousseau both
got it wrong. For human beings, culture and nature are not
opposing forces—culture is our nature.

SHOULD WE LET TODDLERS PLAY WITH SAWS AND KNIVES?

Last week, I stumbled on a beautiful and moving picture of
young children learning. It’s a fragment of a silent 1928 film
from the Harold E. Jones Child Study Center in Berkeley,
Calif., founded by a pioneer in early childhood education.
The children would be in their 90s now. But in that long-
distant idyll, in their flapper bobs and old-fashioned
smocks, they play (cautiously) with a duck and a rabbit,
splash through a paddling pool, dig in a sandbox, sing and
squabble.

Suddenly, I had a shock. A teacher sawed a board in half,
and a boy, surely no older than 5, imitated him with his own
saw, while a small girl hammered in nails. What were the
teachers thinking? Why didn’t somebody stop them?

My 21st-century reaction reflects a very recent change in
the way that we think about children, risk and learning. In a
recent paper titled “Playing with Knives” in the journal Child
Development, the anthropologist David Lancy analyzed how
young children learn across different cultures. He compiled
a database of anthropologists’ observations of parents and
children, covering over 100 preindustrial societies, from the
Dusun in Borneo to the Pirahã in the Amazon and the Aka
in Africa. Then Dr. Lancy looked for commonalities in what
children and adults did and said.

In recent years, the psychologist Joseph Henrich and
colleagues have used the acronym WEIRD—that is,
Western, educated, industrialized, rich and democratic—to
describe the strange subset of humans who have been the
subject of almost all psychological studies. Dr. Lancy’s
paper makes the WEIRDness of our modern attitudes
toward children, for good or ill, especially vivid.

He found some striking similarities in the preindustrial
societies that he analyzed. Adults take it for granted that
young children are independently motivated to learn and
that they do so by observing adults and playing with the
tools that adults use—like knives and saws. There is very
little explicit teaching.
And children do, in fact, become competent surprisingly
early. Among the Maniq hunter-gatherers in Thailand,
4-year-olds skin and gut small animals without mishap. In
other cultures, 3- to 5-year-olds successfully use a hoe,
fishing gear, blowpipe, bow and arrow, digging stick and
mortar and pestle.

The anthropologists were startled to see parents allow and
even encourage their children to use sharp tools. When a
Pirahã toddler played with a sharp 9-inch knife and dropped
it on the ground, his mother, without interrupting her
conversation, reached over and gave it back to him. Dr.
Lancy concludes: “Self-initiated learners can be seen as a
source for both the endurance of culture and of change in
cultural patterns and practices.”

He notes that, of course, early knife skills can come at the
cost of severed fingers. To me, like most adults in my
WEIRD culture, that is far too great a risk even to consider.

But trying to eliminate all such risks from children’s lives
also might be dangerous. There may be a psychological
analog to the “hygiene hypothesis” proposed to explain the
dramatic recent increase in allergies. Thanks to hygiene,
antibiotics and too little outdoor play, children don’t get
exposed to microbes as they once did. This may lead them
to develop immune systems that overreact to substances
that aren’t actually threatening—causing allergies.

In the same way, by shielding children from every possible
risk, we may lead them to react with exaggerated fear to
situations that aren’t risky at all and isolate them from the
adult skills that they will one day have to master. We don’t
have the data to draw firm causal conclusions. But at least
anecdotally, many young adults now seem to feel
surprisingly and irrationally fragile, fearful and vulnerable: I
once heard a high schooler refuse to take a city bus
“because of liability issues.”

Drawing the line between allowing foolhardiness and
inculcating courage isn’t easy. But we might have
something to learn from the teachers and toddlers of 1928.

WANT BABIES TO LEARN FROM VIDEO? TRY INTERACTIVE

Last week I was in Australia more than 7,000 miles away
from my grandchildren, and I missed them badly. So, like
thousands of other grandmothers around the world, I
checked into FaceTime. My 11-month-old grandson looked
back with his bright eyes and charming smile. Were we
really communicating or was that just my grandmotherly
imagination?

After all, there is a lot of research on the “video deficit”:
Babies and young children are much less likely to learn
from a video than from a live person. Judy DeLoache of the
University of Virginia and colleagues, for example, looked
at “baby media” in a study in the journal Psychological
Science in 2011. Babies watched a popular DVD meant to
teach them new words. Though they saw the DVD
repeatedly, they were no more likely to learn the words than
babies in a control group. But in live conversation with their
parents, babies did learn the words.

Is the problem video screens in general? Or is it because
little video-watchers aren’t communicating in real time with
another person—as in a video chat?

We now have an answer that should bring joy to the hearts
of distant grandparents everywhere. In a study published
last month in the journal Developmental Science, Lauren
Myers and colleagues at Lafayette College gave tablet
computers to two groups of 1-year-olds and their families.
In one group, the babies video-chatted on the tablet with an
experimenter they had never seen before. She talked to
them, read them a baby book with the chorus “peekaboo
baby!” and introduced some new words. The other group
watched a prerecorded video in which the experimenter
talked and read a book in a similar way. For each group, the
experiment took place six times over a week.

Babies in both groups watched the tablet attentively. But in
the live group, they also coordinated their own words and
actions with the actions of the experimenter, picking just
the right moment to shout “peekaboo!” They didn’t do that
with the video.

At the end of the week, the experimenter appeared in
person with another young woman. Would the babies
recognize their video partner in real life? And would they
learn from her?

The babies who had seen the experimenter on prerecorded
video didn’t treat her differently from the other woman and
didn’t learn the new words. They did no better than chance
on a comprehension test. But the babies who had
interacted with the experimenter preferred to play with her
over the new person. The older babies in the interactive
group had also learned many of the new words. And the
babies in the FaceTime group were significantly more likely
to learn the “peekaboo” routine than the video-group
babies.

These results fit with those of other studies. Babies seem
to use “contingency”—the pattern of call and response
between speaker and listener—to identify other people. In
one classic experiment, Susan Johnson at Ohio State
University and colleagues showed 1-year-olds a stylized
robot (really just two fuzzy blobs on top of one another).
Sometimes the robot beeped and lit up when the baby
made a noise—it responded to what the baby did.
Sometimes the robot beeped and lit up randomly, and then
the top blob turned right or left. The babies turned their
heads and followed the gaze of the robot when it was
responsive, treating it like a person, but they ignored it
when its movements had nothing to do with how the
babies behaved.

Real life has much to be said for it, and many studies have
shown that touch is very important for babies (and adults).
But it’s interesting that what counts in a relationship, for all
of us, isn’t so much how someone else looks or feels, or
whether it’s 3-D grandmom or grandmom in Australia on a
screen. What matters is how we respond to the other
person and how they respond to us.

A SMALL FIX IN MIND-SET CAN KEEP STUDENTS IN SCHOOL

Education is the engine of social mobility and equality. But
that engine has been sputtering, especially for the children
who need help the most. Minority and disadvantaged
children are especially likely to be suspended from school
and to drop out of college. Why? Is it something about the
students or something about the schools? And what can
we do about it?

Two recent studies published in the Proceedings of the
National Academy of Sciences offer some hope. Just a few
brief, inexpensive, online interventions significantly reduced
suspension and dropout rates, especially for
disadvantaged groups. That might seem surprising, but it
reflects the insights of an important new psychological
theory.

The psychologist Carol Dweck at Stanford has argued that
both teachers and students have largely unconscious
“mind-sets”—beliefs and expectations—about themselves
and others and that these can lead to a cascade of self-
fulfilling prophecies. A teacher may start out, for example,
being just a little more likely to think that an African-
American student will be a troublemaker. That makes her a
bit more punitive in disciplining that student. The student,
in turn, may start to think that he is being treated unfairly,
so he reacts to discipline with more anger, thus confirming
the teacher’s expectations. She reacts still more punitively,
and so on. Without intending to, they can both end up stuck
in a vicious cycle that greatly amplifies what were originally
small biases.

In the same way, a student who is the first in her family to
go to college may be convinced that she won’t be able to fit
in socially or academically. When she comes up against the
inevitable freshman hurdles, she interprets them as
evidence that she is doomed to fail. And she won’t ask for
help because she feels that would just make her weakness
more obvious. She too ends up stuck in a vicious cycle.

Changing mind-sets is hard—simply telling people that they
should think differently often backfires. The two new
studies used clever techniques to get them to take on
different mind-sets more indirectly. The studies are also
notable because they used the gold-standard method of
randomized, controlled trials, with over a thousand
participants total.

In the first study, by Jason Okonofua, David Paunesku and
Greg Walton at Stanford, the experimenters asked a group
of middle-school math teachers to fill out a set of online
materials at the start of school. The materials described
vivid examples of how you could discipline students in a
respectful, rather than a punitive, way.

But the most important part was a section that asked the
teachers to provide examples of how they themselves used
discipline respectfully. The researchers told the
participants that those examples could be used to train
others—treating the teachers as experts with something to
contribute. Another group of math teachers got a control
questionnaire about using technology in the classroom.

At the end of the school year, the teachers who got the first
package had only half as many suspensions as the control
group—a rate of 4.6% compared with 9.8%.

In the other study, by Dr. Dweck and her colleagues, the
experimenters gave an online package to disadvantaged
students from a charter school who were about to enter
college. One group got materials saying that all new
students had a hard time feeling that they belonged but
that those difficulties could be overcome. The package
also asked the students to write an essay describing how
those challenges could be met—an essay that could help
other students. A control group answered similar questions
about navigating buildings on the campus.

Only 32% of the control group were still enrolled in college
by the end of the year, but 45% of the students who got the
mind-set materials were enrolled.

The researchers didn’t tell people to have a better attitude.
They just encouraged students and teachers to articulate
their own best impulses. That changed mind-sets—and
changed lives.

ALIENS RATE EARTH: SKIP THE PRIMATES, COME FOR THE CROWS

What makes human beings so special? In his new book on
animal intelligence, the primatologist Frans de Waal shows
that crows and chimps have many of the abilities that we
once thought were uniquely human—like using tools and
imagining the future. So why do we seem so different from
other animals?

One theory is that when Homo sapiens first evolved, some
200,000 years ago, we weren’t that different: We were just a
little better than other primates at cultural transmission, at
handing new information on to the next generation. Our
human success is the result of small, cumulative, cultural
changes over many generations rather than of any single
great cognitive leap.

For the first 150,000 years or so of our existence, isolated
groups of humans occasionally made distinctive
innovations—for example, creating jewelry out of shells.
But those innovations didn’t stick. It was only around
50,000 years ago, barely yesterday in evolutionary time,
that a critical mass of humans initiated the unending
cascade of inventions that shapes our modern life for good
and ill.

I thought about this as I read (and reread and reread) “The
Early Cretaceous” to my dinosaur-obsessed 4-year-old
grandson, Augie. Unlike Augie, I just couldn’t concentrate
on the exact relationship between the Deinonychus and the
Carcharodontosaurus. My eyelids drooped and my mind
began to wander. What would an alien biologist make of
the history of life on Earth across time...?

Dept. of Biology, University of Proxima Centauri

Terran Excursion Field Report

[75,000 B.C.]: I have tragic news. For the last hundred million
years I’ve returned to this planet periodically to study the
most glorious and remarkable organisms in the universe—
the dinosaurs. And they are gone! A drastic climate change
has driven them to extinction.

Nothing interesting is left on this planet. A few animals have
developed the ability to use tools, think abstractly and
employ complicated vocal signals. But however clever the
birds may be, they are still puny compared with their dinosaur
ancestors.

A few species of scruffy primates also use tools and pass on
information to their young. But it’s hard to tell these species
apart. Some are preoccupied with politics [early chimp
ancestors?], while others mostly seem to care about
engaging in as much energetic and varied sex as possible
[early bonobos?]. And then there are the ones who are
preoccupied with politics, sex and trinkets like shell
necklaces [that would be us].

But there is nothing left even remotely as interesting as a
giganotosaurus.

[A.D. 2016]: Something may be happening on Earth. Judging
from radio signals, there is now some form of intelligent
civilization, a few traces of science and art—although most
of the signals are still about politics, sex and trinkets. I can’t
really imagine how any of those primitive primate species
could have changed so much, so quickly, but it might be
worth accelerating my next visit.

[A.D. 500,000]: Good news! After another dramatic climate
change and a burst of radiation, the primates are gone, an
evolutionary eye-blink compared with the 180-million-year
dinosaur dynasty.

But that extinction made room for the crows to really
flourish. They have combined superior intelligence and the
insane cool of their dinosaur ancestors. Those earlier radio
signals must have been coming from them, and the planet is
now full of the magnificent civilization of the Early
Corvidaceous. I look forward to my next visit….

As the book drops from my hands, I shake myself awake—
and hold on to Augie a little tighter.

THE PSYCHOPATH, THE ALTRUIST AND THE REST OF US

One day in 2006, Paul Wagner donated one of his kidneys
to a stranger with kidney failure. Not long before, he had
been reading the paper on his lunch break at a Philadelphia
company and saw an article about kidney donation. He
clicked on the website and almost immediately decided to
donate.

One day in 2008, Scott Johnson was sitting by a river in
Michigan, feeling aggrieved at the world. He took out a gun
and killed three teenagers who were out for a swim. He
showed no remorse or guilt—instead, he talked about how
other people were always treating him badly. In an
interview, Mr. Johnson compared his killing spree to spilling
a glass of milk.

These events were described in two separate, vivid articles
in a 2009 issue of the New Yorker. Larissa MacFarquhar,
who wrote about Mr. Wagner, went on to include him in her
wonderful recent book about extreme altruists, “Strangers
Drowning.”

For most of us, the two stories are so fascinating because
they seem almost equally alien. It’s hard to imagine how
someone could be so altruistic or so egotistic, so kind or so
cruel.

The neuroscientist Abigail Marsh at Georgetown University
started out studying psychopaths—people like Scott
Johnson. There is good scientific evidence that
psychopaths are very different from other kinds of
criminals. In fact, many psychopaths aren’t criminals at all.
They can be intelligent and successful and are often
exceptionally charming and charismatic.

Psychopaths have no trouble understanding how other
people’s minds work; in fact, they are often very good at
manipulating people. But from a very young age, they don’t
seem to respond to the fear or distress of others.

Psychopaths also show distinctive patterns of brain
activity. When most of us see another person express fear
or distress, the amygdala—a part of our brain that is
important for emotion—becomes particularly active. That
activity is connected to our immediate, intuitive impulse to
help. The brains of psychopaths don’t respond to someone
else’s fear or distress in the same way, and their amygdalae
are smaller overall.

But we know much less about extreme altruists like Paul
Wagner. So in a study with colleagues, published in 2014 in
Proceedings of the National Academy of Sciences, Dr.
Marsh looked at the brain activity of people who had
donated a kidney to a stranger. Like Mr. Wagner, most of
these people said that they had made the decision
immediately, intuitively, almost as soon as they found out
that it was possible.

The extreme altruists showed exactly the opposite pattern
from the psychopaths: The amygdalae of the altruists were
larger than normal, and they activated more in response to
a face showing fear. The altruists were also better than
typical people at detecting when another person was
afraid.

These brain studies suggest that there is a continuum in
how we react to other people, with the psychopaths on one
end of the spectrum and the saints at the other. We all see
the world from our own egotistic point of view, of course.
The poet Philip Larkin once wrote: “Yours is the harder
course, I can see. On the other hand, mine is happening to
me.”

But for most of us, that perspective is extended to include
at least some other people, though not all. We see fear or
distress on the faces of those we love, and we immediately,
intuitively, act to help. No one is surprised when a mother
donates her kidney to her child.

The psychopath can’t seem to feel anyone’s needs except
his own. The extreme altruist feels everybody’s needs. The
rest of us live, often uneasily and guiltily, somewhere in the
middle.

YOUNG MICE, LIKE CHILDREN, CAN GROW UP TOO FAST

Is it good to grow up? We often act as if children should
develop into adults as quickly as possible. More and more
we urge our children to race to the next level, leap over the
next hurdle, make it to the next grade as fast as they can.
But new brain studies suggest that it may not be good to
grow up so fast. The neuroscientist Linda Wilbrecht at my
own school, the University of California, Berkeley, and her
collaborators recently reported that early stress makes
babies, at least baby mice, grow up too soon.

In an experiment published in 2011 in the journal
Developmental Cognitive Neuroscience, Dr. Wilbrecht and a
colleague discovered that young mice learn more flexibly
than older ones. The researchers hid food at one of four
locations in a pile of shavings, with each location indicated
by a different smell. The mice quickly learned that the food
was at the spot that smelled, say, like thyme rather than
cloves, and they dug in the shavings to find their meal. The
experimenters then reversed the scents: Now the clove-
scented location was the correct one.

To solve this problem the mice had to explore a new
possibility: They had to dig at the place with the other
smell, just for the heck of it, without knowing whether they
would find anything. Young mice were good at this kind of
exploratory, flexible “reversal learning.”

But at a distinct point, just as they went from being
juveniles to adults, they got worse at solving the problem.
Instead, they just kept digging at the spot where they had
found the food before. The experiment fit with earlier
studies: Like mice, both young rats and young children
explore less as they become adults.

The change happened when the mouse was between 26
and 60 days old, and it was connected to specific brain
changes. Life is compressed for mice, so the time between
26 days and 60 days is like human adolescence.

In the new experiment, published in 2015 in the same
journal, the researchers looked at how the young mice
reacted to early stress. Some of the mice were separated
from their mothers for 60 or 180 minutes a day, although
the youngsters were kept warm and fed just like the other
mice. Mice normally get all their care from their mother, so
even this brief separation is very stressful.

The stressed mice actually developed more quickly than
the secure mice. As adolescents they looked more like
adults: They were less exploratory and flexible, and not as
good at reversal learning. It seemed that they grew up too
fast. And they were distinctive in another way. They were
more likely to drink large quantities of ethanol—thus, more
vulnerable to the mouse equivalent of alcoholism.

These results fit with an emerging evolutionary approach to
early stress. Childhood is a kind of luxury, for mice as well
as men, a protected period in which animals can learn,
experiment and explore, while caregivers look after their
immediate needs.

Early stress may act as a signal to animals that this special
period is not a luxury that they can afford—they are in a
world where they can’t rely on care. Animals may then
adopt a “live fast, die young” strategy, racing to achieve
enough adult competence to survive and reproduce, even
at the cost of less flexibility, fewer opportunities for
learning and more vulnerability to alcohol.

This may be as true for human children as it is for mouse
pups. Early life stress is associated with earlier puberty,
and a 2013 study by Nim Tottenham and colleagues in the
Proceedings of the National Academy of Sciences found
that children who spent their early years in orphanages
prematurely developed adultlike circuitry in the parts of the
brain that govern fear and anxiety.

Care gives us the gift of an unhurried childhood.

HOW BABIES KNOW THAT ALLIES CAN MEAN POWER

This year, in elections all across the country, individuals will
compete for various positions of power. The one who gets
more people to support him or her will prevail.

Democratic majority rule, the idea that the person with
more supporters should win, may be a sophisticated and
relatively recent political invention. But a new study in the
Proceedings of the National Academy of Sciences
suggests that the idea that the majority will win is much
deeper and more fundamental to our evolution.

Andrew Scott Baron and colleagues at the University of
British Columbia studied some surprisingly sophisticated
political observers and prognosticators. It turns out that
even 6-month-old babies predict that the guy with more
allies will prevail in a struggle. They are pundits in diapers.

How could we possibly know this? Babies will look longer
at something that is unexpected or surprising.
Developmental researchers have exploited this fact in very
clever ways to figure out what babies think. In the Scott
Baron study, the experimenters showed 6- to 9-month-old
babies a group of three green simplified cartoon characters
and two blue ones (the colors were different on different
trials).

Then they showed the babies a brief cartoon of one of the
green guys and one of the blue guys trying to cross a
platform that only had room for one character at a time,
like Robin Hood and Little John facing off across a single
log bridge. Which character would win and make it across
the platform?

The babies looked longer when the blue guy won. They
seemed to expect that the green guy, the guy with more
buddies, would win, and they were surprised when the guy
from the smaller group won instead.

In a 2011 study published in the journal Science, Susan
Carey at Harvard and her colleagues found that 9-month-
olds also think that might makes right: The babies
expected that a physically bigger character would win out
over a smaller one. But the new study showed that babies
also think that allies are even more important than mere
muscle. The green guy and the blue guy on the platform
were the same size. And the green guy’s allies were
actually a little smaller than the blue guy’s friends. But the
babies still thought that the character who had two friends
would win out over the character who had just one, even if
those friends were a bit undersized.
What’s more, the babies only expected the big guys to win
once they were about 9 months old. But they already
thought the guy with more friends would win when they
were just 6 months old.

This might seem almost incredible: Six-month-olds, after
all, can’t sit up yet, let alone caucus or count votes. But the
ability may make evolutionary sense. Chimpanzees, our
closest primate relatives, have sophisticated political skills.
A less powerful chimp who calls on several other chimps
for help can overthrow even the most ferociously
egocentric silverback. Our human ancestors made
alliances, too. It makes sense that even young babies are
sensitive to the size of social groups and the role they play
in power.

We often assume that politics is a kind of abstract
negotiation between autonomous individual interests
—voters choose candidates because they think those
candidates will enact the policies they want. But the new
studies of the baby pundits suggest a different picture.
Alliance and dominance may be more fundamental human
concepts than self-interest and negotiation. Even grown-up
voters may be thinking more about who belongs to what
group, or who is top dog, than who has the best health-care
plan or tax scheme.

TO CONSOLE A VOLE: A RODENT CARES FOR OTHERS

One day when I was a young professor, a journal rejected a
paper I had submitted, a student complained about a grade,
and I came home to find that I had forgotten to take
anything out for dinner. Confronted by this proof of my
complete failure as both an academic and a mother, I
collapsed on the sofa in tears. My son, who was just 2,
looked concerned and put his arms around me. Then he ran
and got a box of Band-Aids.
Even 2-year-olds will go out of their way to console
someone in distress—it’s a particularly touching
demonstration of basic human kindness. But is kindness
exclusively human? A new study in the journal Science by
J.P. Burkett at Emory University and colleagues suggests
that consolation has deep evolutionary roots and is
connected to empathy and familial love.

Chimpanzees, wolves, crows and elephants will console
another animal. When one chimp attacks another, for
instance, a third party will often try to comfort the victim.
But these are all very intelligent animals with complex
social structures. Do you need to be smart to be kind?

The new study looked at two species of voles—not terribly
smart rodents with a relatively simple social structure. One
species, the prairie vole, has strong “pair bonds.” The
mother and father voles raise babies together, and they are
“socially monogamous”—that is, they mostly mate and
spend time with each other instead of other voles. The
other species, the meadow vole, is similar to its prairie
cousin in many ways, including general intelligence, but
meadow voles don’t pair-bond, and the males don’t take
care of babies.

In the Burkett experiment, one vole behind a transparent
barrier watched another either undergoing stress or just
sitting there. Then the glass was removed.

A prairie-vole observer would provide more licking and
grooming—the vole equivalent of a warm hug—if the other
vole had been stressed. This consoled the stressed prairie
vole, which became less anxious. The meadow voles, by
contrast, didn’t console each other.

The prairie voles seemed to empathize with their stressed
companions. When a prairie vole saw another in trouble, it
got anxious itself, and its stress hormones increased. (You
might wonder if grooming others is just an automatic
reaction to stress, direct or indirect. But the voles who
experienced stress directly didn’t try to groom the
observers.)

In voles, and probably in humans too, the hormone oxytocin
plays a big role in social bonds. In many mammals,
oxytocin helps to create the bond between mothers and
babies. Oxytocin spikes when you go into labor. But in a
few species, including prairie voles and us (but not,
apparently, meadow voles), this system has been co-opted
to make adults love each other, too. When the researchers
administered a chemical that blocks oxytocin, the prairie
voles stopped consoling others.

Providing consolation didn’t seem to depend on gender,
and the prairie voles consoled familiar voles, not just mates
or relations. But, in another parallel to humans, the voles
didn’t reach out to strangers.

Of course, humans, even at 2, understand others more
deeply than voles do, and so can provide consolation in
more complicated ways. And chimpanzees and bonobos,
who do console, don’t pair-bond, though they do have other
strong social bonds. (Bonobos often console others by
having sex with them, including homosexual sex.)

But the new study does suggest that the virtue of kindness
comes more from the heart than the mind, and that it is
rooted in the love of parents and children.

SCIENCE IS STEPPING UP THE PACE OF INNOVATION

Every year on the website Edge, scientists and other
thinkers reply to one question. This year it’s “What do you
consider the most interesting recent news” in science? The
answers are fascinating. We’re used to thinking of news as
the events that happen in a city or country within a few
weeks or months. But scientists expand our thinking to the
unimaginably large and the infinitesimally small.

Despite this extraordinary range, the answers of the Edge
contributors have an underlying theme. The biggest news
of all is that a handful of large-brained primates on an
insignificant planet have created machines that let them
understand the world, at every scale, and let them change it
too, for good or ill.

Here is just a bit of the scientific news. The Large Hadron
Collider—the giant particle accelerator in Geneva—is finally
fully functional. So far the new evidence from the LHC has
mostly just confirmed the standard model of physics,
which helps explain everything from the birth of time to the
end of the world. But at the tiny scale of the basic particles
it is supposed to investigate, the Large Hadron Collider has
detected a small blip—something really new may just be
out there.

Our old familiar solar system, though, has turned out to be
full of surprises. Unmanned spacecraft have discovered
that the planets surrounding us are more puzzling, peculiar
and dynamic than we would ever have thought. Mars once
had water. Pluto, which was supposed to be an inert lump,
like the moon, turns out to be a dynamic planet full of
glaciers of nitrogen.

On our own planet, the big, disturbing news is that the
effects of carbon on climate change are ever more evident
and immediate. The ice sheets are melting, sea levels are
rising, and last year was almost certainly the warmest on
record. Our human response is achingly slow in contrast.

When it comes to all the living things that inhabit that
planet, the big news is the new Crispr gene-editing
technology. The technique means that we can begin to
rewrite the basic genetic code of all living beings—from
mosquitoes to men.

The news about our particular human bodies and their ills
is especially interesting. The idea that tiny invisible
organisms make us sick was one of the great triumphs of
the scientific expansion of scale. But new machines that
detect the genetic signature of bacteria have shown that
those invisible germs—the “microbiome”—aren’t really the
enemy. In fact, they’re essential to keeping us well, and the
great lifesaving advance of antibiotics comes with a cost.

The much more mysterious action of our immune system
is really the key to human health, and that system appears
to play a key role in everything from allergies to obesity to
cancer.

If new technology is helping us to understand and mend
the human body, it is also expanding the scope of the
human mind. We’ve seen lots of media coverage about
artificial intelligence over the past year, but the basic
algorithms are not really new. The news is the sheer
amount of data and computational power that is available.

Still, even if those advances are just about increases in
data and computing power, they could profoundly change
how we interact with the world. In my own contribution to
answering the Edge question, I talked about the fact that
toddlers are starting to interact with computers and that
the next generation will learn about computers in a
radically new way.

From the Large Hadron Collider to the Mars Rover, from
Crispr to the toddler’s iPad, the news is that technologies
let us master the universe and ourselves and reshape the
planet. What we still don’t know is whether, ultimately,
these developments are good news or bad.

GIVING THANKS FOR THE INNOVATION THAT SAVES BABIES

Every year at this time, I have a special reason to be
thankful. My bright, pretty, granddaughter Georgiana turned
2 a few days ago. She’s completely healthy—and
accomplished too: She can sing most of “Twinkle, Twinkle
Little Star” and does an impressive cow imitation.

I’m especially thankful for these ordinary joys, because
Georgie has a small genetic mutation that leads to a
condition called Congenital Melanocytic Nevus. The main
symptom is a giant mole, or nevus, that covers much of her
back. CMN also puts children at risk for other problems
including skin cancer and brain damage. As Francis Bacon
said, “He that hath wife and children hath given hostages to
fortune.” But a child with CMN, or myriad other genetic
conditions, makes you especially aware of how infinitely
vulnerable and infinitely valuable all children are.

Georgie makes me grateful for other human gifts, too. Right
now CMN can’t be cured. But the distinctively human
adventurers we call scientists are perpetually exploring the
frontiers of knowledge.

My colleague Jennifer Doudna at the University of
California, Berkeley has helped develop a technique called
CRISPR for modifying genes (the latest paper about it is in
the new issue of Science). Like many breakthroughs, the
discovery was an unexpected side effect of pure, basic
science research—that quintessential expression of our
human curiosity. The new techniques have risks: Dr.
Doudna has argued that we need to strictly regulate this
research. But they also hold the promise of treating genetic
conditions like CMN.

This Thanksgiving I’m thankful for another miracle and
another amazing, distinctively human ability that allowed it
to take place. Atticus, Georgie’s little brother, was born
three months ago. Attie’s mom had preeclampsia, a
dangerous spike in blood pressure during pregnancy. The
only treatment is delivery, so Attie was delivered a month
early and spent a couple of days in the Neonatal Intensive
Care Unit. After a few months of constant eating, he is as
plump and bright-eyed a baby as you could ask for.

The banality of the phrase “spent a couple of days in the
NICU” captures the other human gift. Attie’s treatment
seemed completely routine and unremarkable. The nurses
put him in an incubator with a breathing tube, no innovation
or genius required. But a hundred years ago the idea that a
premature baby could thrive was as cutting-edge as gene
therapy seems now. (Lady Sybil on the series “Downton
Abbey” dies of preeclampsia.)

The first early 20th-century incubators were so futuristic
that they were exhibited at Luna Park in Coney Island. In
the 1930s scientists discovered how to give babies oxygen.
But as with CRISPR now, there were perils—at first doctors
used too much oxygen, which blinded thousands of babies.
Further scientific advances mean that today a premature
baby who once had a 95% chance of dying now has a 95%
chance of living.

Blind, random biological forces created Georgie’s CMN. But
those same forces also created a new species, Homo
sapiens, who could do two things especially well. Humans
could innovate—discovering features of the world and
inventing tools to change it. And they could imitate: Each
generation could quickly and easily take on the discoveries
of the last. This combination means that innovators like Dr.
Doudna can make discoveries that become just ordinary
for the next generation. So today I am profoundly grateful
for both the rare scientific genius that gives hope for
babies like Georgie and the commonplace medical routine
that saves babies like Attie.

WHO WAS THAT GHOST? SCIENCE'S REASSURING REPLY

It’s midnight on Halloween. You walk through a deserted
graveyard as autumn leaves swirl around your feet.
Suddenly, inexplicably and yet with absolute certainty, you
feel an invisible presence by your side. Could it be a ghost?
A demon? Or is it just an asynchrony in somato-sensory
motor integration in the frontoparietal cortex?

A 2014 paper in the journal Current Biology by Olaf Blanke
at the University Hospital of Geneva and his colleagues
supports the last explanation. For millennia people have
reported vividly experiencing an invisible person nearby.
The researchers call it a “feeling of presence.” It can
happen to any of us: A Pew research poll found that 18% of
Americans say they have experienced a ghost.

But patients with particular kinds of brain damage are
especially likely to have this experience. The researchers
found that specific areas of these patients’ frontoparietal
cortex were damaged—the same brain areas that let us
sense our own bodies.

Those results suggested that the mysterious feeling of
another presence might be connected to the equally
mysterious feeling of our own presence—that absolute
certainty that there is an “I” living inside my body. The
researchers decided to try to create experimentally the
feeling of presence. Plenty of people without evident brain
damage say they have felt a ghost was present. Could the
researchers systematically make ordinary people
experience a disembodied spirit?

They tested 50 ordinary, healthy volunteers. In the
experiment, you stand between two robots and touch the
robot in front of you with a stick. That “master” robot sends
signals that control the second “slave” robot behind you.
The slave robot reproduces your movements and uses
them to control another stick that strokes your back. So
you are stroking something in front of you, but you feel
those same movements on your own back. The result is a
very strong sense that somehow you are touching your own
back, even though you know that’s physically impossible.
The researchers have manipulated your sense of where
your self begins and ends.

Then the researchers changed the set-up just slightly. Now
the slave robot touches your back half a second after you
touch the master robot, so there is a brief delay between
what you do and what you feel. Now people in the
experiment report a “feeling of presence”: They say that
somehow there is an invisible ghostly person in the room,
even though that is also physically impossible.

If we put that result together with the brain-damage
studies, it suggests an intriguing possibility. When we
experience ghosts and spirits, angels and demons, we are
really experiencing a version of ourselves. Our brains
construct a picture of the “I” peering out of our bodies, and
if something goes slightly wrong in that process—because
of brain damage, a temporary glitch in normal brain
processing or the wiles of an experimenter—we will
experience a ghostly presence instead.

So, in the great “Scooby-Doo” tradition, we’ve cleared up the
mystery, right? The ghost turned out just to be you in
disguise? Not quite. All good ghost stories have a twist,
what Henry James called “The Turn of the Screw.” The
ghost in the graveyard was just a creation of your brain. But
the “you” who met the ghost was also just the creation of
your brain. In fact, the same brain areas that made you feel
someone else was there are the ones that made you feel
that you were there too.

If you’re a good, hard-headed scientist, it’s easy to accept
that the ghost was just a Halloween illusion, fading into the
mist and leaves. But what about you, that ineffable,
invisible self who inhabits your body and peers out of your
eyes? Are you just a frontoparietal ghost too? Now that’s a
really scary thought.

NO, YOUR CHILDREN AREN'T BECOMING DIGITAL ZOMBIES

The other day, a newspaper writer joined the chorus of
angry voices about the bad effects of new technology.
“There can be no rational doubt that [it] has caused vast
injury.” It is “superficial, sudden, unsifted, too fast for the
truth.” The day was in 1858, and the quote was about the
telegraph. Similarly, the telephone, radio and television have
each in turn been seen as a source of doom.

Mobile devices like smartphones are just the latest
example. Parents fear that they will make teenagers more
socially alienated and disconnected—worries echoed and
encouraged by many journalists and writers. Since all those
earlier technologies turned out to be relatively harmless,
why are we so afraid?

One reason may be the inevitable lag between when a
technology emerges and when we can assess its
consequences. That’s especially true when we’re
concerned about a new technology’s influence on the next
generation. Meanwhile, anxiety and anecdotes proliferate.
An old saw in statistics is that the plural of anecdote is not
“data”—but it may well be “op-ed article.”

In a paper coming out in November in the journal
Perspectives on Psychological Science, Madeleine George
and Candice Odgers at Duke University review the scientific
evidence we have so far about the effects of social media
on adolescents. They found that teenagers are indeed
pervasively immersed in the digital world. In one survey,
teenagers sent an average of 60 texts a day, and 78% of
them owned an Internet-connected mobile phone.

But the researchers also found little evidence to support
parents’ fears. Their conclusion is that teenagers’
experience in the mobile world largely parallels rather than
supplants their experience in the physical world. Teenagers
mostly use mobile devices to communicate with friends
they already know offline. They can have bad online
experiences, but they are the same sort of bad experiences
they have offline.

Two large-scale surveys done in 2007 and 2013 in the
Netherlands and Bermuda, involving thousands of
adolescents, found that teenagers who engaged in more
online communication also reported more and better
friendships, and smaller studies bear this out as well. There
is no sign at all that friendships have changed or suffered
as mobile use has risen.

The George-Odgers team cataloged parents’ biggest fears
about mobile technology—that it might alienate children
from parents, for example, or make them vulnerable to
strangers—and found little evidence to support them. The
researchers did find that screens seem to have a genuinely
disruptive effect on sleep, whether caused by the effects of
LED light or by social excitement. So it's a good idea to
keep your children (and yourself) from stowing the phone
at the bedside.

The researchers also emphasize the inevitable limitations
of existing studies, since these issues have arisen so
recently. But the research we do have already is more
reassuring than alarming.

So why, despite the research and all those failed previous
prophecies of technological doom, are people so
convinced that this time it’s different? One reason may be
that the experience of technological and cultural change
differs for adults and children. Psychologists call it “the
cultural ratchet.” Each generation of children easily and
automatically takes on all the technologies of the previous
generations. But each generation also introduces
technologies, and mastering them as an adult is much
more difficult.

The latest click of the ratchet is so vivid that the
regularities of history are hard to see. Inevitably, the year
before you were born looks like Eden, and the year after
your children were born looks like “Mad Max.”

IS OUR IDENTITY IN INTELLECT, MEMORY OR MORAL CHARACTER?

This summer my 93-year-old mother-in-law died, a few
months after her 94–year-old husband. For the last five
years she had suffered from Alzheimer’s disease. By the
end she had forgotten almost everything, even her
children’s names, and had lost much of what defined her—
her lively intelligence, her passion for literature and history.

Still, what remained was her goodness, a characteristic
warmth and sweetness that seemed to shine even more
brightly as she grew older. Alzheimer’s can make you feel
that you’ve lost the person you loved, even though they’re
still alive. But for her children, that continued sweetness
meant that, even though her memory and intellect had
gone, she was still Edith.

A new paper in Psychological Science reports an
interesting collaboration between the psychologist Nina
Strohminger at Yale University and the philosopher Shaun
Nichols at the University of Arizona. Their research
suggests that Edith was an example of a more general and
rather surprising principle: Our identity comes more from
our moral character than from our memory or intellect.

Neurodegenerative diseases like Alzheimer’s make
especially vivid a profound question about human nature.
In the tangle of neural connections that make up my brain,
where am I? Where was Edith? When those connections
begin to unravel, what happens to the person?

Many philosophers have argued that our identity is rooted
in our continuous memories or in our accumulated
knowledge. Drs. Strohminger and Nichols argue instead
that we identify people by their moral characteristics, their
gentleness or kindness or courage—if those continue, so
does the person. To test this idea the researchers
compared different kinds of neurodegenerative diseases in
a 248-participant study. They compared Alzheimer’s
patients to patients who suffer from fronto-temporal
dementia, or FTD.

FTD is the second most common type of dementia after
Alzheimer’s, though it affects far fewer people and usually
targets a younger age group. Rather than attacking the
memory areas of the brain, it damages the frontal control
areas. These areas are involved in impulse control and
empathy—abilities that play a particularly important role in
our moral lives.

As a result, patients may change morally even though they
retain memory and intellect. They can become indifferent
to other people or be unable to control the impulse to be
rude. They may even begin to lie or steal.

Finally, the researchers compared both groups to patients
with amyotrophic lateral sclerosis, or ALS, who gradually
lose motor control but not other capacities. (Physicist
Stephen Hawking suffers from ALS.)

The researchers asked spouses or children caring for
people with these diseases to fill out a questionnaire about
how the patients had changed, including changes in
memory, cognition and moral behavior. They also asked
questions like, “How much do you sense that the patient is
still the same person underneath?” or, “Do you feel like you
still know who the patient is?”

The researchers found that the people who cared for the
FTD patients were much more likely to feel that they had
become different people than the caregivers of the
Alzheimer’s patients. The ALS caregivers were least likely
to feel that the patient had become a different person.
What’s more, a sophisticated statistical analysis showed
that this was the effect of changes in the patient’s moral
behavior in particular. Across all three groups, changes in
moral behavior predicted changes in perceived identity,
while changes in memory or intellect did not.

These results suggest something profound. Our moral
character, after all, is what links us to other people. It’s the
part of us that goes beyond our own tangle of neurons to
touch the brains and lives of others. Because that moral
character is central to who we are, there is a sense in which
Edith literally, and not just metaphorically, lives on in the
people who loved her.

BABIES MAKE PREDICTIONS, TOO


In “The Adventure of Silver Blaze,” about a valuable
racehorse that mysteriously disappears, Sherlock Holmes
tells the hapless Detective Gregory to note the curious
incident of the dog in the nighttime. But, says Gregory, the
dog did nothing in the nighttime. That was the curious
incident, Holmes replies—the dog didn’t bark on the night of
the crime, as you would expect. A new study suggests that
as babies start to figure out the world, they think a lot like
Sherlock.

People often say that babies are “sponges.” The metaphor
reflects a common picture of how the brain works:
Information floods into our eyes and ears and soaks into
our brains, gradually becoming more abstract and complex.
This image of the brain is vividly captured in the “abstract
thought zone” of the recent animated Pixar movie “Inside
Out”—where three-dimensional experiences are
transformed into flat cubist ideas.

But a very different picture, called “predictive coding,” has
been making a big splash in neuroscience lately. This
picture says that most of the action in the brain comes
from the top down. The brain is a prediction machine. It
maintains abstract models of the world, and those abstract
models generate predictions about what we will see and
hear. The brain keeps track of how well those predictions fit
with the actual information coming into our eyes and ears,
and it notes discrepancies.

If I see something that I didn’t predict, or if I don’t see
something that I did predict, my brain kicks into action. I
modify my abstract model of the world and start the
process over.
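
To make that loop concrete, here is a minimal sketch in Python of a
predictive-coding-style update. Everything in it (the single belief, the
learning rate of 0.3, the toy sequence of honk-and-face trials borrowed
from the infant experiment described below) is invented for illustration;
it shows the bare logic of predict, compare and update, not a model of
any real neural circuit.

def update_belief(belief, observed, learning_rate=0.3):
    """Nudge the belief toward what was observed, in proportion to the error."""
    prediction_error = observed - belief  # signed surprise
    return belief + learning_rate * prediction_error, prediction_error

belief = 0.5  # start agnostic about whether a face follows the honk
trials = [1, 1, 1, 1, 1, 0]  # face follows the honk five times, then is omitted

for t, observed in enumerate(trials, 1):
    belief, error = update_belief(belief, observed)
    print(f"trial {t}: face={bool(observed)}, error={error:+.2f}, belief={belief:.2f}")

# The final trial, the absence of the expected face, produces the largest
# error in magnitude: what fails to happen can be the most informative event.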

If “predictive coding” is right, we’re designed to perceive
what doesn’t happen as much as what does. That may
sound bizarre, but think about an Alfred Hitchcock movie.
You are riveted by a scene where absolutely nothing is
happening, because you are expecting the killer to pounce
at any second. Or think about a man who lives by the train
tracks and wakes up with a start when the train doesn’t
come by on time.

In fact, studies show that your brain responds to the things
that don’t happen, as well as those that do. If we expect to
see something and it doesn’t appear, the visual part of our
brain responds. That makes sense for adults, with all our
massive accumulated learning and experience. But how
about those baby sponges?

In a new paper in the Proceedings of the National Academy
of Sciences, Richard Aslin of the University of Rochester
and colleagues report on a new study of 6-month-old
babies’ brain activity. They used a technique called NIRS, or
near-infrared spectroscopy. It records whether the brain is
active in the occipital area, where visual pictures are
processed, or in the temporal area, where sounds go.

In a control experiment, babies just heard a honking sound
or saw a cartoon face emerge on a screen. Sure enough,
the visual area lit up when the babies saw the face but not
when they heard the sound.

Then, with another group of babies, the experimenters
repeatedly played the honk and showed the image of the
face right afterward. The babies started to predict that the
face would show up when they heard the sound.

That’s when the experimenters arranged for something
unexpected to happen: The babies heard the honking
sound, but the face did not then appear.

If the babies were just sponges, nothing special should
happen in their brains; after all, nothing had happened in
the outside world. But if they were using predictive coding,
they should respond to the unexpected event. And that’s
what happened—the visual area lit up when the babies
didn’t see the picture they had expected. In fact, it activated
just as much as when the babies actually did see the
picture.

In this way, the babies were more like scientists than like
sponges. Even 6-month-olds, who can’t crawl or babble yet,
can make predictions and register whether the predictions
come true, as the predictive coding picture would suggest.

It turns out that baby brains are always on the lookout for
the curious incident of the dog who did nothing. Each one
is a little Sherlock Holmes in the making.

AGGRESSION IN CHILDREN MAKES SENSE - SOMETIMES

Walk into any preschool classroom and you’ll see that
some 4-year-olds are always getting into fights—while
others seldom do, no matter the provocation. Even siblings
can differ dramatically—remember Cain and Abel. Is it
nature or nurture that causes these deep differences in
aggression?

The new techniques of genomics—mapping an organism’s
DNA and analyzing how it works—initially led people to
think that we might find a gene for undesirable individual
traits like aggression. But from an evolutionary point of
view, the very idea that a gene can explain traits that vary
so dramatically is paradoxical: If aggression is
advantageous, why didn’t the gene for aggression spread
more widely? If it’s harmful, why would the gene have
survived at all?

Two new studies suggest that the relationship between
genes and aggression is more complicated than a mere
question of nature vs. nurture. And those complications
may help to resolve the evolutionary paradox.

In earlier studies, researchers looked at variation in a gene
involved in making brain chemicals. Children with a version
of the gene called VAL were more likely to become
aggressive than those with a variation called MET. But this
only happened if the VAL children also experienced
stressful events like abuse, violence or illness. So it
seemed that the VAL version of the gene made the children
more vulnerable to stress, while the MET version made
them more resilient.

A study published last month in the journal Developmental
Psychology, by Beate Hygen and colleagues from the
Norwegian University of Science and Technology and Jay
Belsky of U.C. Davis, found that the story was even more
complicated. They analyzed the genes of hundreds of
Norwegian 4-year-olds. They also got teachers to rate how
aggressive the children were and parents to record whether
the children had experienced stressful life events.

As in the earlier studies, the researchers found that children
with the VAL variant were more aggressive when they were
subjected to stress. But they also found something else:
When not subjected to stress, these children were actually
less aggressive than the MET children.

Dr. Belsky has previously used the metaphor of orchids and
dandelions to describe types of children. Children with the
VAL gene seem to be more sensitive to the environment,
for good and bad, like orchids that can be magnificent in
some environments but wither in others. The MET children
are more like dandelions, coming out somewhere in the
middle no matter the conditions.

Dr. Belsky has suggested that this explanation for
individual variability can help to resolve the evolutionary
puzzle. Including both orchids and dandelions in a group of
children gives the human race a way to hedge its
evolutionary bets. A study published online in May in the
journal Developmental Science, by Dr. Belsky with Willem
Frankenhuis and Karthik Panchanathan, used
mathematical modeling to explore this idea more precisely.

If a species lives in a predictable, stable environment, then
it would be adaptive for its behavior to fit that
environment as closely as possible. But suppose you live in
an environment that changes unpredictably. In that case,
you might want to diversify your genetic portfolio. Investing
in dandelions is like putting your money in bonds: It’s safe
and reliable and will give you a constant, if small, return in
many conditions.

Investing in orchids is higher risk, but it also promises
higher returns. If conditions change, then the orchids will
be able to change with them. Being mean might sometimes
pay off, but only when times are tough. Cooperation will be
more valuable when resources are plentiful. The risk is that
the orchids may get it wrong—a few stressful early
experiences might make a child act as if the world is hard,
even when it isn’t. In fact, the model showed that when
environments change substantially over time, a mix of
orchids and dandelions is the most effective strategy.
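
For readers who want to see that logic run, here is a toy simulation in
Python. It is emphatically not the published Frankenhuis-Panchanathan
model; the payoffs and the 40% chance that the environment flips are
invented. Dandelions earn a steady return everywhere, while orchids earn
a high return when their early calibration matches the later environment
and a poor one when it does not.

import math
import random

random.seed(1)

def generation_payoff(orchid_share, env_changed):
    dandelion = 1.0                       # steady middling return, like bonds
    orchid = 0.5 if env_changed else 1.6  # miscalibrated vs. well matched
    return orchid_share * orchid + (1 - orchid_share) * dandelion

def long_run_growth(orchid_share, flip_chance=0.4, generations=100_000):
    """Geometric-mean payoff per generation in an unpredictably changing world."""
    log_total = sum(
        math.log(generation_payoff(orchid_share, random.random() < flip_chance))
        for _ in range(generations))
    return math.exp(log_total / generations)

for share in (0.0, 0.5, 1.0):
    print(f"orchid share {share:.1f}: growth rate {long_run_growth(share):.3f}")

With these made-up numbers the all-dandelion population grows at about
1.00 per generation, the all-orchid population at about 1.005, and the
half-and-half mix at about 1.04, echoing the modeling result that a
mixture does best when the world keeps changing.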

We human beings perpetually redesign our living space and
social circumstances. By its very nature, our environment is
unpredictable. That may be why every preschool class has
its mix of the sensitive and the stolid.

SMARTER EVERY YEAR? MYSTERY OF THE RISING IQS

Are you smarter than your great-grandmom? If IQ really
measures intelligence, the answer is probably a resounding
“yes.”

IQ tests are “normed”: Your score reflects how you did
compared with other people, like being graded on a curve.
But the test designers, who set the average in a given year
at 100, have to keep “renorming” the tests, adding harder
questions. That’s because absolute performance on IQ
tests—the actual number of questions people get right—
has greatly improved over the last 100 years. It’s called the
Flynn effect, after James Flynn, a social scientist at New
Zealand’s University of Otago who first noticed it in the
1980s.

How robust is the Flynn effect, and why has it happened? In
the journal Perspectives on Psychological Science, Jakob
Pietschnig and Martin Voracek at the University of Vienna
report a new “meta-analysis.” They combined data from
271 studies, done from 1909 to 2013, testing almost four
million people from 31 countries on six continents. Most of
the test questions remain the same over time, so the
scientists could look at how much people improved from
decade to decade.

They found that the Flynn effect is real—and large. The
absolute scores consistently improved for children and
adults, for developed and developing countries. People
scored about three points more every decade, so the
average score is 30 points higher than it was 100 years
ago.
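
The arithmetic of norming is easy to lose track of, so here is a toy
calculation in Python, with invented numbers, showing why the reported
average stays pinned at 100 even while raw performance climbs by roughly
three points a decade.

def normed_iq(raw_score, cohort_mean, cohort_sd=15):
    """Express a raw test score relative to the norms of a particular cohort."""
    return 100 + 15 * (raw_score - cohort_mean) / cohort_sd

raw_mean_1915 = 70.0                    # hypothetical raw average a century ago
raw_mean_2015 = raw_mean_1915 + 3 * 10  # about three points gained per decade

print(normed_iq(raw_mean_2015, cohort_mean=raw_mean_2015))  # 100.0 against today's norms
print(normed_iq(raw_mean_2015, cohort_mean=raw_mean_1915))  # 130.0 against the 1915 norms

The same absolute performance scores 100 against today's norms and 130
against the norms of a century ago, which is just the 30-point Flynn gap
restated.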

The speed of the rise in scores varied in an interesting way.
The pace jumped in the 1920s and slowed down during
World War II. The scores shot up again in the postwar
boom and then slowed down again in the ’70s. They’re still
rising, but even more slowly. Adult scores climbed more
than children’s.
Why? There are a lot of possible explanations, and Drs.
Pietschnig and Voracek analyze how well different theories
fit their data. Genes couldn’t change that swiftly, but better
nutrition and health probably played a role. Still, that can’t
explain why the change affected adults’ scores more than
children’s. Economic prosperity helped, too—IQ increases
correlate significantly with higher gross domestic product.

The fact that more people go to school for longer likely
played the most important role—more education also
correlates with IQ increases. That could explain why adults,
who have more schooling, benefited most. Still, the Flynn
effect has a strong impact on young children. Education,
just by itself, doesn’t seem to account for the whole effect.

The best explanation probably depends on some
combination of factors. Dr. Flynn himself argues for a
“social multiplier” theory. An initially small change can set
off a benign circle that leads to big effects. Slightly better
education, health, income or nutrition might make a child
do better at school and appreciate learning more. That
would motivate her to read more books and try to go to
college, which would make her even smarter and more
eager for education, and so on.

“Life history” is another promising theory. A longer period
of childhood correlates with better learning abilities across
many species. But childhood is expensive: Someone has to
take care of those helpless children. More resources,
nutrition or health allow for a longer childhood, while more
stress makes childhood shorter. Education itself really
amounts to a way of extending the childhood learning
period.

But are you really smarter than your great-grandmom? IQ
tests measure the kind of intelligence that makes you do
well at school. If we had tests for the kind of intelligence
that lets you run a farm, raise eight children and make
perfect biscuits on a smoky wood stove, your great-
grandmom might look a lot smarter than you do.

The thing that really makes humans so smart, throughout
our history, may be that we can invent new kinds of
intelligence to suit our changing environments.

BRAINS, SCHOOLS AND A VICIOUS CYCLE OF POVERTY

A fifth or more of American children grow up in poverty,
with the situation worsening since 2000, according to
census data. At the same time, as education researcher
Sean Reardon has pointed out, an “income achievement
gap” is widening: Low-income children do much worse in
school than higher-income children.

Since education plays an ever bigger role in how much we
earn, a cycle of poverty is trapping more American children.
It’s hard to think of a more important project than
understanding how this cycle works and trying to end it.

Neuroscience can contribute to this project. In a new study
in Psychological Science, John Gabrieli at the
Massachusetts Institute of Technology and his colleagues
used imaging techniques to measure the brains of 58 14-
year-old public school students. Twenty-three of the
children qualified for free or reduced-price lunch; the other
35 were middle-class.

The scientists found consistent brain differences between
the two groups. The researchers measured the thickness
of the cortex—the brain’s outer layer—in different brain
areas. The low-income children had developed thinner
cortices than the high-income children.

The low-income group had more ethnic and racial
minorities, but statistical analyses showed that ethnicity
and race were not associated with brain thickness,
although income was. Children with thinner cortices also
tended to do worse on standardized tests than those with
thicker ones. This was true for high-income as well as low-
income children.

Of course, just finding brain differences doesn’t tell us
much. By definition, something about the brains of the
children must be different, since their behavior on the tests
varies so much. But finding this particular brain difference
at least suggests some answers.

The brain is the most complex system on the planet, and
brain development involves an equally complex web of
interactions between genes and the physical, social and
intellectual environment. We still have much to learn.

But we do know that the brain is, as neuroscientists say,
plastic. The process of evolution has designed brains to be
shaped by the outside world. That’s the whole point of
having one. Two complementary processes play an
especially important role in this shaping. In one process,
what neuroscientists call “proliferation,” the brain makes
many new connections between neurons. In the other
process, “pruning,” some existing connections get stronger,
while others disappear. Experience heavily influences both
proliferation and pruning.

Early in development, proliferation prevails. Young children
make many more new connections than adults do. Later in
development, pruning grows in importance. Humans shift
from a young brain that is flexible and good at learning, to
an older brain that is more effective and efficient, but more
rigid. A change in the thickness of the cortex seems to
reflect this developmental shift. While in childhood the
cortex gradually thickens, in adolescence this process is
reversed and the cortex gets thinner, probably because of
pruning.

We don’t know whether the low-income 14-year-olds in this
study failed to grow thicker brains as children, or whether
they shifted to thinner brains more quickly in adolescence.

There are also many differences in the experiences of low-
income and high-income children, aside from income itself
—differences in nutrition, stress, learning opportunities,
family structure and many more. We don’t know which of
these differences led to the differences in cortical
thickness.

But we can find some hints from animal studies. Rats
raised in enriched environments, with lots of things to
explore and opportunities to learn, develop more neural
connections. Rats subjected to stress develop fewer
connections. Some evidence exists that stress also makes
animals grow up too quickly, even physically, with generally
bad effects. And nutrition influences brain development in
all animals.

The important point, and the good news, is that brain
plasticity never ends. Brains can be changed throughout
life, and we never entirely lose the ability to learn and
change. But, equally importantly, childhood is the time of
the greatest opportunity, and the greatest risk. We lose the
potential of millions of young American brains every day.

THE MYSTERY OF LOYALTY, IN LIFE AND ON 'THE AMERICANS'

A few weeks ago, I gave a talk at the American
Philosophical Association, in an excellent, serious
symposium on human experience and rationality. The only
problem was that my appearance meant missing the
brilliant, addictive TV series “The Americans.” Thank
heavens for on-demand cable—otherwise the temptation to
skip the conference and catch up on the latest betrayals
and deceptions might have been too much for me.

Still, one practical benefit of a philosophical education is
that it helps you to construct elaborate justifications for
your favorite vices. So I’d argue that “The Americans”
tackles the same philosophical questions I wrestled with in
my talk. Loyalty—our specific, personal commitment to a
particular child, lover or country—is one of the deepest
human emotions. But is it compatible with reason? And is it
morally profound or morally problematic?

The TV show, on FX, is about an apparently ordinary
suburban married couple in the 1980s, devoted to each
other and their children. But, unknown even to their
offspring, they are actually high-level Soviet spies, sent to
the U.S. secretly 20 years before. Between making lunches
and picking up the children from school, they put on weird
disguises, seduce strangers and commit elaborate
murders, all out of loyalty to the Soviet empire—and in
doing so, they suffer agonies of conflict.

Where does loyalty come from? Many mammals have
developed special brain systems, involving the
neurochemical oxytocin, to regulate what psychologists
call “attachment”—what the rest of us simply call love.

Attachment is a profound but strange emotion. We can be
loyal to a particular child, lover or country, regardless of
what they are actually like—just because they’re ours. Most
social relationships are reciprocal, with benefits for both
parties. But we can and should be loyal, even if we gain
nothing in return.

The attachment system originally evolved to get
mammalian mothers to protect their helpless babies.
Mother love really is as fundamental to evolution as it is to
storytelling. Still, is that profound biological impulse to
protect this one special child enough to justify deceiving
her or committing violence against others? If, as in the
recently concluded Season 3 of “The Americans,” a mother
deeply loves her daughter (who has no idea that her mom
is a Soviet spy), should the mother try to make sure her
child never finds out? Or should she teach her daughter
how she, too, might wear ridiculous wigs effectively and
assassinate enemies?

Evolution has also co-opted the attachment system to
produce other kinds of love. In a few species, like prairie
voles, the attachment system extends to fathers. The male
voles also care for their babies, and they “pair-bond” with
their mates. Systematic experiments show that oxytocin
plays an important role in this kind of love, too.

Lust is as ubiquitous in biology as it is on the FX channel.
But only a few mammals, including both prairie voles and
Philip and Elizabeth Jennings—the quintessentially human
couple in “The Americans”—combine lust with love and
loyalty.

How do you maintain a loyal pair bond, given the conflicting
demands of loyalty to a career and the possibility of other
loves? Particularly when, as in Season 3, that career
requires you to systematically seduce FBI secretaries and
the teenage daughters of State Department officials?

Mice are loyal to their pups and prairie voles to their mates.
But only human beings are also loyal to communities, to
countries, to ideologies. Humans have come to apply the
attachment system far more broadly than any other
creature; a whiff of oxytocin can even make us more likely
to trust strangers and cooperate with them. That broader
loyalty, combined with our human talent for abstraction, is
exactly what makes countries and ideologies possible.

So here’s the deepest philosophical question of “The
Americans,” one that doesn’t just arise in fiction. Can
loyalty to a country or an idea ever justify deception and
murder?

HOW 1-YEAR-OLDS FIGURE OUT THE WORLD

Watch a 1-year-old baby carefully for a while, and count
how many experiments you see. When Georgiana, my 17-
month-old granddaughter, came to visit last weekend, she
spent a good 15 minutes exploring the Easter decorations
—highly puzzling, even paradoxical, speckled Styrofoam
eggs. Are they like chocolate eggs or hard-boiled eggs? Do
they bounce? Will they roll? Can you eat them?

Some of my colleagues and I have argued for 20 years that
even the youngest children learn about the world in much
the way that scientists do. They make up theories, analyze
statistics, try to explain unexpected events and even do
experiments. When I write for scholarly journals about this
“theory theory,” I talk about it very abstractly, in terms of
ideas from philosophy, computer science and evolutionary
biology.

But the truth is that, at least for me, personally, watching
Georgie is as convincing as any experiment or argument. I
turn to her granddad and exclaim “Did you see that? It’s
amazing! She’s destined to be an engineer!” with as much
pride and astonishment as any nonscientist grandma. (And
I find myself adding, “Can you imagine how cool it would be
if your job was to figure out what was going on in that little
head?” Of course, that is supposed to be my job—but like
everyone else in the information economy, it often feels like
all I ever actually do is answer e-mail.)
Still, the plural of anecdote is not data, and fond grandma
observations aren’t science. And while guessing what
babies think is easy and fun, proving it is really hard and
takes ingenious experimental techniques.

In an amazingly clever new paper in the journal Science,
Aimee Stahl and Lisa Feigenson at Johns Hopkins
University show systematically that 11-month-old babies,
like scientists, pay special attention when their predictions
are violated, learn especially well as a result, and even do
experiments to figure out just what happened.

They took off from some classic research showing that
babies will look at something longer when it is unexpected.
The babies in the new study either saw impossible events,
like the apparent passage of a ball through a solid brick
wall, or straightforward events, like the same ball simply
moving through an empty space. Then they heard the ball
make a squeaky noise. The babies were more likely to learn
that the ball made the noise when the ball had passed
through the wall than when it had behaved predictably.

In a second experiment, some babies again saw the
mysterious dissolving ball or the straightforward solid one.
Other babies saw the ball either rolling along a ledge or
rolling off the end of the ledge and apparently remaining
suspended in thin air. Then the experimenters simply gave
the babies the balls to play with.

The babies explored objects more when they behaved
unexpectedly. They also explored them differently
depending on just how they behaved unexpectedly. If the
ball had vanished through the wall, the babies banged the
ball against a surface; if it had hovered in thin air, they
dropped it. It was as if they were testing to see if the ball
really was solid, or really did defy gravity, much like Georgie
testing the fake eggs in the Easter basket.
In fact, these experiments suggest that babies may be even
better scientists than grown-ups often are. Adults suffer
from “confirmation bias”—we pay attention to the events
that fit what we already know and ignore things that might
shake up our preconceptions. Charles Darwin famously
kept a special list of all the facts that were at odds with his
theory, because he knew he’d otherwise be tempted to
ignore or forget them.

Babies, on the other hand, seem to have a positive hunger
for the unexpected. Like the ideal scientists proposed by
the philosopher of science Karl Popper, babies are always
on the lookout for a fact that falsifies their theories. If you
want to learn the mysteries of the universe, that great,
distinctively human project, keep your eye on those weird
eggs.

HOW CHILDREN DEVELOP THE IDEA OF FREE WILL

We believe deeply in our own free will. I decide to walk
through the doorway and I do, as simple as that. But from a
scientific point of view, free will is extremely puzzling. For
science, what we do results from the causal chain of
events in our brains and minds. Where does free will fit in?

But if free will doesn’t exist, why do we believe so strongly
that it does? Where does that belief come from? In a new
study in the journal Cognition, my colleagues and I tried to
find out by looking at what children think about free will.
When and how do our ideas about freedom develop?

Philosophers point out that there are different versions of
free will. A simple version holds that we exercise our free
will when we aren’t constrained by outside forces. If the
door were locked, I couldn’t walk through it, no matter how
determined I was. But since it’s open, I can choose to go
through or not. Saying that we act freely is just saying that
we can do what we want when we aren’t controlled by
outside forces. This poses no problem for science. This
version simply says that my actions usually stem from
events in my brain—not from the world outside it.

But we also think that we have free will in a stronger sense.
We aren’t just free from outside constraints; we can even
act against our own desires. I might want the cookie,
believe that the cookie is delicious, think that the cookie is
healthy. But at the last moment, as a pure act of will, I could
simply choose not to eat the cookie.

In fact, I can even freely choose to act perversely. In
Dostoyevsky’s “Crime and Punishment,” Raskolnikov
murders an old woman—a stupid, brutal, unnecessary
crime—just to show that he truly has free will. This idea of
pure autonomy is more difficult to reconcile with the
scientific view that our actions are always caused by the
events in our minds and brains.

Where does this Raskolnikovian idea of free will come
from? Does it point to a mysterious phenomenon that
defies science? Or do we construct the idea to explain
other aspects of our experience?

Along with Tamar Kushnir and Nadia Chernyak at Cornell
University and Henry Wellman at the University of Michigan,
my lab at the University of California, Berkeley, set out to
see what children age 4 and 6 think about free will. The
children had no difficulty understanding the first sense of
free will: They said that Johnny could walk through the
doorway, or not, if the door was open, but he couldn’t go
through a closed door.

But the 4-year-olds didn’t understand the second sense of
free will. They said that you couldn’t simply decide to
override your desires. If you wanted the cookie (and Mom
said it was OK), you would have to eat it. The 6-year-olds, in
contrast, like adults, said that you could simply decide
whether to eat the cookie or not, no matter what. When we
asked the 6-year-olds why people could act against their
desires, many of them talked about a kind of absolute
autonomy: “It’s her brain, and she can do what she wants”
or “Nobody can boss her around.”

In other studies, in the journal Cognitive Science, Drs.
Kushnir and Chernyak found that 4-year-olds also think that
people couldn’t choose to act immorally. Philosophers and
theologians, and most adults, think that to be truly moral,
we have to exercise our free will. We could do the wrong
thing, but we choose to do the right one. But the 4-year-olds
thought that you literally couldn’t act in a way that would
harm another child. They didn’t develop the adult concept
until even later, around 8.

We don’t simply look into our minds and detect a
mysterious free will. This research suggests, instead, that
children develop the idea of free will to explain the
complex, unpredictable interactions among all our
conflicting desires, goals and moral obligations.

HOW WE LEARN TO BE AFRAID OF THE RIGHT THINGS

We learn to be afraid. One of the oldest discoveries in
psychology is that rats will quickly learn to avoid a sound or
a smell that has been associated with a shock in the past—
they not only fear the shock, they become scared of the
smell, too.

A paper by Nim Tottenham of the University of California,
Los Angeles in “Current Topics in Behavioral
Neurosciences” summarizes recent research on how this
learned fear system develops, in animals and in people.
Early experiences help shape the fear system. If caregivers
protect us from danger early in life, this helps us to develop
a more flexible and functional fear system later. Dr.
Tottenham argues, in particular, that caring parents keep
young animals from prematurely developing the adult
system: They let rat pups be pups and children be children.

Of course, it makes sense to quickly learn to avoid events
that have led to danger in the past. But it can also be
paralyzing. There is a basic paradox about learning fear.
Because we avoid the things we fear, we can’t learn
anything more about them. We can’t learn that the smell no
longer leads to a shock unless we take the risk of exploring
the dangerous world.

Many mental illnesses, from general anxiety to phobias to
post-traumatic stress disorder, seem to have their roots in
the way we learn to be afraid. We can learn to be afraid so
easily and so rigidly that even things that we know aren’t
dangerous—the benign spider, the car backfire that sounds
like a gunshot—can leave us terrified. Anxious people end
up avoiding all the things that just might be scary, and that
leads to an increasingly narrow and restricted life and just
makes the fear worse. The best treatment is to let people
“unlearn” their fears—gradually exposing them to the scary
cause and showing them that it doesn’t actually lead to the
dangerous effect.

Neuroscientists have explored the biological basis for this
learned fear. It involves the coordination between two brain
areas. One is the amygdala, an area buried deep in the
brain that helps produce the basic emotion of fear, the
trembling and heart-pounding. The other is the prefrontal
cortex, which is involved in learning, control and planning.

Regina Sullivan and her colleagues at New York University
have looked at how rats develop these fear systems. Young
rats don’t learn to be fearful the way that older rats do, and
their amygdala and prefrontal systems take a while to
develop and coordinate. The baby rats “unlearn” fear more
easily than the adults, and they may even approach and
explore the smell that led to the shock, rather than avoid it.

If the baby rats are periodically separated from their
mothers, however, they develop the adult mode of fear and
the brain systems that go with it more quickly. This early
maturity comes at a cost. Baby rats who are separated
from their mothers have more difficulties later on,
difficulties that parallel human mental illness.

Dr. Tottenham and her colleagues found a similar pattern in
human children. They looked at children who had grown up
in orphanages in their first few years of life but then were
adopted by caring parents. When they looked at the
children’s brains with functional magnetic resonance
imaging, they found that, like the rats, these children
seemed to develop adultlike “fear circuits” more quickly.
Their parents were also more likely to report that the
children were anxious. The longer the children had stayed
in the orphanages, the more their fear system developed
abnormally, and the more anxious they were.

The research fits with a broader evolutionary picture. Why
does childhood exist at all? Why do people, and rats, put so
much effort into protecting helpless babies? The people
who care for children give them a protected space to figure
out just how to cope with the dangerous adult world. Care
gives us courage; love lets us learn.

LEARNING FROM KING LEAR: THE SAVING GRACE OF LOW STATUS

Last week I saw the great Canadian actor Colm Feore
brilliantly play King Lear. In one of the most heart-
wrenching scenes, Lear, the arrogant, all-powerful king,
humiliated by his daughters, faces the fury of the storm on
the heath. Yet he thinks not about himself but the “poor
naked wretches” whose “houseless heads and unfed sides”
he has ignored before.

“Oh I have ta’en too little care of this!
Take physic, pomp.
Expose thyself to feel what wretches feel,
That thou mayst shake the superflux to them
And show the heavens more just.”

As usual Shakespeare’s psychology was uncannily
accurate. You might think that losing status would make us
more selfish. But, in fact, a new study in the Proceedings of
the National Academy of Sciences by Ana Guinote at
University College London and colleagues shows just the
opposite. When people feel that they are more powerful,
they are less likely to help others; when they “feel what
wretches feel,” they become more altruistic.

In the new paper, the researchers artificially manipulated
how people felt about their place in the pecking order. They
randomly told some students that their department was
one of the best in the university and told others that it was
one of the worst. At the very end of the session, allegedly
after the experiment was over, the experimenter
“accidentally” dropped a box full of pens on the floor. The
researchers recorded how many pens the students picked
up and handed back. The students whose departmental
pride had been squashed picked up more pens than the top
dogs.

In another study included in the paper, the experimenters
manipulated status temporarily in a similar way and asked
the students about their values and life goals. The low-
status students talked about altruistic goals, such as
helping others, more than the high-status students did.
In a third study, the experimenters randomly told the
students that they belonged to a group who had scored
high or low on an arbitrary test. Then the students had a
discussion with a group of other students about how to
pick an apartment and a roommate. Independent coders
analyzed their behavior (the coders didn’t know about the
status manipulation). The “low-status” students displayed
more signals of niceness and cooperation: They smiled
more and acted more warmly and empathically toward the
other person. The “high-status” students were more
focused on displaying their own competence and
knowledge.

The researchers even got similar results with children age 4
and 5. They were asked to share stickers with another
child. The more dominant the children were, the less they
shared.

Why would this happen? Dr. Guinote and her colleagues
suggest that, in a hierarchical group, low-status people
have to use other strategies to accomplish their goals.
Mutual helpfulness, cooperation and altruism are ways to
counteract the simpler and more obvious advantages of
wealth and rank.

You can even see this in chimpanzees. If several lower-
ranked chimps get together, they can overturn even the
most impressive alpha male.

What’s interesting is that nothing about the students made
them intrinsically nice or mean—the only difference was
their brief, immediate experience of high or low status. But
these results may help to explain the results of other
studies showing that people with more wealth or rank or
power are less helpful, in general. When people are
perpetually reminded of their high status, they seem to
become less concerned about others.
The PNAS study and “King Lear” together suggest a
modern way to “physic pomp.” The King, at least in
Shakespeare, has his Fool to remind him of his true status.
Perhaps we should have a “fool app” that periodically pings
to remind us that we are all just “poor, bare, forked animals”
after all.

THE SMARTEST QUESTIONS TO ASK ABOUT INTELLIGENCE

Scientists have largely given up the idea of “innate talent,”
as I said in my last column. This change might seem
implausible and startling. We all know that some people
are better than others at doing some things. And we all
know that genes play a big role in shaping our brains. So
why shouldn’t genes determine those differences?

Biologists talk about the relationship between a “genotype,”
the information in your DNA, and a “phenotype,” the
characteristics of an adult organism. These relationships
turn out to be so complicated that parceling them out into
percentages of nature and nurture is impossible. And, most
significantly, these complicated relationships can change
as environments change.

For example, Michael Meaney at McGill University has
discovered “epigenetic” effects that allow nurture to
reshape nature. Caregiving can turn genes on and off and
rewire brain areas. In a 2000 study published in Nature
Neuroscience he and colleagues found that some rats were
consistently better at solving mazes than others. Was this
because of innate maze-solving genes? These smart rats, it
turned out, also had more attentive mothers. The
researchers then “cross-fostered” the rat pups: They took
the babies of inattentive mothers, who would usually not be
so good at maze-solving, and gave them to the attentive
mothers to raise, and vice versa. If the baby rats’ talent was
innate, this should make no difference. If it wasn’t, it should
make all the difference.

In fact, the inattentive moms’ babies who were raised by
the attentive moms got smart, but the opposite pattern
didn’t hold. The attentive moms’ babies stayed relatively
smart even when they were raised by the inattentive moms.
So genetics prevailed in the poor environment, but
environment prevailed in the rich one. So was maze-solving
innate or not? It turns out that it’s not the right question.

To study human genetics, researchers can compare
identical and fraternal twins. Early twin studies found that
IQ was “heritable”—identical twins were more similar than
fraternal ones. But these studies looked at well-off children.
Eric Turkheimer at the University of Virginia looked at twins
in poor families and found that IQ was much less
“heritable.” In the poor environment, small differences in
opportunity swamped any genetic differences. When
everyone had the same opportunities, the genetic
differences had more effect. So is IQ innate or not? Again,
the wrong question.
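
For what "heritable" means in a twin study, a rough guide is Falconer's
classic back-of-the-envelope formula: heritability is estimated as twice
the gap between identical-twin and fraternal-twin similarity. The sketch
below uses that simplification with invented correlations chosen to echo
the pattern Turkheimer found; it is not his actual analysis.

def falconer_heritability(r_identical, r_fraternal):
    """Rough heritability estimate: twice the identical-vs-fraternal similarity gap."""
    return 2 * (r_identical - r_fraternal)

# Invented twin correlations for two environments:
print(round(falconer_heritability(r_identical=0.75, r_fraternal=0.45), 2))  # well-off sample: 0.6
print(round(falconer_heritability(r_identical=0.55, r_fraternal=0.50), 2))  # poor sample: 0.1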

If you only studied rats this might be just academic. After
all, rats usually are raised by their biological mothers. But
the most important innate feature of human beings is our
ability to transform our physical and social environments.
Alone among animals, we can envision an unprecedented
environment that might help us thrive, and make that
environment a reality. That means we simply don’t know
what the relationship between genes and environment will
look like in the future.

Take IQ again. James Flynn, at New Zealand’s University of
Otago, and others have shown that absolute IQ scores have
been steadily and dramatically increasing, by as much as
three points a decade. (The test designers have to keep
making the questions harder to keep the average at 100.)

The best explanation is that we have consciously
transformed our society into a world where schools are
ubiquitous. So even though genes contribute to whatever
IQ scores measure, IQ can change radically as a result of
changes in environment. Abstract thinking and a thirst for
knowledge might once have been a genetic quirk. In a
world of schools, they become the human inheritance.

Thinking in terms of “innate talent” often leads to a kind of
fatalism: Because right now fewer girls than boys do well at
math, the assumption is that this will always be the case.
But the actual science of genes and environment says just
the opposite. If we want more talented children, we can
change the world to create them.

THE DANGERS OF BELIEVING THAT TALENT IS INNATE

In 2011, women made up half the professors of molecular
biology and neuroscience in the U.S. but less than a third of
the philosophy professors. How come? Is it because men
in philosophy are biased against women or because
women choose not to go into philosophy? But why would
philosophers be more biased than molecular biologists?
And why would women avoid philosophy and embrace
neuroscience?

A new paper in the journal Science suggests an interesting
answer. Sarah-Jane Leslie, a philosopher at Princeton
University, Andrei Cimpian, a psychologist at the University
of Illinois, and colleagues studied more than 1,800
professors and students in 30 academic fields. The
researchers asked the academics how much they thought
success in their field was the result of innate, raw talent.
They also asked how hard people in each field worked, and
they recorded the GRE scores of graduate students.
Professors of philosophy, music, economics and math
thought that “innate talent” was more important than did
their peers in molecular biology, neuroscience and
psychology. And they found this relationship: The more that
people in a field believed success was due to intrinsic
ability, the fewer women and African-Americans made it in
that field.

Did the fields with more men require more intelligence
overall? No, the GRE scores weren’t different, and it seems
unlikely that philosophers are smarter than biologists or
neuroscientists. Did the fields with more men require more
work? That didn’t make a difference either.

Was it because those fields really did require some special
innate genius that just happened to be in short supply in
women and African-Americans? From a scientific
perspective, the very idea that something as complicated
as philosophical success is the result of “innate talent”
makes no sense. For that matter, it also makes no sense to
say that it’s exclusively the result of culture.

From the moment a sperm fertilizes an egg, there is a
complex cascade of interactions between genetic
information and the environment, and once a baby is born,
the interactions only become more complex. To ask how
much innate talent you need to succeed at philosophy is
like asking how much fire, earth, air and water you need to
make gold. That medieval theory of elements just doesn’t
make sense any more, and neither does the nature/nurture
distinction.

But although scientists have abandoned the idea of innate
talent, it’s still a tremendously seductive idea in everyday
life, and it influences what people do. Psychologist Carol
Dweck at Stanford has shown in many studies,
summarized in her book “Mindset,” that believing in innate
academic talent has consequences, almost all bad. Women
and minorities, who are generally less confident to begin
with, tend to doubt whether they have that mythical magic
brilliance, and that can discourage them from trying fields
like math or philosophy. But the idea is even bad for the
confident boy-genius types.

In Dr. Dweck’s studies, students who think they are innately
smart are less likely to accept and learn from mistakes and
criticism. If you think success is the result of hard work,
then failure will inspire you to do more. If failure is an
existential threat to your very identity, you will try to deny it.

But these studies say something else. Why are there so few
women in philosophy? It isn’t really because men are
determined to keep them out or because women freely
choose to go elsewhere. Instead, as science teaches us
again and again, our actions are shaped by much more
complicated and largely unconscious beliefs. I’m a woman
who moved from philosophy to psychology, though I still do
both. The new study may explain why—better than all the
ingenious reasons I’ve invented over the years.

The good news, though, is that such beliefs can change. Dr.
Dweck found that giving students a tutorial emphasizing
that our brains change with effort and experience helped to
shift their ideas. Maybe that would be a good exercise for
the philosophers, too.

WHAT A CHILD CAN TEACH A SMART COMPUTER

Every January the intellectual impresario and literary agent
John Brockman (who represents me, I should disclose)
asks a large group of thinkers a single question on his
website, edge.org. This year it is: “What do you think about
machines that think?” There are lots of interesting answers,
ranging from the skeptical to the apocalyptic.
I’m not sure that asking whether machines can think is the
right question, though. As someone once said, it’s like
asking whether submarines can swim. But we can ask
whether machines can learn, and especially, whether they
can learn as well as 3-year-olds.

Everyone knows that Alan Turing helped to invent the very
idea of computation. Almost no one remembers that he
also thought that the key to intelligence would be to design
a machine that was like a child, not an adult. He pointed
out, presciently, that the real secret to human intelligence is
our ability to learn.

The history of artificial intelligence is fascinating because it
has been so hard to predict what would be easy or hard for
a computer. At first, we thought that things like playing
chess or proving theorems—the bullfights of nerd
machismo—would be hardest. But they turn out to be much
easier than recognizing a picture of a cat or picking up a
cup. And it’s actually easier to simulate a grandmaster’s
gambit than to mimic the ordinary learning of every baby.

Recently, machine learning has helped computers to do
things that were impossible before, like labeling Internet
images accurately. Techniques like “deep learning” work by
detecting complicated and subtle statistical patterns in a
set of data.

But this success isn’t due to the fact that computers have
suddenly developed new powers. The big advance is that,
thanks to the Internet, they can apply these statistical
techniques to enormous amounts of data—data that were
predigested by human brains.

Computers can recognize Internet images only because
millions of real people have sorted out the unbelievably
complex information received by their retinas and labeled
the images they post online—like, say, Instagrams of their
cute kitty. The dystopian nightmare of “The Matrix” is now
a simple fact: We're all serving Google's computers, under
the anesthetizing illusion that we’re just having fun with
LOLcats.

The trouble with this sort of purely statistical machine
learning is that you can only generalize from it in a limited
way, whether you’re a baby or a computer or a scientist. A
more powerful way to learn is to formulate hypotheses
about what the world is like and to test them against the
data. One of the other big advances in machine learning
has been to automate this kind of hypothesis-testing.
Machines have become able to formulate hypotheses and
test them against data extremely well, with consequences
for everything from medical diagnoses to meteorology.
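
As a cartoon of what automated hypothesis-testing looks like, here is a
generic Bayesian comparison in Python, not any particular system
mentioned above. Two invented hypotheses about a coin-like process,
"fair" and "biased toward heads," start out equally plausible, and each
observation shifts the weight between them.

def posterior_fair(prior_fair, flips, p_heads_fair=0.5, p_heads_biased=0.8):
    """Return P(fair | flips) after updating on a string of 'H'/'T' observations."""
    like_fair = like_biased = 1.0
    for flip in flips:
        like_fair *= p_heads_fair if flip == "H" else 1 - p_heads_fair
        like_biased *= p_heads_biased if flip == "H" else 1 - p_heads_biased
    numerator = prior_fair * like_fair
    return numerator / (numerator + (1 - prior_fair) * like_biased)

print(posterior_fair(0.5, "HHTH"))    # a short, mixed run: still genuinely uncertain
print(posterior_fair(0.5, "H" * 12))  # a long run of heads: "fair" looks very unlikely

A short mixed run leaves real uncertainty, while a long run of heads
drives the probability of "fair" toward zero. The hard part, as the next
paragraph says, is dreaming up which hypotheses are worth weighing in the
first place.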

The really hard problem is deciding which hypotheses, out
of all the infinite possibilities, are worth testing.
Preschoolers are remarkably good at creating brand new,
out-of-the-box creative concepts and hypotheses in a way
that computers can’t even begin to match.

Preschoolers are also remarkably good at creating chaos
and mess, as all parents know, and that may actually play a
role in their creativity. Turing presciently argued that it
might be good if his child computer acted randomly, at
least some of the time. The thought processes of three-
year-olds often seem random, even crazy. But children have
an uncanny ability to zero in on the right sort of weird
hypothesis—in fact, they can be substantially better at this
than grown-ups. We have almost no idea how this sort of
constrained creativity is possible.

There are, indeed, amazing thinking machines out there,
and they will unquestionably far surpass our puny minds
and eventually take over the world. We call them our
children.

WHY DIGITAL-MOVIE EFFECTS STILL CAN'T DO A HUMAN FACE

Last month in New Zealand I visited some of the wizardly
artists and engineers working for Weta Digital, the effects
company behind series like “The Hobbit” and “Planet of the
Apes.” That set me thinking about how we decide whether
something has a mind.

You might not imagine that your local multiplex would be a
font of philosophical insight. But the combination of Alan
Turing's "imitation game" (the idea behind the new movie's
title) and the latest digital effects blockbusters speaks to
deep questions about what it means to be human.

Turing, who helped to invent the modern computer, also
proposed that the best test of whether a computer could
think was the imitation game. Someone would sit at a
keyboard interacting with either a computer or another
human at the other end. If the person at the keyboard
couldn’t tell the difference between the two, then you would
have to accept that the computer had achieved human
intelligence and had something like a human mind.

But suppose that, instead of sitting at a keyboard, you were
looking at a face on a screen? That imitation-game test, it
turns out, is much, much harder for a computer to pass.

Nowadays computers can generate convincing images of
almost everything—from water on fur to wind in the grass.
Except a human face. There they stumble on the problem
of the “uncanny valley”—the fact that there is nothing
creepier than something that is almost human. Human
beings are exquisitely attuned to the subtleties of emotion
that we see in the faces and movements of others.
As a result, when filmmakers do manage to generate digital
faces, they have to do it by using humans themselves.
Characters like the Na’vi in “Avatar” or Caesar in “Planet of
the Apes” heavily depend on “motion capture.” The
filmmakers take a real actor’s performance and record the
movements the actor makes in detail. Then they reproduce
those movements on the face of a character.

More-traditional computer animation also relies on real


people. Animators are really actors of a kind, and the great
animated films from “Snow White” to “Toy Story” have
relied on their skills. Pixar (which my husband co-founded)
hires animators based more on their understanding of
acting than on their drawing. A great actor imagines how a
character’s face and body would move, and then makes his
own face and body move the same way. Great animators
do the same thing, but they use a digital interface to
produce the movements instead of their own eyes and
arms. That’s how they can turn a teapot or a Luxo lamp into
a convincing character.

Our ability to detect emotion is as wide-ranging as it is


sensitive. We notice the slightest difference from a human
face, but we can also attribute complex emotions to
teapots and lamps if they move just the right way.

Mark Sagar, who helped to design the faces in “Avatar”


before becoming a professor at the University of Auckland,
has created one of the most convincing completely
computer-generated faces: an infant that he calls Baby X.
To do this he has to use a model that includes precise
information about the hundreds of muscles in our faces
and their relation to our facial expressions. He is trying to
integrate the latest neuroscience research about the
human brain into his model. For a computer to simulate a
human face, it has to simulate a human brain and body,
too.

It made sense to think that the ability to reason and speak


was at the heart of the human mind. Turing’s bet was that a
computer that could carry on a conversation would be
convincingly human. But the real imitation game of digital-
effects movies suggests that the ability to communicate
your emotions may be even more important. The ineffable,
subtle, unconscious movements that tell others what we
think and feel are what matter most. At least that’s what
matters most to other human beings.

2014

DNA AND THE RANDOMNESS OF GENETIC PROBLEMS

Last Thanksgiving, I wrote about my profound gratitude for


the natural, everyday, biological miracle of my newborn
granddaughter. Georgiana just celebrated her first birthday,
and this year I have new, and rather different, miracles to be
thankful for.

The miraculously intricate process that transforms a few


strands of DNA into a living creature is the product of blind
biological forces. It can go wrong.

Georgie is a bright, pretty, exceptionally sweet baby. But a


small random mutation in one of her genes means that she
has a rare condition called Congenital Melanocytic Nevus,
or CMN.

The most notable symptom of CMN is a giant mole, or


“nevus,” along with many smaller moles. Georgie’s nevus
covers much of her back. But CMN can also affect a child’s
brain. And though everything about CMN is uncertain, it
leads to something like a 2% to 5% risk for potentially fatal
skin cancers. (The thought of a baby with melanoma
should be confirmation enough that the universe is
indifferent to human concerns.)

We are lucky so far. Georgie’s brain is just fine, and the


nevus is in a relatively inconspicuous place. But somehow
it doesn’t seem right to say that we are thankful for one set
of blind chances when other babies are condemned by
another set.

But here’s a miracle I am thankful for. Those blind, arbitrary,


natural processes have created human beings who are not
blind or arbitrary. Those human beings really do have
reasons and designs, and, by their very nature, they can
defy nature itself—including a condition like CMN.

Humans created science—we have the ability to


understand the natural world and so to transform it. It is
hard to get funding for research on rare diseases like CMN.
But several scientists across the globe are still trying to
figure out what’s wrong and how to fix it.

For example, in London, Veronica Kinsler at Great Ormond


Street Hospital has taken advantage of the amazing
scientific advances in genomics to pinpoint precisely which
mutations cause CMN. Now that scientists know just
which gene changes cause CMN, they have started to
design ways to fix the damage. And those discoveries
might help us to understand and treat skin cancer, too.

Human beings can not only transform nature, we can


transform ourselves. Even a few years ago, someone with
CMN could feel isolated and stigmatized. Thanks to the
Internet, children with CMN, and the people who love them,
have been able to find each other and form support groups:
Nevus Outreach in the U.S. and Caring Matters Now in the
U.K. Together, they have been able to raise money for
research.
The support groups have done something else, too. It may
be human nature to feel disturbed, at first, at the sight of a
baby with a giant mole. But human beings can change the
way that they think and feel, and there’s been a quiet social
revolution in our feelings about disability and difference.
Rick Guidotti, a successful fashion photographer, has taken
on the mission of photographing individuals with genetic
differences at Positiveexposure.org, including lovely
photographs of people with CMN.

Of course, all grandmothers think that their grandchildren


are uniquely beautiful, no matter what. So do Georgie’s
parents and grandfathers and uncles and aunts and all the
people who care for her. What could be more precious than
Georgie’s wispy blond hair, the cleft in her chin, her
charming laugh, her thoughtful look?

This might seem like an evolutionary illusion, just a way for


our genes to get us to spread them. But I think instead that
these perceptions of love point to a more profound truth.
Ultimately, each of the seven billion of us on the planet, each
with our unique genetic makeup and irreducibly individual
history, really is just that irreplaceable and valuable—and
vulnerable, too.

This Thanksgiving I’m full of the deepest gratitude to the


scientists and the support groups—but I’m most grateful of
all to little Georgiana, who reminds me of that profound
truth every day.

HOW CHILDREN GET THE CHRISTMAS SPIRIT

As we wade through the towers of presents and the


mountains of torn wrapping paper, and watch the children’s
shining, joyful faces and occasional meltdowns, we may
find ourselves speculating—in a detached, philosophical
way—about generosity and greed. That’s how I cope,
anyway.

Are we born generous and then learn to be greedy? Or is it


the other way round? Do immediate intuitive impulses or
considered reflective thought lead to generosity? And how
could we possibly tell?

Recent psychological research has weighed in on the


intuitive-impulses side. People seem to respond quickly
and perhaps even innately to the good and bad behavior of
others. Researchers like Kiley Hamlin at the University of
British Columbia have shown that even babies prefer
helpful people to harmful ones. And psychologists like
Jonathan Haidt at New York University’s Stern School of
Business have argued that even adult moral judgments are
based on our immediate emotional reactions—reflection
just provides the after-the-fact rationalizations.

But some new studies suggest it’s more complicated.


Jason Cowell and Jean Decety at the University of Chicago
explored this question in the journal Current Biology. They
used electroencephalography, or EEG, to monitor electrical
activity in children’s brains. Their study had two parts. In
the first part, the researchers recorded the brain waves of
3-to-5-year-olds as they watched cartoons of one character
either helping or hurting another.

The children’s brains reacted differently to the good and


bad scenarios. But they did so in two different ways. One
brain response, the EPN, was quick; another, the LPP, was in
more frontal parts of the brain and was slower. In adults,
the EPN is related to automatic, instinctive reactions while
the LPP is connected to more purposeful, controlled and
reflective thought.

In the second part of the study, the experimenters gave the


children a pile of 10 stickers and told them they could keep
them all themselves or could give some of them to an
anonymous child who would visit the lab later in the day.
Some children were more generous than others. Then the
researchers checked to see which patterns of brain activity
predicted the children’s generosity.

They found that the EPN—the quick, automatic, intuitive


reaction—didn’t predict how generous the children were
later on. But the slow, thoughtful LPP brain wave did.
Children who showed more of the thoughtful brain activity
when they saw the morally relevant cartoons also were
more likely to share later on.

Of course, brain patterns are complicated and hard to


interpret. But this study at least suggests an interesting
possibility. There are indeed quick and automatic
responses to help and to harm, and those responses may
play a role in our moral emotions. But more reflective,
complex and thoughtful responses may play an even more
important role in our actions, especially actions like
deciding to share with a stranger.

Perhaps this perspective can help to resolve some of the


Christmas-time contradictions, too. We might wish that the
Christmas spirit would descend on us and our children as
simply and swiftly as the falling snow. But perhaps it’s the
very complexity of the season, that very human tangle of
wanting and giving, joy and elegy, warmth and tension, that
makes Christmas so powerful, and that leads even children
to reflection, however gently. Scrooge tells us about both
greed and generosity, Santa’s lists reflect both justice and
mercy, the Magi and the manger represent both abundance
and poverty.

And, somehow, at least in memory, Christmas generosity


always outweighs the greed, the joys outlive the
disappointments. Even an unbeliever like me who still
deeply loves Christmas can join in the spirit of Scrooge’s
nephew Fred, “Though it has never put a scrap of gold or
silver in my pocket [or, I would add, an entirely
uncomplicated intuition of happiness in my brain], I believe
that Christmas has done me good, and will do me good,
and, I say, God bless it!”

WHO WINS WHEN SMART CROWS AND KIDS MATCH WITS?

I am writing this looking out my window at the peaks and


lakes of Middle-Earth, also known as New Zealand’s South
Island. In the “Hobbit” movies, this sublime landscape
played the role of the Misty Mountains.

But, of course, there aren’t really any supernaturally


intelligent creatures around here—no noble Gwaihirs or
sinister Crebain (my fellow Tolkien geeks will understand).
Or are there?

In fact, I’m in New Zealand because of just such a creature,


the New Caledonian crow. It lives on one obscure island
1,500 miles north of here. Gavin Hunt, Russell Gray and
Alex Taylor at the University of Auckland (among others)
have been studying these truly remarkable birds since Dr.
Hunt discovered them 20 years ago.

New Caledonian crows make more sophisticated tools


than any other animal except us. In the wild, they fashion
sticks into hooked tools that they use to dig out delicious
grubs from holes in trees. They turn palm leaves into
digging tools, too. The process involves several steps, like
stripping away leaves to create barbs for hooking grubs
and nibbling the end of the stem to a sharp point.

Of course, many animals instinctively perform complex


actions. But the crows are so impressive because they can
actually learn how to create tools. In the lab, they can bend
wires to make hooks, and they can use one new tool to get
another. Like many very smart animals, including us, crow
babies are immature for an exceptionally long time and
they use their protected early life to learn their toolmaking
skills.

Are New Caledonian crows as smart as a human child? It’s


tempting to think about intelligence as an ineffable magical
substance, like lembas or mithril in Tolkien’s novels, and to
assume that we can measure just how much of it an
individual person or an individual species possesses.

But that makes no sense from an evolutionary point of


view. Every species is adapted to solve the particular
problems of its own environment. So researchers have
been trying to see how the crows’ adaptations compare to
ours.

In a paper this year in the Proceedings of the Royal Society,


I joined forces with Drs. Taylor and Gray and other
colleagues, especially Anna Waismeyer at the University of
Washington, to find out.

We gave the crows and 2-year-old children a problem to


solve. For the crows, we balanced a sort of domino with
meat on top in a hole in a machine. When they went to eat
the meat, they accidentally tipped the domino over into the
machine. Unexpectedly, that made the machine dispense
even more meat from a separate opening. (We used
marbles instead of meat for the children.)

Here was the question: Would the crows or the children be


able to use this accidental event to design a new action?
Could they imagine how to procure the treat on purpose,
even though they had never done it before? If they just saw
the block sitting on the table, would they pick it up and put
it in the machine?

Despite their tool-using brilliance, the crows never got it—


even after a hundred trials. They could learn to pick up the
block and put it into the box through conditioning—that is, if
they were rewarded for each step of the process—but not
spontaneously. In contrast, most of the 2-year-olds learned
from the accident. They could imagine how to get the
marble, and could immediately pick up the block and put it
in the box.

The crows have a deep understanding of how physical


objects work, and they are very clever at using that
knowledge to accomplish their goals. But the children
seem to be better at noticing the novel, the accidental and
the serendipitous and at using that experience to imagine
new opportunities. It may be that same ability which lets us
humans fashion ancient mountains and lakes into magical
new worlds.

HOW HUMANS LEARN TO COMMUNICATE WITH THEIR EYES

The eyes are windows to the soul. What could be more


obvious? I look through my eyes onto the world, and I look
through the eyes of others into their minds.

We immediately see the tenderness and passion in a loving


gaze, the fear and malice in a hostile glance. In a lecture
room, with hundreds of students, I can pick out exactly who
is, and isn’t, paying attention. And, of course, there is the
electricity of meeting a stranger’s glance across a crowded
room.

But wait a minute, eyes aren’t windows at all. They’re inch-


long white and black and colored balls of jelly set in holes
at the top of a skull. How could those glistening little
marbles possibly tell me about love or fear or attention?

A new study in the Proceedings of the National Academy of


Sciences by Sarah Jessen of the Max Planck Institute and
Tobias Grossmann of the University of Virginia, suggests
that our understanding of eyes runs very deep and emerges
very early.

Human eyes have much larger white areas than the eyes of
other animals and so are easier to track. When most
people, including tiny babies, look at a face, they
concentrate on the eyes. People with autism, who have
trouble understanding other minds, often don’t pay
attention to eyes in the same way, and they have trouble
meeting or following another person’s gaze. All this
suggests that we may be especially adapted to figure out
what our fellow humans see and feel from their eyes.

If that’s true, even very young babies might detect emotions


from eyes, and especially eye whites. The researchers
showed 7-month-old babies schematic pictures of eyes.
The eyes could be fearful or neutral; the clue to the emotion
was the relative position of the eye-whites. (Look in the
mirror and raise your eyelids until the white area on top of
the iris is visible—then register the look of startled fear on
your doppelgänger in the reflection.)

The fearful eyes could look directly at the baby or look off
to one side. As a comparison, the researchers also gave
the babies exactly the same images to look at but with the
colors reversed, so that the whites were black.

They showed the babies the images for only 50


milliseconds, too briefly even to see them consciously.
They used a technique called Event-Related Brain
Potentials, or ERP, to analyze the babies’ brain-waves.
The babies’ brain-waves were different when they looked at
the fearful eyes and the neutral ones, and when they saw
the eyes look right at them or off to one side. The
differences were particularly clear in the frontal parts of the
brain. Those brain areas control attention and are
connected to the brain areas that detect fear.

When the researchers showed the babies the reversed


images, their brains didn’t differentiate between them. So
they weren’t just responding to the visual complexity of the
images—they seemed to recognize that there was
something special about the eye-whites.

So perhaps the eyes are windows to the soul. After all, I


think that I just look out and directly see the table in front of
me. But, in fact, my brain is making incredibly complex
calculations that accurately reconstruct the shape of the
table from the patterns of light that enter my eyeballs. My
baby granddaughter Georgiana’s brain, nestled in the
downy head on my lap, does the same thing.

The new research suggests that my brain also makes my


eyes move in subtle ways that send out complex signals
about what I feel and see. And, as she gazes up at my face,
Georgie’s brain interprets those signals and reconstructs
the feelings that caused them. She really does see the soul
behind my eyes, as clearly as she sees the table in front of
them.

A MORE SUPPORTIVE WORLD CAN WORK WONDERS FOR THE AGED

This was a week of worry for my family. We were worrying


about my 93-year-old mother-in-law—a lovely, bright, kind
woman in the cruel grip of arthritis, Alzheimer’s and just
plain old age. Of course, this is a commonplace, even banal
story for my baby boomer generation, though no less
painful for that. And it’s got an extra edge because we
aren’t just worried about what will happen to our parents;
we’re worried about what will happen to us, not just my
husband and me, but our entire aging nation.

Getting old, with its unavoidable biological changes and its


inevitable end, might simply seem like an inescapably
tragic part of the human condition. (Whenever my
grandmother began to complain about getting old, she
would add wryly, “But consider the alternative . . .”)

But it’s hard to avoid feeling that there is something deeply


and particularly wrong about the way we treat old age in
American culture right now. Instead of seeing honor and
dignity, sagacity and wisdom in the old, we see only
pathology and pathos.

Could these cultural attitudes actually make the biological


facts of aging worse?

A striking new study in the journal Psychological Science


by Becca Levy at the Yale School of Public Health and
colleagues suggests that cultural attitudes and physical
well-being are far more closely intertwined than we might
have thought. Exposing people to positive messages about
aging, even through completely unconscious messages,
actually makes them function better physically. In fact, the
unconscious positive messages are more effective than
conscious positive thinking.

The researchers looked at 100 older people aged 60 to 99.


Every week for four weeks, they asked the participants to
detect flashing lights on a screen and to write an essay.
Words appeared on the flashing screen for a split second—
so briefly that the participants didn’t know they were there.
One group saw positive words about aging, such as “spry”
and “wise,” then wrote an essay on a topic that had nothing
to do with aging. Another group saw random letter
combinations and then wrote an essay that described a
thriving senior. A third group got both of the positive
experiences, and a fourth got both neutral experiences.

Three weeks later, the researchers measured the elderly


participants’ image of older people, in general, and their
own self-image. They also gave them a standard medical
test. It measured how well the seniors functioned
physically—things like how easy it was to get up and down
from a chair and how long it took to walk a few feet.

The people who had seen the positive messages—three


weeks earlier, for only a split second, without knowing it—
felt better about aging than those who hadn’t. Even more
surprisingly, they also did much better on the physical
tests. In fact, the subliminal messages gave them as great
an advantage as seniors in another study who exercised for
six months.

Consciously writing a positive essay didn’t have the same


effect. This may be because consciously trying to think
positively could be undermined by equally conscious
skepticism (thriving seniors—yeah, right).

So in the lab, unconscious positive messages about aging


can help. But in real life, of course, we’re surrounded by
unconscious negative messages about aging, and they’re
much more difficult to combat than explicit prejudice.

I’m personally optimistic anyway. At age 59, I’m on the tail


end of the baby boom, and I’m relying on the older
members of my generation to blaze the trail for me once
more. After all, they persuaded our parents it was OK for us
to have sex when we were 20, and then even persuaded our
children that it was OK for us to have sex when we were 50.
Even the boomers can’t stop the inevitable march to the
grave. But perhaps we can walk a little taller on the way
there.

WHAT SENDS TEENS TOWARD TRIUMPH OR TRIBULATION

Laurence Steinberg calls his authoritative new book on the


teenage mind “Age of Opportunity.” Most parents think of
adolescence, instead, as an age of crisis. In fact, the same
distinctive teenage traits can lead to either triumph or
disaster.

On the crisis side, Dr. Steinberg outlines the grim statistics.


Even though teenagers are close to the peak of strength
and health, they are more likely to die in accidents, suicides
and homicides than younger or older people. And
teenagers are dangerous to others as well. Study after
study shows that criminal and antisocial behavior rises
precipitously in adolescence and then falls again.

Why? What happens to transform a sane, sober, balanced


8-year-old into a whirlwind of destruction in just a few
years? And why do even smart, thoughtful, good children
get into trouble?

It isn’t because teenagers are dumb or ignorant. Studies


show that they understand risks and predict the future as
well as adults do. Dr. Steinberg wryly describes a public
service campaign that tried to deter unprotected sex by
explaining that children born to teenage parents are less
likely to go to college. The risk to a potential child’s
educational future is not very likely to slow down two
teenagers making out on the couch.

Nor is it just that teenagers are impulsive; the ability for


self-control steadily develops in the teen years, and
adolescents are better at self-control than younger
children. So why are they so much more likely to act
destructively?

Dr. Steinberg and other researchers suggest that the crucial


change involves sensation-seeking. Teenagers are much
more likely than either children or adults to seek out new
experiences, rewards and excitements, especially social
experiences.

Some recent studies by Kathryn Harden at the University of


Texas at Austin and her colleagues in the journal
Developmental Science support this idea. They analyzed a
very large study that asked thousands of adolescents the
same questions over the years, as they grew up. Some
questions measured impulsiveness (“I have to use a lot of
self-control to stay out of trouble”), some sensation-
seeking (“I enjoy new and exciting experiences even if they
are a little frightening or unusual . . .”) and some
delinquency (“I took something from a store without paying
for it”).

Impulsivity and sensation-seeking were not closely related


to one another. Self-control steadily increased from
childhood to adulthood, while sensation-seeking went up
sharply and then began to decline. It was the speed and
scope of the increase in sensation-seeking that predicted
whether the teenagers would break the rules later on.

But while teenage sensation-seeking can lead to trouble, it


can also lead to some of the most important advances in
human culture. Dr. Steinberg argues that adolescence is a
time when the human brain becomes especially “plastic,”
particularly good at learning, especially about the social
world. Adolescence is a crucial period for human
innovation and exploration.
Sensation-seeking helped teenagers explore and conquer
the literal jungles in our evolutionary past—and it could help
them explore and conquer the metaphorical Internet
jungles in our technological future. It can lead young
people to explore not only new hairstyles and vocabulary,
but also new kinds of politics, art, music and philosophy.

So how can worried parents ensure that their children’s


explorations come out well rather than badly? A very recent
study by Dr. Harden’s group provides a bit of solace. The
relationship between sensation-seeking and delinquency
was moderated by two other factors: the teenager’s friends
and the parents’ knowledge of the teen’s activities. When
parents kept track of where their children were and whom
they were with, sensation-seeking was much less likely to
be destructive. Asking the old question, “Do you know
where your children are?” may be the most important way
to make sure that adolescent opportunities outweigh the
crises.

CAMPFIRES HELPED INSPIRE COMMUNITY CULTURE

It’s October, the nights are growing longer and my idea of


heaven is an evening with a child or two, a stack of
storybooks, a pot of cocoa and a good, blazing fire.

But what makes a fireside so entrancing? Why do we still


long for roaring fireplaces, candlelit dinners, campfire
tales? After all, we have simple, efficient electric lights and
gas stoves. Fires are smoky, messy and unhealthy, hard to
start and just as hard to keep going. (The three things every
man is convinced he can do exceptionally well are drive a
car, make love to a woman and start a fire—each with
about the same degree of self-awareness.)

A new paper by Polly Wiessner of the University of Utah


suggests that our longing for the fireside is a deep part of
our evolutionary inheritance. Dr. Wiessner is an
anthropologist who lived with the Ju/’hoansi people in
Botswana and what is now Namibia in the 1970s, when
they still lived by hunting and gathering, much like our
ancestors. She recorded their talk.

Now, in what must be the most poetic article to ever appear


in the Proceedings of the National Academy of Sciences,
she has analyzed all the Ju/’hoansi conversations that
involved at least five people. She compared how they
talked by day to how they talked by night, over the fire.

The daytime talk was remarkably like the conversation in


any modern office. The Ju/’hoansi talked about the work
they had to do, gossiped and made rude jokes. Of the
conversations, 34% were what Dr. Wiessner scientifically
coded as CCC—criticism, complaint and conflict—the
familiar grumbling and grousing, occasionally erupting into
outright hatred, that is apparently the eternal currency of
workplace politics.

But when the sun went down and the men and the women,
the old and the young, gathered around the fire, the talk
was transformed. People told stories 81% of the time—
stories about people they knew, about past generations,
about relatives in distant villages, about goings-on in the
spirit world and even about those bizarre beings called
anthropologists.

Some old men and women, in particular, no longer so


productive by day, became master storytellers by night.
Their enthralled listeners laughed and wept, sang and
danced along, until they drifted off to sleep. Often, at
around 2 a.m., some people would wake up again, stir the
embers and talk some more.

This nighttime talk engaged some of our most distinctively


human abilities—imagination, culture, spirituality. Over the
fire, the Ju/’hoansi talked about people and places that
were far away in space and time and possibility, they
transmitted cultural wisdom and historical knowledge to
the next generation, and they explored the mysterious
psychological nuances of other minds.

The human ability to control fire had transformative


biological effects. It allowed us to extract nourishment
from the toughest roots and roasts, and to feed our hungry,
helpless children, with their big energy-guzzling brains. But
Dr. Wiessner suggests that it had equally transformative
social effects. Fire gave us the evening—too dark to work,
but bright and warm enough to talk.


Have screens destroyed the evening? The midnight memos


and after-dinner emails do seem to turn the night into just
more of the working day—a poor substitute for the fireside.
On the other hand, perhaps I channel some of the
Ju/’hoansi evening spirit when I gather with my
grandchildren in the dark, in front of the gently flickering
flat screen, and download Judy Garland and Bert Lahr
masterfully telling the story of the wizards and witches of
Oz. (Judy’s singing, in any case, however digital, is
undoubtedly a substantial modern advance over
grandmom’s.)

Still, a few candles, a flame or two, and some s’mores over


the embers might help bring human light and warmth even
to our chilling contemporary forms of darkness.

POVERTY'S VICIOUS CYCLE CAN AFFECT OUR GENES


From the inside, nothing in the world feels more powerful
than our impulse to care for helpless children. But new
research shows that caring for children may actually be
even more powerful than it feels. It may not just influence
children's lives—it may even shape their genes.

As you might expect, the genomic revolution has


completely transformed the nature/nurture debate. What
you might not expect is that it has shown that nurture is
even more important than we thought. Our experiences,
especially our early experiences, don't just interact with our
genes, they actually make our genes work differently.

This might seem like heresy. After all, one of the first things
we learn in Biology 101 is that the genes we carry are
determined the instant we are conceived. And that's true.

But genes are important because they make cells, and the
process that goes from gene to cell is remarkably complex.
The genes in a cell can be expressed differently—they can
be turned on or off, for example—and that makes the cells
behave in completely different ways. That's how the same
DNA can create neurons in your brain and bone cells in
your femur. The exciting new field of epigenetics studies
this process.

One of the most important recent discoveries in biology is


that this process of translating genes into cells can be
profoundly influenced by the environment.

In a groundbreaking 2004 Nature Neuroscience paper, Michael Meaney at


McGill University and his colleagues looked at a gene in
rats that helps regulate how an animal reacts to stress. A
gene can be "methylated" or "demethylated"—a certain
molecule does or doesn't attach to the gene. This changes
the way that the gene influences the cell.
In carefully controlled experiments Dr. Meaney discovered
that early caregiving influenced how much the stress-
regulating gene was methylated. Rats who got less
nuzzling and licking from their mothers had more
methylated genes. In turn, the rats with the methylated
gene were more likely to react badly to stress later on. And
these rats, in turn, were less likely to care for their own
young, passing on the effect to the next generation.

The scientists could carefully control every aspect of the


rats' genes and environment. But could you show the same
effect in human children, with their far more complicated
brains and lives? A new study by Seth Pollak and
colleagues at the University of Wisconsin at Madison in the
journal Child Development does just that. They looked at
adolescents from vulnerable backgrounds, and compared
the genes of children who had been abused and neglected
to those who had not.

Sure enough, they found the same pattern of methylation in


the human gene that is analogous to the rat stress-
regulating gene. Maltreated children had more methylation
than children who had been cared for. Earlier studies show
that abused and neglected children are more sensitive to
stress as adults, and so are more likely to develop
problems like anxiety and depression, but we might not
have suspected that the trouble went all the way down to
their genes.

The researchers also found a familiar relationship between


the socio-economic status of the families and the
likelihood of abuse and neglect: Poverty, stress and
isolation lead to maltreatment.

The new studies suggest a vicious multigenerational circle


that affects a horrifyingly large number of children, making
them more vulnerable to stress when they grow up and
become parents themselves.

Twenty percent of American children grow up in poverty,


and this number has been rising, not falling. Nearly a
million are maltreated. The new studies show that this
damages children, and perhaps even their children's
children, at the most fundamental biological level.

HUMANS NATURALLY FOLLOW CROWD BEHAVIOR

It happened last Sunday at football stadiums around the


country. Suddenly, 50,000 individuals became a single unit,
almost a single mind, focused intently on what was
happening on the field—that particular touchdown grab or
dive into the end zone. Somehow, virtually simultaneously,
each of those 50,000 people tuned into what the other
49,999 were looking at.

Becoming part of a crowd can be exhilarating or terrifying:


The same mechanisms that make people fans can just as
easily make them fanatics. And throughout human history
we have constructed institutions that provide that
dangerous, enthralling thrill. The Coliseum that hosts my
local Oakland Raiders is, after all, just a modern knockoff of
the massive theater that housed Roman crowds cheering
their favorite gladiators 2,000 years ago.

(For Oakland fans, like my family, it's particularly clear that


participating in the Raider Nation is responsible for much
of the games' appeal—it certainly isn't the generally
pathetic football.)

In fact, recent studies suggest that our sensitivity to


crowds is built into our perceptual system and operates in
a remarkably swift and automatic way. In a 2012 paper in
the Proceedings of the National Academy of Sciences, A.C.
Gallup, then at Princeton University, and colleagues looked
at the crowds that gather in shopping centers and train
stations.

In one study, a few ringers simply joined the crowd and


stared up at a spot in the sky for 60 seconds. Then the
researchers recorded and analyzed the movements of the
people around them. The scientists found that within
seconds hundreds of people coordinated their attention in
a highly systematic way. People consistently stopped to
look toward exactly the same spot as the ringers.

The number of ringers ranged from one to 15. People turn


out to be very sensitive to how many other people are
looking at something, as well as to where they look.
Individuals were much more likely to follow the gaze of
several people than just a few, so there was a cascade of
looking as more people joined in.

In a new study in Psychological Science, Timothy Sweeny


at the University of Denver and David Whitney at the
University of California, Berkeley, looked at the
mechanisms that let us follow a crowd in this way. They
showed people a set of four faces, each looking in a
slightly different direction. Then the researchers asked
people to indicate where the whole group was looking (the
observers had to swivel the eyes on a face on a computer
screen to match the direction of the group).

Because we combine head and eye direction in calculating


a gaze, the participants couldn't tell where each face was
looking by tracking either the eyes or the head alone; they
had to combine the two. The subjects saw the faces for
less than a quarter of a second. That's much too short a
time to look at each face individually, one by one.

It sounds impossibly hard. If you try the experiment, you


can barely be sure of what you saw at all. But in fact,
people were amazingly accurate. Somehow, in that split-
second, they put all the faces together and worked out the
average direction where the whole group was looking.
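
As a rough illustration of what "working out the average direction" involves, here is a small Python sketch. The angles, the simple head-plus-eye combination and the four made-up faces are assumptions for the example, not the study's stimuli or model:

    import math

    def gaze_direction(head_deg, eye_deg):
        # Assume, for illustration, that perceived gaze is head angle plus
        # eye deviation, in degrees (positive = to the viewer's right).
        return head_deg + eye_deg

    def circular_mean(angles_deg):
        # Average directions on a circle, so that 350 and 10 average to 0.
        x = sum(math.cos(math.radians(a)) for a in angles_deg)
        y = sum(math.sin(math.radians(a)) for a in angles_deg)
        return math.degrees(math.atan2(y, x)) % 360

    faces = [(10, 5), (0, 12), (-8, 15), (5, -2)]  # (head, eye) angles, made up
    group = circular_mean([gaze_direction(h, e) for h, e in faces])
    print(f"group gaze is about {group:.0f} degrees to the right")

On made-up numbers like these, the script settles on roughly nine degrees to the right; the surprising finding is that people reach a comparable summary in under a quarter of a second.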

In other studies, Dr. Whitney has shown that people can


swiftly calculate how happy or sad a crowd is in much the
same way.

Other social animals have dedicated brain mechanisms for


coordinating their action—that's what's behind the graceful
rhythms of a flock of birds or a school of fish. It may be
hard to think of the eccentric, gothic pirates of Oakland's
Raider Nation in the same way. A fan I know says that
going to a game is like being plunged into an unusually
friendly and cooperative postapocalyptic dystopia—a
marijuana-mellowed Mad Max.

But our brains seem built to forge a flock out of even such
unlikely materials.

EVEN CHILDREN GET MORE OUTRAGED AT 'THEM' THAN AT 'US'

From Ferguson to Gaza, this has been a summer of


outrage. But just how outraged people are often seems to
depend on which group they belong to. Polls show that
many more African-Americans think that Michael Brown's
shooting by a Ferguson police officer was unjust than
white Americans do. How indignant you are about Hamas
rockets or Israeli attacks that kill civilians often depends on
whether you identify with the Israelis or the Palestinians.
This is true even when people agree about the actual facts.

You might think that such views are a matter of history and
context, and that is surely partly true. But a new study in
the Proceedings of the National Academy of Sciences
suggests that they may reflect a deeper fact about human
nature. Even young children are more indignant about
injustice when it comes from "them" and is directed at "us."
And that is true even when "them" and "us" are defined by
nothing more than the color of your hat.

Jillian Jordan, Kathleen McAuliffe and Felix Warneken at


Harvard University looked at what economists and
evolutionary biologists dryly call "costly third-party norm-
violation punishment" and the rest of us call "righteous
outrage." We take it for granted that someone who sees
another person act unfairly will try to punish the bad guy,
even at some cost to themselves.

From a purely economic point of view, this is puzzling


—after all, the outraged person is doing fine themselves.
But enforcing fairness helps ensure social cooperation, and
we humans are the most cooperative of primates. So does
outrage develop naturally, or does it have to be taught?

The experimenters gave some 6-year-old children a pile of


Skittles candy. Then they told them that earlier on, another
pair of children had played a Skittle-sharing game. For
example, Johnny got six Skittles, and he could choose how
many to give to Henry and how many to keep. Johnny had
either divided the candies fairly or kept them all for himself.

Now the children could choose between two options. If


they pushed a lever to the green side, Johnny and Henry
would keep their Skittles, and so would the child. If they
pushed it to the red side, all six Skittles would be thrown
away, and the children would lose a Skittle themselves as
well. Johnny would be punished, but they would lose too.

When Johnny was fair, the children pushed the lever to


green. But when Johnny was selfish, the children acted as
if they were outraged. They were much more likely to push
the lever to red—even though that meant they would lose
themselves.

How would being part of a group influence these


judgments? The experimenters let the children choose a
team. The blue team wore blue hats, and the yellow team
wore yellow. They also told the children whether Johnny
and Henry each belonged to their team or the other one.

The teams were totally arbitrary: There was no poisonous


past, no history of conflict. Nevertheless, the children
proved more likely to punish Johnny's unfairness if he
came from the other team. They were also more likely to
punish him if Henry, the victim, came from their own team.

As soon as they showed that they were outraged at all, the


children were more outraged by "them" than "us." This is a
grim result, but it fits with other research. Children have
impulses toward compassion and justice—the twin pillars
of morality—much earlier than we would have thought. But
from very early on, they tend to reserve compassion and
justice for their own group.

There was a ray of hope, though. Eight-year-olds turned out


to be biased toward their own team but less biased than
the younger children. They had already seemed to widen
their circle of moral concern beyond people who wear the
same hats. We can only hope that, eventually, the grown-up
circle will expand to include us all.

IN LIFE, WHO WINS, THE FOX OR THE HEDGEHOG?

A philosopher once used an animal metaphor, the clever fox, to point out the most important feature of certain especially distinctive thinkers.

It was not, however, Isaiah Berlin. Berlin did famously divide thinkers into the two categories, hedgehogs and foxes. He based the distinction on a saying by the ancient Greek poet Archilochus: “The fox knows many things, but the hedgehog knows one big thing.” Hedgehogs have a single grand idea that they apply to everything; foxes come up with a new idea for every situation. Berlin said that Plato and Dostoevsky were hedgehogs, while Aristotle and Shakespeare were foxes.

Berlin later regretted inventing this over-simplified dichotomy. But it’s proved irresistible to writers ever after. After all, as Robert Benchley said, there are just two kinds of people in the world: those who think there are just two kinds of people in the world and those who don’t.

Philosophical and political hedgehogs got most of the glamour and attention in the twentieth century. But lately there has been a turn towards foxes. The psychologist Philip Tetlock studied expert political predictions and found that foxy, flexible, pluralistic experts were much more accurate than the experts with one big hedgehog idea. The statistics whiz Nate Silver chose a fox as his logo in tribute to this finding.

But here is a question that Berlin, that archetypal Oxford


don, never considered. What about the babies? What
makes young hedgehogs and foxes turn out the way they
do?

Biologists confirm that Archilochus got it right: foxes are far more wily and flexible learners than hedgehogs. But hedgehogs also have a much shorter childhood than foxes. Hedgehogs develop their spines, that one big thing, almost as soon as they are born, and are independent in only six weeks. Fox cubs still return to the den for six months. As a result, hedgehogs need much less parental care; hedgehog fathers disappear after they mate. Fox couples, in contrast, “pair-bond”: the fathers help bring food to the babies.
Baby foxes also play much more than hedgehogs, though
in a slightly creepy way. Fox parents start out by feeding
the babies their own regurgitated food. But then they
actually bring the babies live prey, like mice, when they are
still in the den, and the babies play at hunting them. That
play gives them a chance to practice and develop the
flexible hunting skills and wily intelligence that serve them
so well later on.

In fact, the much earlier, anonymous philosopher seems to have understood the behavioral ecology of foxes, and the link between intelligence, play and parental investment, rather better than Berlin did. The splendid song “The Fox,” beloved by every four-year-old, was first recorded on the blank flyleaf of a 15th-century copy of “Sayings of the Philosophers.” The Chaucerian philosopher not only described the clever, sociable carnivore who outwits even Homo sapiens, but he (or perhaps she?) also noted that
the fox is the kind of creature who brings back the prey to
the little ones in his cozy den. That grey goose was a
source of cognitive practice and skill formation as well as
tasty bones-o.

Berlin doesn’t have much to say about whether Plato the


hedgehog and Aristotle the fox had devoted or deadbeat
Dads, or if they had much playtime as philosopher pups.
Though, of course, the young cubs’ game of hunting down
terrified live prey while their elders look on approvingly will
seem familiar to those who have attended philosophy
graduate seminars.

DO WE KNOW WHAT WE SEE?

In a shifty world, surely the one thing we can rely on is the


evidence of our own eyes. I may doubt everything else, but I
have no doubts about what I see right now. Even if I'm
stuck in The Matrix, even if the things I see aren't real—I still
know that I see them.

Or do I?

A new paper in the journal Trends in Cognitive Sciences by


the New York University philosopher Ned Block
demonstrates just how hard it is to tell if we really know
what we see. Right now it looks to me as if I see the entire
garden in front of me, each of the potted succulents, all of
the mossy bricks, every one of the fuchsia blossoms. But I
can only pay attention to and remember a few things at a
time. If I just saw the garden for an instant, I'd only
remember the few plants I was paying attention to just
then.

How about all the things I'm not paying attention to? Do I
actually see them, too? It may just feel as if I see the whole
garden because I quickly shift my attention from the
blossoms to the bricks and back.

Every time I attend to a particular plant, I see it clearly. That


might make me think that I was seeing it clearly all along,
like somebody who thinks the refrigerator light is always
on, because it always turns on when you open the door to
look. This "refrigerator light" illusion might make me think I
see more than I actually do.

On the other hand, maybe I do see everything in the


garden—it's just that I can't remember and report everything
I see, only the things I pay attention to. But how can I tell if I
saw something if I can't remember it?

Prof. Block focuses on a classic experiment originally done


in 1960 by George Sperling, a cognitive psychologist at the
University of California, Irvine. (You can try the experiment
yourself online.) Say you see a three-by-three grid of nine
letters flash up for a split second. What letters were they?
You will only be able to report a few of them.

Now suppose the experimenter tells you that if you hear a


high-pitched noise you should focus on the first row, and if
you hear a low-pitched noise you should focus on the last
row. This time, not surprisingly, you will accurately report all
three letters in the cued row, though you can't report the
letters in the other rows.

But here's the trick. Now you only hear the noise after the
grid has disappeared. You will still be very good at
remembering the letters in the cued row. But think about
it—you didn't know beforehand which row you should focus
on. So you must have actually seen all the letters in all the
rows, even though you could only access and report a few
of them at a time. It seems as if we do see more than we
can say.
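
The logic of that comparison can be captured in a toy Python simulation. The four-item memory limit and the random letter grid are assumptions for illustration; the point is only that a cue arriving after the display is gone still lets the limited capacity be spent on the cued row:

    import random, string

    CAPACITY = 4   # assumed limit on how many letters survive into report

    grid = [random.sample(string.ascii_uppercase, 3) for _ in range(3)]

    def full_report(grid):
        # No cue: the limited capacity is spread over all nine letters.
        letters = [ch for row in grid for ch in row]
        return random.sample(letters, CAPACITY)

    def partial_report(grid, cued_row):
        # Post-display cue: capacity is devoted to the three cued letters.
        return list(grid[cued_row])

    print("grid:", grid)
    print("full report:", full_report(grid))                # about 4 of 9
    print("cued row 0:", partial_report(grid, cued_row=0))  # all 3 recovered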

Or do we? Here's another possibility. We know that people


can extract some information from images they can't
actually see—in subliminal perception, for example.
Perhaps you processed the letters unconsciously, but you
didn't actually see them until you heard the cue. Or perhaps
you just saw blurred fragments of the letters.

Prof. Block describes many complex and subtle further


experiments designed to distinguish these options, and he
concludes that we do see more than we remember.

But however the debate gets resolved, the real moral is the
same. We don't actually know what we see at all! You can
do the Sperling experiment hundreds of times and still not
be sure whether you saw the letters. Philosophers
sometimes argue that our conscious experience can't be
doubted because it feels so immediate and certain. But
scientists tell us that feeling is an illusion, too.
WHY IS IT SO HARD FOR US TO DO NOTHING?

It is summer time, and the living is easy. You can, at last,


indulge in what is surely the most enjoyable of human
activities—doing absolutely nothing. But is doing nothing
really enjoyable? A new study in the journal Science shows
that many people would rather get an electric shock than
just sit and think.

Neuroscientists have inadvertently discovered a lot about


doing nothing. In brain-imaging studies, people lie in a
confined metal tube feeling bored as they wait for the
actual experiment to start. Fortuitously, neuroscientists
discovered that this tedium was associated with a
distinctive pattern of brain activity. It turns out that when
we do nothing, many parts of the brain that underpin
complex kinds of thinking light up.

Though we take this kind of daydreaming for granted, it is


actually a particularly powerful kind of thinking. Much more
than any other animal, we humans have evolved the ability
to live in our own thoughts, detached from the demands of
our immediate actions and experiences. When people lie in
a tube with nothing else to do, they reminisce, reliving
events in the past ("Damn it, that guy was rude to me last
week"), or they plan what they will do in the future ("I'll snub
him next time"). And they fantasize: "Just imagine how
crushed he would have been if I'd made that witty riposte."

Descartes had his most important insights sitting alone in


a closet-sized stove, the only warm spot during a wintry
Dutch military campaign. When someone asked Newton
how he discovered the law of gravity, he replied, "By
thinking on it continually." Doing nothing but thinking can
be profound.

But is it fun? Psychologist Tim Wilson of the University of


Virginia and his colleagues asked college students to sit for
15 minutes in a plain room doing nothing but thinking. The
researchers also asked them to record how well they
concentrated and how much they enjoyed doing it. Most of
the students reported that they couldn't concentrate; half of
them actively disliked the experience.

Maybe that was because of what they thought about.


"Rumination"—brooding on unpleasant experiences, like the
guy who snubbed you—can lead to depression, even
clinical depression. But the researchers found no
difference based on whether people recorded positive or
negative thoughts.

Maybe it was something about the sterile lab room. But the
researchers also got students just to sit and think in their
own homes, and they disliked it even more. In fact, 32% of
the students reported that they cheated, with a sneak peek
at a cellphone or just one quick text.

But that's because they were young whippersnappers with


Twitter-rotted brains, right? Wrong. The researchers also
did the experiment with a middle-aged church group, and
the results were the same. Age, gender, personality, social-
media use—nothing made much difference.

But did people really hate thinking that much? The


researchers gave students a mild electric shock and asked
if they would pay to avoid another. The students sensibly
said that they would. The researchers then put them back
in the room with nothing to do but also gave them the
shock button.

Amazingly, many of them voluntarily shocked themselves


rather than doing nothing. Not so amazingly (at least to this
mother of boys who played hockey), there was a big sex
difference. Sixty-seven percent of the men preferred a
shock to doing nothing, but only 25% of the women did.

Newton and neuroscience suggest that just thinking can be


very valuable. Why is it so hard? It is easy to blame the
modern world, but 1,000 years ago, Buddhist monks had
the same problem. Meditation has proven benefits, but it
takes discipline, practice and effort. Our animal impulse to
be up and doing, or at least up and checking email, is hard
to resist, even in a long, hazy cricket-song dream of a
summer day.

A TODDLER'S SOUFFLES AREN'T JUST CHILD'S PLAY

Augie, my 2-year-old grandson, is working on his soufflés.


This began by accident. Grandmom was trying to
simultaneously look after a toddler and make dessert. But
his delight in soufflé-making was so palpable that it has
become a regular event.

The bar, and the soufflé, rise higher on each visit—each


time he does a bit more and I do a bit less. He graduated
from pushing the Cuisinart button and weighing the
chocolate, to actually cracking and separating the eggs.
Last week, he gravely demonstrated how you fold in egg
whites to his clueless grandfather. (There is some cultural
inspiration from Augie's favorite Pixar hero, Remy the
rodent chef in "Ratatouille," though this leads to rather
disturbing discussions about rats in the kitchen.)

It's startling to see just how enthusiastically and easily a


2-year-old can learn such a complex skill. And it's striking
how different this kind of learning is from the kind children
usually do in school.

New studies in the journal Human Development by Barbara


Rogoff at the University of California, Santa Cruz and
colleagues suggest that this kind of learning may actually
be more fundamental than academic learning, and it may
also influence how helpful children are later on.

Dr. Rogoff looked at children in indigenous Mayan


communities in Latin America. She found that even
toddlers do something she calls "learning by observing and
pitching in." Like Augie with the soufflés, these children
master useful, difficult skills, from making tortillas to using
a machete, by watching the grown-ups around them
intently and imitating the simpler parts of the process.
Grown-ups gradually encourage them to do more—the
pitching-in part. The product of this collaborative learning
is a genuine contribution to the family and community: a
delicious meal instead of a standardized test score.

This kind of learning has some long-term consequences,


Dr. Rogoff suggests. She and her colleagues also looked at
children growing up in Mexico City who either came from
an indigenous heritage, where this kind of observational
learning is ubiquitous, or a more Europeanized tradition.
When they were 8, the children from the indigenous
traditions were much more helpful than the Europeanized
children: They did more work around the house, more
spontaneously, including caring for younger siblings. And
children from an indigenous heritage had a fundamentally
different attitude toward helping. They didn't need to be
asked to help—instead they were proud of their ability to
contribute.

The Europeanized children and parents were more likely to


negotiate over helping. Parents tried all kinds of different
contracts and bargains, and different regimes of rewards
and punishments. Mostly, as readers will recognize with a
sigh, these had little effect. For these children, household
chores were something that a grown-up made you do, not
something you spontaneously contributed to the family.
Dr. Rogoff argues that there is a connection between such
early learning by pitching in and the motivation and ability
of school-age children to help. In the indigenous-tradition
families, the toddler's enthusiastic imitation eventually
morphed into real help. In the more Europeanized families,
the toddler's abilities were discounted rather than
encouraged.

The same kind of discounting happens in my middle-class


American world. After all, when I make the soufflé without
Augie's help there's a much speedier result and a lot less
chocolate fresco on the walls. And it's true enough that in
our culture, in the long run, learning to make a good soufflé
or to help around the house, or to take care of a baby, may
be less important to your success as an adult than more
academic abilities.

But by observing and pitching in, Augie may be learning


something even more fundamental than how to turn eggs
and chocolate into soufflé. He may be learning how to turn
into a responsible grown-up himself.

FOR POOR KIDS, NEW PROOF THAT EARLY HELP IS KEY

Twenty years ago, I would have said that social policies


meant to help very young children are intrinsically valuable.
If improving the lives of helpless, innocent babies isn't a
moral good all by itself, what is? But I also would have said,
as a scientist, that it would be really hard, perhaps
impossible, to demonstrate the long-term economic
benefits of those policies. Human development is a
complicated, interactive and unpredictable business.

Individual children are all different. Early childhood
experience is especially important, but it's not, by any
means, the end of the story. Positive early experiences
don't inoculate you against later deprivation; negative ones
don't doom you to inevitable ruin. And determining the long-
term effects of any social policy is notoriously difficult.
Controlled experiments are hard, different programs may
have different effects, and unintended consequences
abound.

I still think I was right on the first point: The moral case for
early childhood programs shouldn't depend on what
happens later. But I was totally, resoundingly, dramatically
wrong about whether one could demonstrate long-term
effects. In fact, over the last 20 years, an increasing
number of studies—many from hardheaded economists at
business schools—have shown that programs that make
life better for young children also have long-term economic
benefits.

The most recent such study was published in the May 30
issue of Science. Paul Gertler, of the National Bureau of
Economic Research and the Haas School of Business of
the University of California, Berkeley, and colleagues looked
at babies growing up in Jamaica. (Most earlier studies had
just looked at children in the U.S.) These children were so
poor that they were "nutritionally stunted"—that is, they had
physical symptoms of malnourishment.

Health aides visited one group of babies every week for
two years, starting at age 1. The aides themselves played
with the babies and helped encourage the parents to play
with them in stimulating ways. Another randomly
determined group just got weekly nutritional supplements,
a third received psychological and nutritional help, and a
fourth group was left alone.

Twenty years later, when the children had grown up, the
researchers returned and looked at their incomes. The
young adults who had gotten the early psychological help
had significantly higher incomes than those who hadn't. In
fact, they earned 25% more than the control group, even
including the children who had just gotten better food.

This study and others like it have had some unexpected
findings. The children who were worst off to begin with
reaped the greatest benefits. And the early interventions
didn't just influence grades. The children who had the early
help ended up spending more time in school, and doing
better there, than the children who didn't. But the research
has found equally important effects on earnings, physical
health and even crime. And interventions that focus on
improving early social interactions may be as important
and effective as interventions focused on academic skills.

The program influenced the parents too, though in subtle
ways. The researchers didn't find any obvious differences
in the ways that parents treated their children when they
were 7 and 11 years old. It might have looked as if the
effects of the intervention had faded away.

Nevertheless, the parents of children who had had the
psychological help were significantly more likely to
emigrate to another country later on. Those health visits
stopped when the children were only 4. But both the
parents and the children seemed to gain a new sense of
opportunity that could change their whole lives.

I'm really glad I was so wrong. In the U.S., 20% of children
still grow up in poverty. The self-evident moral arguments
for helping those children have fueled the movement
toward early childhood programs in red Oklahoma and
Georgia as well as blue New York and Massachusetts. But
the scientific and economic arguments have become just
as compelling.

RICE, WHEAT AND THE VALUES THEY SOW


Could what we eat shape how we think? A new paper in the
journal Science by Thomas Talhelm at the University of
Virginia and colleagues suggests that agriculture may
shape psychology. A bread culture may think differently
than a rice-bowl society.

Psychologists have long known that different cultures tend
to think differently. In China and Japan, people think more
communally, in terms of relationships. By contrast, people
are more individualistic in what psychologist Joseph
Henrich, in commenting on the new paper, calls "WEIRD
cultures."

WEIRD stands for Western, educated, industrialized, rich
and democratic. Dr. Henrich's point is that cultures like
these are actually a tiny minority of all human societies,
both geographically and historically. But almost all
psychologists study only these WEIRD folks.

The differences show up in surprisingly varied ways.
Suppose I were to ask you to draw a graph of your social
network, with you and your friends represented as circles
attached by lines. Americans make their own circle a
quarter-inch larger than their friends' circles. In Japan,
people make their own circle a bit smaller than the others.

Or you can ask people how much they would reward the
honesty of a friend or a stranger and how much they would
punish their dishonesty. Most Easterners tend to say they
would reward a friend more than a stranger and punish a
friend less; Westerners treat friends and strangers more
equally.

These differences show up even in tests that have nothing
to do with social relationships. You can give people a
"Which of these things belongs together?" problem, like the
old "Sesame Street" song. Say you see a picture of a dog, a
rabbit and a carrot. Westerners tend to say the dog and the
rabbit go together because they're both animals—they're in
the same category. Easterners are more likely to say that
the rabbit and the carrot go together—because rabbits eat
carrots.

None of these questions has a right answer, of course. So
why have people in different parts of the world developed
such different thinking styles?

You might think that modern, industrial cultures would
naturally develop more individualism than agricultural ones.
But another possibility is that the kind of agriculture
matters. Rice farming, in particular, demands a great deal
of coordinated labor. To manage a rice paddy, a whole
village has to cooperate and coordinate irrigation systems.
By contrast, a single family can grow wheat.

Dr. Talhelm and colleagues used an ingenious design to
test these possibilities. They looked at rice-growing and
wheat-growing regions within China. (The people in these
areas had the same language, history and traditions; they
just grew different crops.) Then they gave people the
psychological tests I just described. The people in wheat-
growing areas looked more like WEIRD Westerners, but the
rice growers showed the more classically Eastern
communal and relational patterns. Most of the people they
tested didn't actually grow rice or wheat themselves, but
the cultural traditions of rice or wheat seemed to influence
their thinking.

This agricultural difference predicted the psychological
differences better than modernization did. Even
industrialized parts of China with a rice-growing history
showed the more communal thinking pattern.

The researchers also looked at two measures of what
people do outside the lab: divorces and patents for new
inventions. Conflict-averse communal cultures tend to have
fewer divorces than individualistic ones, but they also
create fewer individual innovations. Once again, wheat-
growing areas looked more "WEIRD" than rice-growing
ones.

In fact, Dr. Henrich suggests that rice-growing may have led
to the psychological differences, which in turn may have
sparked modernization. Aliens from outer space looking at
the Earth in the year 1000 would never have bet that
barbarian Northern Europe would become industrialized
before civilized Asia. And they would surely never have
guessed that eating sandwiches instead of stir-fry might
make the difference.

THE WIDE REACH OF BABIES' WEBS OF ADORABLENESS

We've all seen the diorama in the natural history museum:
the mighty cave men working together to bring down the
mastodon. For a long time, evolutionary biologists pointed
to guy stuff like hunting and warfare to explain the
evolution of human cooperation.

But a recent research symposium at the University of
California, San Diego, suggests that the children watching
inconspicuously at the back of the picture may have been
just as important. Caring for children may, literally, have
made us human—and allowed us to develop our distinctive
abilities for cognition, cooperation and culture. The same
sort of thinking suggests that human mothering goes way
beyond mothers.

The anthropologist Sarah Hrdy argued that human
evolution depends on the emergence of "cooperative
breeding." Chimpanzee babies are exclusively cared for by
their biological mothers; they'll fight off anyone else who
comes near their babies. We humans, in contrast, have
developed a caregiving triple threat: Grandmothers, fathers
and "alloparents" help take care of babies. That makes us
quite different from our closest primate relatives.

In my last column, I talked about the fascinating new
research on grandmothers. The fact that fathers take care
of kids may seem more obvious, but it also makes us
distinctive. Humans "pair bond" in a way that most
primates—indeed, most mammals—don't. Fathers and
mothers develop close relationships, and we are
substantially more monogamous than any of our close
primate relatives. As in most monogamous species, even
sorta-kinda-monogamous ones like us, human fathers help
to take care of babies.

Father care varies more than mother care. Even in hunter-
gatherer or forager societies, some biological fathers are
deeply involved in parenting, while others do very little. For
fathers, even more than for mothers, the very fact of
intimacy with babies is what calls out the impulse to care
for them. For example, when fathers touch and play with
babies, they produce as much oxytocin (the "tend and
befriend" hormone) as mothers do.

Humans also have "alloparents"—other adults who take
care of babies even when they aren't related to them. In
forager societies, those alloparents are often young women
who haven't yet had babies themselves. Caring for other
babies lets these women learn child-care skills while
helping the babies to survive. Sometimes mothers swap
caregiving, helping each other out. If you show pictures of
especially cute babies to women who don't have children,
the reward centers of their brains light up (though we really
didn't need the imaging studies to conclude that cute
babies are irresistible to just about everybody).
Dr. Hrdy thinks that this cooperative breeding strategy is
what let us develop other distinctive human abilities. A lot
of our human smartness is social intelligence; we're
especially adept at learning about and from other people.
Even tiny babies who can't sit up yet can smile and make
eye contact, and studies show that they can figure out what
other people want.

Dr. Hrdy suggests that cooperative breeding came first and
that the extra investment of grandmothers, fathers and
alloparents permitted the long human childhood that in
turn allowed learning and culture. In fact, social intelligence
may have been a direct result of the demands of
cooperative breeding. As anybody who has carpooled can
testify, organizing joint child care is just as cognitively
challenging as bringing down a mastodon.

What's more, Dr. Hrdy suggests that in a world of
cooperative breeding, babies became the agents of their
own survival. The weapons-grade cuteness of human
babies goes beyond their big eyes and fat cheeks. Babies
first use their social intelligence to actively draw dads and
grandmoms and alloparents into their web of
adorableness. Then they can use it to do all sorts of other
things—even take down a mastodon or two.

GRANDMOTHERS: THE BEHIND-THE-SCENES KEY TO HUMAN CULTURE?

Why do I exist? This isn't a philosophical cri de coeur; it's an
evolutionary conundrum. At 58, I'm well past menopause,
and yet I'll soldier on, with luck, for many years more. The
conundrum is more vivid when you realize that human
beings (and killer whales) are the only species where
females outlive their fertility. Our closest primate relatives
—chimpanzees, for example—usually die before their 50s,
when they are still fertile.

It turns out that my existence may actually be the key to
human nature. This isn't a megalomaniacal boast but a
new biological theory: the "grandmother hypothesis."

Twenty years ago, the anthropologist Kristen Hawkes at the
University of Utah went to study the Hadza, a forager group
in Africa, thinking that she would uncover the origins of
hunting. But then she noticed the many wiry old women
who dug roots and cooked dinners and took care of babies
(much like me, though my root-digging skills are restricted
to dividing the irises). It turned out that these old women
played an important role in providing nutrition for the group,
as much as the strapping young hunters. What's more,
those old women provided an absolutely crucial resource
by taking care of their grandchildren.

Living past menopause isn't just a product of modern
medicine. Our human life expectancy is
much longer than it used to be—but that's because far
fewer children die in infancy. Anthropologists have looked
at life spans in hunter-gatherer and forager societies, which
are like the societies we evolved in. If you make it past
childhood, you have a good chance of making it into your
60s or 70s.

There are many controversies about what happened in
human evolution. But there's no debate that there were two
dramatic changes in what biologists call our "life-history":
Besides living much longer than our primate relatives, our
babies depend on adults for much longer.

Young chimps gather as much food as they eat by the time
they are 7 or so. But even in forager societies, human
children pull their weight only when they are teenagers.
Why would our babies be helpless for so long? That long
immaturity helps make us so smart: It gives us a long
protected time to grow large brains and to use those brains
to learn about the world we live in. Human beings can learn
to adapt to an exceptionally wide variety of environments,
and those skills of learning and culture develop in the early
years of life.

But that immaturity has a cost. It means that biological
mothers can't keep babies going all by themselves: They
need help. In forager societies grandmothers provide a
substantial amount of child care as well as nutrition. Barry
Hewlett at Washington State University and his colleagues
found, much to their surprise, that grandmothers even
shared breast-feeding with mothers. Some grandmoms
just served as big pacifiers, but some, even after
menopause, could "relactate," actually producing milk.
(Though I think I'll stick to the high-tech, 21st-century
version of helping to feed my 5-month-old granddaughter
with electric pumps, freezers and bottles.)

Dr. Hawkes's "grandmother hypothesis" proposes that
grandmotherhood developed in tandem with our long
childhood. In fact, she argues that the evolution of
grandmothers was exactly what allowed our long
childhood, and the learning and culture that go with it, to
emerge. In mathematical models, you can see what
happens if, at first, just a few women live past menopause
and use that time to support their grandchildren (who, of
course, share their genes). The "grandmother trait" can
rapidly take hold and spread. And the more grandmothers
contribute, the longer the period of immaturity can be.
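
Dr. Hawkes's case rests on mathematical models. For readers who
want to see the logic for themselves, here is a minimal toy
simulation in Python (my own illustration, not Dr. Hawkes's
model; the population size, survival rates and "grandmother
bonus" are invented assumptions): a rare trait that modestly
raises grandchild survival can spread through a simulated
population within a few dozen generations.

    import random

    # Toy haploid model: each woman either carries a hypothetical
    # "grandmother" trait or not, daughters inherit their mother's
    # version, and, as a crude proxy for grandmothering, children
    # of carriers survive childhood a bit more often. Every number
    # below is made up for illustration.
    def run(generations=60, pop_size=2000, start_freq=0.01,
            base_survival=0.5, grandma_bonus=0.1, seed=1):
        random.seed(seed)
        women = [random.random() < start_freq for _ in range(pop_size)]
        freqs = []
        for _ in range(generations):
            next_gen = []
            while len(next_gen) < pop_size:
                mother = random.choice(women)
                survival = base_survival + (grandma_bonus if mother else 0.0)
                if random.random() < survival:
                    next_gen.append(mother)
            women = next_gen
            freqs.append(sum(women) / pop_size)
        return freqs

    # Print the trait's frequency every tenth generation.
    print([round(f, 2) for f in run()[::10]])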

So on Mother's Day this Sunday, as we toast mothers over
innumerable Bloody Marys and Eggs Benedicts across the
country, we might add an additional toast for the gray-
haired grandmoms behind the scenes.

SEE JANE EVOLVE: PICTURE BOOKS EXPLAIN DARWIN

Evolution by natural selection is one of the best ideas in all
of science. It predicts and explains an incredibly wide range
of biological facts. But only 60% of Americans believe
evolution is true. This may partly be due to religious
ideology, of course, but studies show that many secular
people who say they believe in evolution still don't really
understand it. Why is natural selection so hard to
understand and accept? What can we do to make it easier?

A new study in Psychological Science by Deborah Kelemen
of Boston University and colleagues helps to explain why
evolution is hard to grasp. It also suggests that we should
teach children the theory of natural selection while they are
still in kindergarten instead of waiting, as we do now, until
they are teenagers.

Scientific ideas always challenge our common sense. But
some ideas, such as the heliocentric solar system, require
only small tweaks to our everyday knowledge. We can
easily understand what it would mean for the Earth to go
around the sun, even though it looks as if the sun is going
around the Earth. Other ideas, such as relativity or quantum
mechanics, are so wildly counterintuitive that we shrug our
shoulders, accept that only the mathematicians will really
get it and fall back on vague metaphors.

But evolution by natural selection occupies a not-so-sweet
spot between the intuitive and the counterintuitive. The
trouble is that it's almost, but not really, like intentional
design, and that's confusing. Adaptation through natural
selection, like intentional design, makes things work better.
But the mechanism that leads to that result is very
different.

Intentional design is an excellent everyday theory of human
artifacts. If you wanted to explain most of the complicated
objects in my living room, you would say that somebody
intentionally designed them to provide light or warmth or a
place to put your drink—and you'd be right. Even babies
understand that human actions are "teleological"
—designed to accomplish particular goals. In earlier work,
Dr. Kelemen showed that preschoolers begin to apply this
kind of design thinking more generally, an attitude she calls
"promiscuous teleology."

By elementary-school age, children start to invoke an
ultimate God-like designer to explain the complexity of the
world around them—even children brought up as atheists.
Kids aged 6 to 10 have developed their own coherent "folk
biological" theories. They explain biological facts in terms
of intention and design, such as the idea that giraffes
develop long necks because they are trying to reach the
high leaves.

Dr. Kelemen and her colleagues thought that they might be
able to get young children to understand the mechanism of
natural selection before the alternative intentional-design
theory had become too entrenched. They gave 5- to 8-year-
olds 10-page picture books that illustrated an example of
natural selection. The "pilosas," for example, are fictional
mammals who eat insects. Some of them had thick trunks,
and some had thin ones. A sudden change in the climate
drove the insects into narrow underground tunnels. The
thin-trunked pilosas could still eat the insects, but the ones
with thick trunks died. So the next generation all had thin
trunks.

Before the children heard the story, the experimenters
asked them to explain why a different group of fictional
animals had a particular trait. Most of the children gave
explanations based on intentional design. But after the
children heard the story, they answered similar questions
very differently: They had genuinely begun to understand
evolution by natural selection. That understanding
persisted when the experimenters went back three months
later.

One picture book, of course, won't solve all the problems of
science education. But these results do suggest that
simple story books like these could be powerful intellectual
tools. The secret may be to reach children with the right
theory before the wrong one is too firmly in place.

SCIENTISTS STUDY WHY STORIES EXIST

We human beings spend hours each day telling and hearing
stories. We always have. We’ve passed heroic legends
around hunting fires, kitchen tables and the web, and told
sad tales of lost love on sailing ships, barstools and cell
phones. We’ve been captivated by Oedipus and Citizen
Kane and Tony Soprano.

Why? Why not just communicate information through
equations or lists of facts? Why is it that even when we tell
the story of our own random, accidental lives we impose
heroes and villains, crises and resolutions?

You might think that academic English and literature
departments, departments that are devoted to stories,
would have tried to answer this question or would at least
want to hear from scientists who had. But, for a long time,
literary theory was dominated by zombie ideas that had
died in the sciences. Marx and Freud haunted English
departments long after they had disappeared from
economics and psychology.

Recently, though, that has started to change. Literary
scholars are starting to pay attention to cognitive science
and neuroscience. Admittedly, some of the first attempts
were misguided and reductive – “evolutionary psychology”
just-so stories or efforts to locate literature in a particular
brain area. But the conversation between literature and
science is becoming more and more sophisticated and
interesting.

At a fascinating workshop at Stanford last month called
"The Science of Stories," scientists and scholars talked
about why reading Harlequin romances may make you
more empathetic, about how ten-year-olds create the
fantastic fictional worlds called “paracosms”, and about
the subtle psychological inferences in the great Chinese
novel, the Story of the Stone.

One of the most interesting and surprising results came
from the neuroscientist Uri Hasson at Princeton. As
techniques for analyzing brain-imaging data have gotten
more sophisticated, neuroscientists have gone beyond
simply mapping particular brain regions to particular
psychological functions. Instead, they use complex
mathematical analyses to look for patterns in the activity of
the whole brain as it changes over time. Hasson and his
colleagues have gone beyond even that. They measure the
relationship between the pattern in one person’s brain and
the pattern in another’s.

They’ve been especially interested in how brains respond to
stories, whether they’re watching a Clint Eastwood movie,
listening to a Salinger short story, or just hearing
someone’s personal “How We Met” drama. When different
people watched the same vivid story as they lay in the
scanner -- “The Good, the Bad and the Ugly”, for instance, --
their brain activity unfolded in a remarkably similar way.
Sergio Leone really knew how to get into your head.

In another experiment they recorded the pattern of one
person’s brain activity as she told a vivid personal story.
Then someone else listened to the story on tape and they
recorded his brain activity. Again, there was a remarkable
degree of correlation between the two brain patterns. The
storyteller, like Leone, had literally gotten into the listener's
brain and altered it in predictable ways. But more than that,
she had made the listener’s brain match her own brain.

The more tightly coupled the brains became, the more the
listener said that he understood the story. This coupling
effect disappeared if you scrambled the sentences in the
story. There was something about the literary coherence of
the tale that seemed to do the work.
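
The "coupling" in these studies is, at bottom, a measure of how
similar two activity time courses are. The real analyses are far
more sophisticated, but a minimal sketch (with invented signals,
and a plain correlation standing in for the lab's methods)
conveys the basic idea:

    from math import sqrt

    # Pearson correlation between two time series: a crude stand-in
    # for the speaker-listener "coupling" described above.
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sqrt(sum((a - mx) ** 2 for a in x))
        sy = sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical signals: the listener's activity tracks the
    # speaker's with a little noise, so the correlation is high.
    speaker = [0.1, 0.4, 0.9, 0.7, 0.2, -0.3, -0.6, -0.1, 0.3, 0.8]
    listener = [0.0, 0.5, 0.8, 0.6, 0.1, -0.4, -0.5, -0.2, 0.4, 0.7]
    print(round(pearson(speaker, listener), 2))

Scrambling the story, in this picture, is like shuffling one of
the series: the moment-to-moment match, and with it the
"coupling," largely disappears.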

One of my own favorite fictions, Star Trek, often includes
stories about high-tech telepathic mind-control. Some alien
has special powers that allow it to shape another
person's brain activity to match its own, or that produces
brains that are so tightly linked that you can barely
distinguish them. Hasson’s results suggest that we lowly
humans are actually as good at mind-melding as the
Vulcans or the Borg. We just do it with stories.

THE KID WHO WOULDN'T LET GO OF 'THE DEVICE'

How does technology reshape our children’s minds and
brains? Here is a disturbing story from the near future.

They gave her The Device when she was only two. It
worked through a powerful and sophisticated optic nerve
brain-mind interface, injecting it’s content into her cortex.
By the time she was five, she would immediately be swept
away into the alternate universe that the device created.
Throughout her childhood, she would become entirely
oblivious to her surroundings in its grip, for hours at a time.
She would surreptitiously hide it under her desk at school,
and reach for it as soon as she got home. By
adolescence, the images of the device – a girl entering a
ballroom, a man dying on a battlefield – were more vivid to
her than her own memories.
As a grown woman, she remained addicted to The Device.
It dominated every room of her house, even the bathroom.
Its images filled her head even when she made love. When
she traveled, her first thought was to be sure that she had
access to The Device and she was filled with panic at the
thought that she would have to spend a day without it.
When her child broke his arm, she paused to make sure
that The Device would be with her in the emergency room.
Even sadder, as soon as her children were old enough she
did her very best to connect them to The Device, too.

The psychologists and neuroscientists showed just how
powerful The Device had become. Psychological studies
showed that its users literally could not avoid entering its
world: the second they made contact, their brains
automatically and involuntarily engaged with it. What's more, large
portions of their brains that had originally been designed
for other purposes had been hijacked to the exclusive
service of The Device.

Well, anyway, I hope that this is a story of the near future. It
certainly is a story of the near past. The Device, you see, is
the printed book, and the story is my autobiography.

Socrates was the first to raise the alarm about this
powerful new technology – he argued, presciently, that the
rise of reading would destroy the old arts of memory and
discussion.

The latest Device to interface with my retina is "It's
Complicated: The Social Lives of Networked Teens" by
Danah Boyd at NYU and Microsoft Research. Digital social
network technologies play as large a role in the lives of
current children as books once did for me. Boyd spent
thousands of hours with teenagers from many different
backgrounds, observing the way they use technology and
talking to them about what technology meant to them.
Her conclusion is that young people use social media to do
what they have always done – establish a community of
friends and peers, distance themselves from their parents,
flirt and gossip, bully, experiment, rebel. At the same time,
she argues that the technology does make a difference,
just as the book, the printing press and the telegraph did.
An ugly taunt that once dissolved in the fetid locker-room
air can travel across the world in a moment, and linger
forever. Teenagers must learn to reckon with and navigate
those new aspects of our current technologies, and for the
most part that’s just what they do.

Boyd thoughtfully makes the case against both the
alarmists and the techtopians. The kids are all right, or at
least as all right as kids have ever been.

So why all the worry? Perhaps it’s because of the inevitable
difference between looking forward towards generational
changes and looking back at them. As the parable of The
Device illustrates we always look at our children’s future
with equal parts unjustified alarm and unjustified hope –
utopia and dystopia. We look at our own past with wistful
nostalgia. It may be hard to believe but Boyd’s book
suggests that someday even Facebook will be a fond
memory.

WHY YOU'RE NOT AS CLEVER AS A 4-YEAR-OLD

Are young children stunningly dumb or amazingly smart?
We usually think that children are much worse at solving
problems than we are. After all, they can’t make lunch or tie
their shoes, let alone figure out long division or ace the
SAT’s. But, on the other hand, every parent finds herself
exclaiming “Where did THAT come from!” all day long.

So we also have a sneaking suspicion that children might
be a lot smarter than they seem. A new study from our lab
that just appeared in the journal Cognition shows that four-
year-olds may actually solve some problems better than
grown-ups do.

Chris Lucas, Tom Griffiths, Sophie Bridgers and I wanted to
know how preschoolers learn about cause and effect. We
used a machine that lights up when you put some
combinations of blocks on it and not others. Your job is to
figure out which blocks make it go. (Actually, we secretly
activate the machine with a hidden pedal, but fortunately
nobody ever guesses that.)

Try it yourself. Imagine that you, a clever grown-up, see me
put a round block on the machine three times. Nothing
happens. But when I put a square block on next to the
round one the machine lights up. So the square one makes
it go and the round one doesn’t, right?

Well, not necessarily. That’s true if individual blocks light up
the machine. That’s the obvious idea and the one that
grown-ups always think of first. But the machine could also
work in a more unusual way. It could be that it takes a
combination of two blocks to make the machine go, the
way that my annoying microwave will only go if you press
both the “cook” button and the “start” button. Maybe the
square and round blocks both contribute, but they have to
go on together.

Suppose I also show you that a triangular block does
nothing and a rectangular one does nothing, but the
machine lights up when you put them on together. That
should tell you that the machine follows the unusual
combination rule instead of the obvious individual block
rule. Will that change how you think about the square and
round blocks?

We showed patterns like these to kids ages 4 and 5 as well
as to Berkeley undergraduates. First we showed them the
triangle/rectangle kind of pattern, which suggested that the
machine might use the unusual combination rule. Then we
showed them the ambiguous round/square kind of pattern.

The kids got it. They figured out that the machine might
work in this unusual way, and so they put both
blocks on together. But the best and brightest students
acted as if the machine would always follow the common
and obvious rule, even when we showed them that it might
work differently.

Does this go beyond blocks and machines? We think it
might reflect a much more general difference between
children and adults. Children might be especially good at
thinking about unlikely possibilities. After all, grown-ups
know a tremendous amount about how the world works. It
makes sense that we mostly rely on what we already know.

In fact, computer scientists talk about two different kinds
of learning and problem solving – “exploit” versus “explore.”
In “exploit” learning we try to quickly find the solution that
is most likely to work right now. In “explore” learning we try
out lots of possibilities, including unlikely ones, even if they
may not have much immediate pay-off. To thrive in a
complicated world you need both kinds of learning.

A particularly effective strategy is to start off exploring and
then narrow in to exploit. Childhood, especially our
unusually long and helpless human childhood, may be
evolution’s way of balancing exploration and exploitation.
Grown-ups stick with the tried and true; 4-year-olds have
the luxury of looking for the weird and wonderful.
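
Computer scientists often make the explore/exploit trade-off
concrete with "bandit" problems. Here is a minimal sketch in
Python (my own illustration, with invented payoffs, not anything
from the Cognition paper): an agent that begins by exploring at
random and gradually narrows in on exploiting whichever option
has paid off best, the strategy described above.

    import random

    # Two options with hypothetical payoff probabilities; the
    # second, less obvious one is actually better.
    def pull(arm):
        return random.random() < (0.3 if arm == 0 else 0.6)

    def run(trials=500, epsilon=1.0, decay=0.99):
        counts, wins, total = [0, 0], [0, 0], 0
        for _ in range(trials):
            # Explore with probability epsilon, otherwise exploit
            # the option with the best estimated payoff so far.
            if random.random() < epsilon or 0 in counts:
                arm = random.randrange(2)
            else:
                arm = max(range(2), key=lambda a: wins[a] / counts[a])
            reward = pull(arm)
            counts[arm] += 1
            wins[arm] += reward
            total += reward
            epsilon *= decay  # start exploring, narrow in to exploit
        return total

    random.seed(0)
    print("rewards earned:", run())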

ARE SCHOOLS ASKING TO DRUG KIDS FOR BETTER TEST SCORES?

In the past two decades, the number of children diagnosed
with Attention Deficit Hyperactivity Disorder has nearly
doubled. One in five American boys receives a diagnosis by
age 17. More than 70% of those who are diagnosed
—millions of children—are prescribed drugs.

A new book, "The ADHD Explosion" by Stephen Hinshaw
and Richard Scheffler, looks at this extraordinary increase.
What's the explanation? Some rise in environmental toxins?
Worse parenting? Better detection?

Many people have suspected that there is a relationship
between the explosion in ADHD diagnoses and the push by
many states, over this same period, to evaluate schools
and teachers based on test scores. But how could you tell?
It could just be a coincidence that ADHD diagnoses and
high-stakes testing have both increased so dramatically.
Drs. Hinshaw and Scheffler—both of them at the University
of California, Berkeley, my university—present some striking
evidence that the answer lies, at least partly, in changes in
educational policy.

Drs. Hinshaw and Scheffler used a kind of "natural
experiment." Different parts of the country introduced new
educational policies at different times. The researchers
looked at the relationship between when a state introduced
the policies and the rate of ADHD diagnoses. They found
that right after the policies were introduced, the diagnoses
increased dramatically. Moreover, the rise was particularly
sharp for poor children in public schools.
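
The logic of that natural experiment can be made concrete with a
toy calculation: compare how diagnosis rates changed in states
that adopted the policies with how they changed, over the same
period, in states that had not. The rates below are invented
placeholders (the book reports the real figures), and the
calculation is a bare-bones version of what economists call a
difference-in-differences estimate, not the authors' actual
analysis.

    # Invented diagnosis rates (share of school-age children),
    # before and after a given policy window.
    adopters = {"State A": (0.050, 0.075), "State B": (0.045, 0.068)}
    others = {"State C": (0.048, 0.052), "State D": (0.051, 0.056)}

    def avg_change(states):
        changes = [after - before for before, after in states.values()]
        return sum(changes) / len(changes)

    # The policy's association with diagnoses, over and above the
    # background trend shared by all states.
    print(round(avg_change(adopters) - avg_change(others), 3))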

The authors suggest that when schools are under pressure
to produce high test scores, they become motivated,
consciously or unconsciously, to encourage ADHD
diagnoses—either because the drugs allow low-performing
children to score better or because ADHD diagnoses can
be used to exclude children from testing. They didn't see
comparable increases in places where the law kept school
personnel from recommending ADHD medication to
parents.

These results have implications for the whole way we think
about ADHD. We think we know the difference between a
disease and a social problem. A disease happens when a
body breaks or is invaded by viruses or bacteria. You give
patients the right treatment, and they are cured. A social
problem—poverty, illiteracy, crime—happens when
institutions fail, when instead of helping people to thrive
they make them miserable.

Much debate over ADHD has focused on whether it is a
disease or a problem, "biological" or "social." But the
research suggests that these are the wrong categories.
Instead, it seems there is a biological continuum among
children. Some have no trouble achieving even "unnatural"
levels of highly focused attention, others find it nearly
impossible to focus attention at all, and most are
somewhere in between.

That variation didn't matter much when we were hunters or
farmers. But in our society, it matters terrifically. School is
more essential for success, and a particular kind of highly
focused attention is more essential for school.

Stimulant drugs don't "cure" a disease called ADHD, the way
that antibiotics cure pneumonia. Instead, they seem to shift
attentional abilities along that continuum. They make
everybody focus better, though sometimes with serious
costs. For children at the far end of the continuum, the
drugs may help make the difference between success and
failure, or even life and death. But the drugs also lead to
more focused attention, even in the elite college students
who pop Adderall before an exam, risking substance abuse
in the mad pursuit of even better grades.
For some children the benefits of the drugs may outweigh
the drawbacks, but for many more the drugs don't help and
may harm. ADHD is both biological and social, and altering
medical and educational institutions could help children
thrive. Behavioral therapies can be very effective, but our
medical culture makes it much easier to prescribe a pill.
Instead of drugging children's brains to get them to fit our
schools, we could change our schools to accommodate a
wider range of children's brains.

THE PSYCHEDELIC ROAD TO OTHER CONSCIOUS STATES

How do a few pounds of gray goo in our skulls create our
conscious experience—the blue of the sky, the tweet of the
birds? Few questions are so profound and important—or so
hard. We are still very far from an answer. But we are
learning more about what scientists call "the neural
correlates of consciousness," the brain states that
accompany particular kinds of conscious experience.

Most of these studies look at the sort of conscious
experiences that people have in standard fMRI brain-scan
experiments or that academics like me have all day long:
bored woolgathering and daydreaming punctuated by
desperate bursts of focused thinking and problem-solving.
We've learned quite a lot about the neural correlates of
these kinds of consciousness.

But some surprising new studies have looked for the
correlates of more exotic kinds of consciousness.
Psychedelic drugs such as LSD were designed to be used
in scientific research and, potentially at least, as therapy for
mental illness. But of course, those drugs long ago
escaped from the lab into the streets. They disappeared
from science as a result. Recently, though, scientific
research on hallucinogens has been making a comeback.
Robin Carhart-Harris at Imperial College London and his
colleagues review their work on psychedelic neuroscience
in a new paper in the journal Frontiers in Neuroscience.
Like other neuroscientists, they put people in fMRI brain
scanners. But these scientists gave psilocybin—the active
ingredient in consciousness-altering "magic mushrooms"—
to volunteers with experience with psychedelic drugs.
Others got a placebo. The scientists measured both
groups' brain activity.

Normally, when we introspect, daydream or reflect, a group
of brain areas called the "default mode network" is
particularly active. These areas also seem to be connected
to our sense of self. Another brain-area group is active
when we consciously pay attention or work through a
problem. In both rumination and attention, parts of the
frontal cortex are particularly involved, and there is a lot of
communication and coordination between those areas and
other parts of the brain.

Some philosophers and neuroscientists have argued that
consciousness itself is the result of this kind of
coordinated brain activity. They think consciousness is
deeply connected to our sense of the self and our
capacities for reflection and control, though we might have
other fleeting or faint kinds of awareness.

But what about psychedelic consciousness? Far from faint
or fleeting, psychedelic experiences are more intense, vivid
and expansive than everyday ones. So you might expect to
see that the usual neural correlates of consciousness
would be especially active when you take psilocybin. That's
just what the scientists predicted. But consistently, over
many experiments, they found the opposite. On psilocybin,
the default mode network and frontal control systems were
actually much less active than normal, and there was much
less coordination between different brain areas. In fact,
"shroom" consciousness looked neurologically like the
inverse of introspective, reflective, attentive
consciousness.

The researchers also got people to report on the quality of
their psychedelic experiences. The more intense the
experiences were, and particularly the more that people
reported that they had lost the sense of a boundary
between themselves and the world, the more they showed
the distinctive pattern of deactivation.

Dr. Carhart-Harris and colleagues suggest that the common
theory linking consciousness and control is wrong.
Instead, much of the brain activity accompanying workaday
consciousness may be devoted to channeling, focusing
and even shutting down experience and information, rather
than creating them. The Carhart-Harris team points to other
uncontrolled but vivid kinds of consciousness such as
dreams, mystical experiences, early stages of psychosis
and perhaps even infant consciousness as parallels to
hallucinogenic drug experience.

To paraphrase Hamlet, it turns out that there are more, and
stranger, kinds of consciousness than are dreamt of in our
philosophy.

TIME TO RETIRE THE SIMPLICITY OF NATURE VS. NURTURE

Are we moral by nature or as a result of learning and
culture? Are men and women “hard-wired” to think
differently? Do our genes or our schools make us
intelligent? These all seem like important questions, but
maybe they have no good scientific answer.

Once, after all, it seemed equally important to ask whether
light was a wave or a particle, or just what arcane force
made living things different from rocks. Science didn’t
answer these questions—it told us they were the wrong
questions to ask. Light can be described either way; there
is no single cause of life.

Every year on the Edge website the intellectual impresario
and literary agent John Brockman asks a large group of
thinkers to answer a single question. (Full disclosure:
Brockman Inc. is my agency.) This year, the question is
about which scientific ideas should be retired.

Surprisingly, many of the writers gave a similar answer:
They think that the familiar distinction between nature and
nurture has outlived its usefulness.

Scientists who focus on the “nature” side of the debate
said that it no longer makes sense to study “culture” as an
independent factor in human development. Scientists who
focus on learning, including me, argued that “innateness”
(often a synonym for nature) should go. But if you read
these seemingly opposed answers more closely, you can
see a remarkable degree of convergence.

Scientists have always believed that the human mind must
be the result of some mix of genes and environment, innate
structure and learning, evolution and culture. But it still
seemed that these were different causal forces that
combined to shape the human mind, and we could assess
the contribution of each one separately. After all, you can’t
have water without both hydrogen and oxygen, but it’s
straightforward to say how the two elements are
combined.

As many of the writers in the Edge symposium point out,
however, recent scientific advances have made the very
idea of these distinctions more dubious.
One is the explosion of work in the field of epigenetics. It
turns out that there is a long and circuitous route, with
many feedback loops, from a particular set of genes to a
feature of the adult organism. Epigenetics explores the way
that different environments shape this complex process,
including whether a gene is expressed at all.

A famous epigenetic study looked at two different strains
of mice. The mice in each strain were genetically identical
to each other. Normally, one strain is much smarter than
the other. But then the experimenters had the mothers of
the smart strain raise the babies of the dumb strain. The
babies not only got much smarter, they passed this
advantage on to the next generation.

So were the mice’s abilities innate or learned? The result of
nature or nurture? Genes or environment? The question just
doesn’t make sense.

New theories of human evolution and culture have also
undermined these distinctions. The old evolutionary
psychology suggested that we had evolved with very
specific “modules”—finely calibrated to a particular Stone
Age environment.

But new research has led biologists to a different view. We
didn’t adapt to a particular Stone Age environment. We
adapted to a newly unpredictable and variable world. And
we did it by developing new abilities for cultural
transmission and change. Each generation could learn new
skills for coping with new environments and could pass
those skills on to the next generation.

As the anthropologist Pascal Boyer points out in his
answer, it’s tempting to talk about “the culture” of a group
as if this is some mysterious force outside the biological
individual or independent of evolution. But culture is a
biological phenomenon. It’s a set of abilities and practices
that allow members of one generation to learn and change
and to pass the results of that learning on to the next
generation. Culture is our nature, and the ability to learn
and change is our most important and fundamental
instinct.

THE SURPRISING PROBABILITY GURUS WEARING DIAPERS

Two new studies in the journal Cognition describe how
some brilliant decision makers expertly use probability for
profit.

But you won't meet these economic whizzes at the World
Economic Forum in Switzerland this month. Unlike the
"Davos men," these analysts require a constant supply of
breasts, bottles, shiny toys and unconditional adoration
(well, maybe not so unlike the Davos men), though some
of them make do with bananas. The quants in question are
10-month-old babies and assorted nonhuman primates.

Ordinary grown-ups are terrible at explicit probabilistic and
statistical reasoning. For example, how likely is it that there
will be a massive flood in America this year? How about an
earthquake leading to a massive flood in California? People
illogically give the first event a lower likelihood than the
second. But even babies and apes turn out to have
remarkable implicit statistical abilities.
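
The flood question is a version of what psychologists call the
conjunction fallacy, and the underlying arithmetic is simple: a
combined event ("earthquake and flood") can never be more likely
than either of its parts alone. A toy calculation with made-up
numbers shows why:

    # Invented probabilities, purely to illustrate the logic.
    p_quake = 0.10                # a massive California quake this year
    p_flood_given_quake = 0.30    # chance such a quake causes a massive flood
    p_quake_and_flood = p_quake * p_flood_given_quake

    # A massive flood can also happen in other ways, so its total
    # probability includes the quake scenario plus everything else.
    p_flood = p_quake_and_flood + 0.05
    # The conjunction is always the smaller number.
    print(round(p_quake_and_flood, 2), round(p_flood, 2))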

Stephanie Denison at the University of Waterloo in Canada
and Fei Xu at the University of California, Berkeley, showed
babies two large transparent jars full of lollipop-shaped
toys. Some of the toys had plain black tops while some
were pink with stars, glitter and blinking lights. Of course,
economic acumen doesn't necessarily imply good taste,
and most of the babies preferred pink bling to basic black.
The two jars had different proportions of black and pink
toys. For example, one jar contained 12 pink and four black
toys. The other jar had 12 pink toys too but also contained
36 black toys. The experimenter took out a toy from one jar,
apparently at random, holding it by the "pop" so that the
babies couldn't see what color it was. Then she put it in an
opaque cup on the floor. She took a toy from the second jar
in the same way and put it in another opaque cup. The
babies crawled toward one cup or the other and got the toy.
(Half the time she put the first cup in front of the first jar,
half the time she switched them around.)

What should you do in this situation if you really want pink
lollipops? The first cup is more likely to have a pink pop
inside than the second (the odds are 3-1 versus 1-3), even
though both jars have exactly the same number of pink
toys inside. It isn't a sure thing, but that is where you would
place your bets.
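
For readers who want the odds spelled out, the jar contents come
straight from the study; the snippet below is nothing more than
that arithmetic.

    # Chance of drawing a pink toy from each jar.
    def p_pink(pink, black):
        return pink / (pink + black)

    print(p_pink(12, 4))    # 0.75, i.e., odds of 3 to 1 in favor of pink
    print(p_pink(12, 36))   # 0.25, i.e., odds of 1 to 3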

So did the babies. They consistently crawled to the cup that
was more likely to have a pink payoff. In a second
experiment, one jar had 16 pink and 4 black toys, while the
other had 24 pink and 96 black ones. The second jar
actually held more pink toys than the first one, but the cup
was less likely to hold a pink toy. The babies still went for
the rational choice.

In the second study, Hannes Rakoczy at the University of
Göttingen in Germany and his colleagues did a similar
experiment with a group of gorillas, bonobos, chimps and
orangutans. They used banana and carrot pieces, and the
experimenter hid the food in one or the other hand, not a
cup. But the scientists got the same results: The apes
chose the hand that was more likely to hold a banana.

So it seems that we're designed with a basic understanding
of probability. The puzzle is this: Why are grown-ups often
so stupid about probabilities when even babies and chimps
can be so smart?

This intuitive, unconscious statistical ability may be
completely separate from our conscious reasoning. But
other studies suggest that babies' unconscious
understanding of numbers may actually underpin their
ability to explicitly learn math later. We don't usually even
try to teach probability until high school. Maybe we could
exploit these intuitive abilities to teach children, and adults,
to understand probability better and to make better
decisions as a result.

2013

WHAT CHILDREN REALLY THINK ABOUT MAGIC

This week we will counter the cold and dark with the
warmth and light of fantasy, fiction and magic—from Santa
to Scrooge, from Old Father Time and Baby New Year to the
Three Kings of Epiphany. Children will listen to tales of
dwarves and elves and magic rings in front of an old-
fashioned fire or watch them on a new-fashioned screen.

But what do children really think about magic? The
conventional wisdom is that young children can’t
discriminate between the real and the imaginary, fact and
fantasy. More recently, however, researchers like
Jacqueline Woolley at the University of Texas and Paul
Harris at Harvard have shown that even the youngest
children understand magic in surprisingly sophisticated
ways.

For instance, Dr. Woolley showed preschoolers a box of
pencils and an empty box. She got them to vividly imagine
that the empty box was full of pencils. The children
enthusiastically pretended, but they also said that if
someone wanted pencils, they should go to the real box
rather than the imagined one.

Even young children make a sort of metaphysical
distinction between two worlds. One is the current, real
world with its observable events, incontrovertible facts and
causal laws. The other is the world of pretense and
possibility, fiction and fantasy.

Children understand the difference. They know that the
beloved imaginary friend isn’t actually real and that the
terrifying monster in the closet doesn’t actually exist
(though that makes them no less beloved or scary). But
children do spend more time than we do thinking about the
world of imagination. They don’t actually confuse the
fantasy world with the real one—they just prefer to hang out
there.

Why do children spend so much time thinking about wild
possibilities? We humans are remarkably good at
imagining ways the world could be different and working
out the consequences. Philosophers call it “counterfactual”
thinking, and it’s one of our most valuable abilities.

Scientists work out what would happen if the physical
world were different, and novelists work out what would
happen if the social and psychological world were different.
Scientific hypotheses and literary fictions both consider the
consequences of small tweaks to our models of the world;
mythologies consider much larger changes. But the
fundamental psychology is the same. Young children seem
to practice this powerful way of thinking in their everyday
pretend play.

For scientists and novelists and 3-year-olds to be good at
counterfactual reasoning, though, they must be able to
preserve a bright line between imaginary possibilities and
current reality.

But, particularly as they get older, children also begin to
think that this bright line could be crossed. They recognize
the possibility of “real” magic. It is conceivable to them, as
it is to adults, that somehow the causal laws could be
suspended, or creatures from the imaginary world could be
transported to the real one. Dr. Harris did an experiment
where children imagined a monster in the box instead of
pencils. They still said that the monster wasn’t real, but
when the experimenter left the room, they moved away
from the box—just in case. Santa Claus is confusing
because he is a fiction who at least seems to leave an
observable trail of disappearing cookies and delivered
presents.

The great conceptual advance of science was to reject this
second kind of magic, the kind that bridges the real and the
imagined, whether it is embodied in religious
fundamentalism or New Age superstition. But at the same
time, like the 3-year-olds, scientists and artists are united in
their embrace of both reality and possibility, and their
capacity to discriminate between them. There is no conflict
between celebrating the magic of fiction, myth and
metaphor and celebrating science. Counterfactual thinking
is an essential part of science, and science requires and
rewards imagination as much as literature or art.

TRIAL AND ERROR IN TODDLERS AND SCIENTISTS

The Gopnik lab is rejoicing. My student Caren Walker and I
have just published a paper in the well-known journal
Psychological Science. Usually when I write about
scientific papers here, they sound neat and tidy. But since
this was our own experiment, I can tell you the messy
inside story too.

First, the study—and a small IQ test for you. Suppose you
see an experimenter put two orange blocks on a machine,
and it lights up. She then puts a green one and a blue one
on the same machine, but nothing happens. Two red ones
work, a black and white combination doesn't. Now you have
to make the machine light up yourself. You can choose two
purple blocks or a yellow one and a brown one.

But this simple problem actually requires some very
abstract thinking. It's not that any particular block makes
the machine go. It's the fact that the blocks are the same
rather than different. Other animals have a very hard time
understanding this. Chimpanzees can get hundreds of
examples and still not get it, even with delicious bananas
as a reward. As a clever (or even not so clever) reader of
this newspaper, you'd surely choose the two purple blocks.
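
One way to see what has to be grasped here is to write the
relational rule down explicitly. The sketch below is only an
illustration of the abstract "same" relation (it is not the
lab's analysis); the block colors are the ones in the example
above.

    # The relational rule the toddlers inferred: the machine lights
    # up when the two blocks match, whatever their particular color.
    def same_rule(a, b):
        return a == b

    evidence = [
        (("orange", "orange"), True),    # two orange blocks: lights up
        (("green", "blue"), False),
        (("red", "red"), True),
        (("black", "white"), False),
    ]

    # The rule fits every observation, whichever colors appear.
    print(all(same_rule(*pair) == lit for pair, lit in evidence))

    # So the right choice is the matching pair: the two purple blocks.
    print(same_rule("purple", "purple"))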

The conventional wisdom has been that young children
also can't learn this kind of abstract logical principle.
Scientists like Jean Piaget believed that young children's
thinking was concrete and superficial. And in earlier
studies, preschoolers couldn't solve this sort of
"same/different" problem.

But in those studies, researchers asked children to say
what they thought about pictures of objects. Children often
look much smarter when you watch what they do instead
of relying on what they say.

We did the experiment I just described with 18-to-24-
month-olds. And they got it right, with just two examples.
The secret was showing them real blocks on a real
machine and asking them to use the blocks to make the
machine go.
Tiny toddlers, barely walking and talking, could quickly
learn abstract relationships. And they understood
"different" as well as "same." If you reversed the examples
so that the two different blocks made the machine go, they
would choose the new, "different" pair.

The brilliant scientists of the Gopnik lab must have realized
that babies could do better than prior research suggested
and so designed this elegant experiment, right? Not
exactly. Here's what really happened: We were doing a
totally different experiment.

My student Caren wanted to see whether getting children
to explain an event made them think about it more
abstractly. We thought that a version of the "same block"
problem would be tough for 4-year-olds and having them
explain might help. We actually tried a problem a bit
simpler than the one I just described, because the
experimenter put the blocks on the machine one at a time
instead of simultaneously. The trouble was that the 4-year-
olds had no trouble at all! Caren tested 3-year-olds, then
2-year-olds and finally the babies, and they got it too.

We sent the paper to the journal. All scientists occasionally
(OK, more than occasionally) curse journal editors and
reviewers, but they contributed to the discovery too. They
insisted that we do the more difficult simultaneous version
of the task with babies and that we test "different" as well
as "same." So we went back to the lab, muttering that the
"different" task would be too hard. But we were wrong
again.

Now we are looking at another weird result. Although the
4-year-olds did well on the easier sequential task, in a study
we're still working on, they actually seem to be doing worse
than the babies on the harder simultaneous one. So there's
a new problem for us to solve.
Scientists legitimately worry about confirmation bias, our
tendency to look for evidence that fits what we already
think. But, fortunately, learning is most fun, for us and 18-
month-olds too, when the answers are most surprising.

Scientific discoveries aren't about individual geniuses
miraculously grasping the truth. Instead, they come when
we all chase the unexpected together.

GRATITUDE FOR THE COSMIC MIRACLE OF A NEWBORN CHILD

Last week I witnessed three miracles. These miracles
happen thousands of times a day but are no less
miraculous for that. The first was the miracle of life. Amino
acids combined to make just the right proteins, which sent
out instructions to make just the right neurons, which made
just the right connections to other neurons. And that
brought a new, utterly unique, unprecedented
consciousness—a new human soul—into the world.

Georgiana, my newborn granddaughter, already looks out at
the world with wide-eyed amazement.

The second was the miracle of learning. This new
consciousness can come to encompass the whole world.
Georgiana is already tracking the movements of her toy
giraffe. She’s learning to recognize her father’s voice and
the scent of her mother’s breast. And she’s figuring out that
the cries of “She’s so sweet!” are English rather than
Japanese.

In just 20 years she may know about quarks and leptons,
black holes and red dwarves, the beginning and end of the
universe. Maybe by then she’ll know more than we do about
how a newborn mind can learn so much, so quickly and so
well. Her brain, to borrow from Emily Dickinson, is wider
than the sky, deeper than the sea.

Georgie looks most intently at the admiring faces of the
people who surround her. She is already focused on
learning what we’re like. And that leads to the most
important miracle of all: the miracle of love.

The coordination of amino acids and neurons that brought
Georgiana to life is a stunning evolutionary achievement.
But so is the coordination of human effort and ingenuity
and devotion that keeps her alive and thriving.

Like all human babies, she is so heartbreakingly fragile, so
helpless. And yet that very fragility almost instantly calls
out a miraculous impulse to take care of her. Her mom and
dad are utterly smitten, of course, not to mention her
grandmom. But it goes far beyond just maternal hormones
or shared genes.

The little hospital room is crowded with love—from two-
year-old brother Augie to 70-year-old Grandpa, from Uncle
Nick and Aunt Margo to the many in-laws and girlfriends
and boyfriends. The friends who arrive with swaddling
blankets, the neighbors who drop off a cake, the nurses
and doctors, the baby sitters and child-care teachers—all
are part of a network of care as powerful as the network of
neurons.

That love and care will let Georgiana’s magnificent human
brain, mind and consciousness grow and change, explore
and create. The amino acids and proteins miraculously
beat back chaos and create the order of life. But our ability
to care for each other and our children—our capacity for
culture—also creates miraculous new kinds of order: the
poems and pictures of art, the theories and technologies of
science.
It may seem that science depicts a cold, barren, indifferent
universe—that Georgiana is just a scrap of carbon and
water on a third-rate planet orbiting an unimpressive sun in
an obscure galaxy. And it is true that, from a cosmic
perspective, our whole species is as fragile, as evanescent,
as helpless, as tiny as she is.

But science also tells us that the entirely natural miracles
of life, learning and love are just as real as the cosmic chill.
When we look at them scientifically, they turn out to be
even more wonderful, more genuinely awesome, than we
could have imagined. Like little Georgie, their fragility just
makes them more precious.

Of course, on this memorable Thanksgiving my heart would
be overflowing with gratitude for this one special, personal,
miracle baby even if I’d never heard of amino acids,
linguistic discrimination or non-kin investment. But I’ll also
pause to give thanks for the general human miracle. And I’ll
be thankful for the effort, ingenuity and devotion of the
scientists who help us understand and celebrate it.

THE BRAIN'S CROWDSOURCING SOFTWARE

Over the past decade, popular science has been suffering
from neuromania. The enthusiasm came from studies
showing that particular areas of the brain “light up” when
you have certain thoughts and experiences. It’s mystifying
why so many people thought this explained the mind. What
have you learned when you say that someone’s visual areas
light up when they see things?

People still seem to be astonished at the very idea that the
brain is responsible for the mind—a bunch of grey goo
makes us see! It is astonishing. But scientists knew that a
century ago; the really interesting question now is how the
grey goo lets us see, think and act intelligently. New
techniques are letting scientists understand the brain as a
complex, dynamic, computational system, not just a
collection of individual bits of meat associated with
individual experiences. These new studies come much
closer to answering the “how” question.

Take a study in the journal Nature this year by Stefano Fusi
of Columbia University College of Physicians and
Surgeons, Earl K. Miller of the Massachusetts Institute of
Technology and their colleagues. Fifty years ago David
Hubel and Torsten Wiesel made a great Nobel Prize-
winning discovery. They recorded the signals from
particular neurons in cats’ brains as the animals looked at
different patterns. The neurons responded selectively to
some images rather than others. One neuron might only
respond to lines that slanted right, another only to those
slanting left.

But many neurons don’t respond in this neatly selective
way. This is especially true for the neurons in the parts of
the brain that are associated with complex cognition and
problem-solving, like the prefrontal cortex. Instead, these
cells were a mysterious mess—they responded
idiosyncratically to different complex collections of
features. What were these neurons doing?

In the new study the researchers taught monkeys to
remember and respond to one shape rather than another
while they recorded their brain activity. But instead of just
looking at one neuron at a time, they recorded the activity
of many prefrontal neurons at once. A number of them
showed weird, messy “mixed selectivity” patterns. One
neuron might respond when the monkey remembered just
one shape or only when it recognized the shape but not
when it recalled it, while a neighboring cell showed a
different pattern.
In order to analyze how the whole group of cells worked, the
researchers turned to the techniques of computer
scientists who are trying to design machines that can learn.
Computers aren’t made of carbon, of course, let alone
neurons. But they have to solve some of the same
problems, like identifying and remembering patterns. The
techniques that work best for computers turn out to be
remarkably similar to the techniques that brains use.

Essentially, the researchers found the brain was using the
same general sort of technique that Google uses for its
search algorithm. You might think that the best way to rank
search results would be to pick out a few features of each
Web page like “relevance” or “trustworthiness”—in the
same way as the neurons picked out whether an edge
slanted right or left. Instead, Google does much better by
combining all the many, messy, idiosyncratic linking
decisions of individual users.

With neurons that detect just a few features, you can
capture those features and combinations of features, but
not much more. To capture more complex patterns, the
brain does better by amalgamating and integrating
information from many different neurons with very different
response patterns. The brain crowd-sources.

Scientists have long argued that the mind is more like a
general software program than like a particular hardware
set-up. The new combination of neuroscience and
computer science doesn’t just tell us that the grey goo lets
us think, or even exactly where that grey goo is. Instead, it
tells us what programs it runs.

WORLD SERIES RECAP: MAY BASEBALL'S IRRATIONAL HEART KEEP ON BEATING

The last 15 years have been baseball's Age Of
Enlightenment. The quants and nerds brought reason and
science to the dark fortress of superstition and mythology
that was Major League Baseball. The new movement was
pioneered by the brilliant Bill James (adviser to this week's
World Champion Red Sox), implemented by Billy Beane (the
fabled general manager of my own Oakland Athletics) and
immortalized in the book and movie "Moneyball."

Over this same period, psychologists have discovered
many kinds of human irrationality. Just those biases and
foibles that are exploited, in fact, by the moneyball
approach. So if human reason has changed how we think
about baseball, it might be baseball's turn to remind us of
the limits of human reason.

We overestimate the causal power of human actions. So, in
the old days, managers assumed that gutsy, active base
stealers caused more runs than they actually do, and they
discounted the more passive players who waited for walks.
Statistical analysis, uninfluenced by the human bias toward
action, led moneyballers to value base-stealing less and
walking more.

We overgeneralize from small samples, inferring causal
regularities where there is only noise. So we act as if the
outcome of a best-of-7 playoff series genuinely indicates
the relative strength of two teams that were practically
evenly matched over a 162-game regular season. The
moneyballer doesn't change his strategy in the playoffs,
and he refuses to think that playoff defeats are as
significant as regular season success.

We confuse moral and causal judgments. Jurors think a
drunken driver who is in a fatal accident is more
responsible for the crash than an equally drunken driver
whose victim recovers. The same goes for fielders; we fans
assign far more significance to a dramatically fumbled ball
than to routine catches. The moneyball approach replaces
the morally loaded statistic of "errors" with more
meaningful numbers that include positive as well as
negative outcomes.

By avoiding these mistakes, baseball quants have come
much closer to understanding the true causal structure of
baseball, and so their decisions are more effective.

But does the fact that even experts make so many
mistakes about baseball prove that human beings are
thoroughly irrational? Baseball, after all, is a human
invention. It's a great game exactly because it's so hard to
understand, and it produces such strange and compelling
interactions between regularity and randomness, causality
and chaos.

Most of the time in the natural environment, our evolved
learning procedures get the right answers, just as most of
the time our visual system lets us see the objects around
us accurately. In fact, we really only notice our visual
system on the rare occasions when it gives us the wrong
answers, in perceptual illusions, for instance. A carnival
funhouse delights us just because we can't make sense of
it.

Baseball is a causal funhouse, a game designed precisely
to confound our everyday causal reasoning. We can never
tell just how much any event on the field is the result of
skill, luck, intention or just grace. Baseball is a machine for
generating stories, and stories are about the unexpected,
the mysterious, even the miraculous.

Sheer random noise wouldn't keep us watching. But neither
would the predictable, replicable causal regularities we rely
on every day. Those are the regularities that evolution
designed us to detect. But what can even the most rational
mind do but wonder at the absurdist koan of the
obstruction call, with its dizzying mix of rules, intentions
and accidents, that ended World Series Game 3?

The truly remarkable thing about human reasoning isn't
that we were designed by evolution to get the right answers
about the world most of the time. It's that we enjoy trying to
get the right answers so profoundly that we intentionally
make it hard for ourselves. We humans, uniquely, invent art-
forms serving no purpose except to stretch the very
boundaries of rationality itself.

DRUGGED-OUT MICE OFFER INSIGHT INTO THE GROWING BRAIN

Imagine a scientist peeking into the skulls of glow-in-the-
dark, cocaine-loving mice and watching their nerve cells
send out feelers. It may sound more like something from
cyberpunk writer William Gibson than from the journal
Nature Neuroscience. But this kind of startling experiment
promises to change how we think about the brain and
mind.

Scientific progress often involves new methods as much
as new ideas. The great methodological advance of the
past few decades was Functional Magnetic Resonance
Imaging: It lets scientists see which areas of the brain are
active when a person thinks something.

But scientific methods can also shape ideas, for good and
ill. The success of fMRI led to a misleadingly static picture
of how the brain works, particularly in the popular
imagination. When the brain lights up to show the distress
of a mother hearing her baby cry, it's tempting to say that
motherly concern is innate.

But that doesn't follow at all. A learned source of distress
can produce the same effect. Logic tells you that every
time we learn something, our brains must change, too. In
fact, that kind of change is the whole point of having a
brain in the first place. The fMRI pictures of brain areas
"lighting up" don't show those changes. But there are
remarkable new methods that do, at least for mice.

Slightly changing an animal's genes can make it produce
fluorescent proteins. Scientists can use a similar technique
to make mice with nerve cells that light up. Then they can
see how the mouse neurons grow and connect through a
transparent window in the mouse's skull.

The study that I cited from Nature Neuroscience, by Linda
Wilbrecht and her colleagues, used this technique to trace
one powerful and troubling kind of learning—learning to use
drugs. Cocaine users quickly learn to associate their high
with a particular setting, and when they find themselves
there, the pull of the drug becomes particularly irresistible.

First, the researchers injected mice with either cocaine or
(for the control group) salt water and watched what
happened to the neurons in the prefrontal part of their
brains, where decisions get made. The mice who got
cocaine developed more "dendritic spines" than the other
mice—their nerve cells sent out more potential connections
that could support learning. So cocaine, just by itself,
seems to make the brain more "plastic," more susceptible
to learning.

But a second experiment was even more interesting. Mice,
like humans, really like cocaine. The experimenters gave
the mice cocaine on one side of the cage but not the other,
and the mice learned to go to that side of the cage. The
experimenters recorded how many new neural spines were
formed and how many were still there five days later.

All the mice got the same dose of cocaine, but some of
them showed a stronger preference for the cocaine side of
the cage than others—they had learned the association
between the cage and the drug better. The mice who
learned better were much more likely to develop persistent
new spines. The changes in behavior were correlated to
changes in the brain.

It could be that some mice were more susceptible to the
effects of the cocaine, which produced more spines, which
made them learn better. Or it could be that the mice who
were better learners developed more persistent spines.

We don't know how this drug-induced learning compares to
more ordinary kinds of learning. But we do know, from
similar studies, that young mice produce and maintain
more new spines than older mice. So it may be that the
quick, persistent learning that comes with cocaine, though
destructive, is related to the profound and extensive
learning we see early in life, in both mice and men.

POVERTY CAN TRUMP A WINNING HAND OF GENES

We all notice that some people are smarter than others.
You might naturally wonder how much those differences in
intelligence are the result of genes or of upbringing. But
that question, it turns out, is impossible to answer.

That’s because changes in our environment can actually
transform the relation between our traits, our upbringing,
and our genes.

The textbook illustration of this is a dreadful disease called
PKU. Some babies have a genetic mutation which means
that they can’t process an amino acid in their food. That
leads to severe mental retardation. For centuries, PKU was
incurable. Genetics determined whether someone suffered
from the syndrome, and so had a low IQ.

But then scientists discovered how PKU worked. Now, we
can immediately put babies with the mutation on a special
diet. So, now, whether a baby with PKU has a low IQ is
determined by the food they eat—their environment.

We humans can figure out how our environment works and
act to change it, as we did with PKU. So if you’re trying to
measure the influence of human nature and nurture you
have to consider not just the current environment, but also
all the possible environments that we can create.

This doesn’t just apply to obscure diseases. In the latest
issue of Psychological Science Timothy C. Bates of the
University of Edinburgh and colleagues report a study of
the relationship between genes, SES (socio-economic
status, or how rich and educated you are) and IQ. They
used statistics to analyze the differences between identical
twins, who share all DNA, and fraternal twins, who share
only some.

When psychologists first started studying twins, they found
identical twins much more likely to have similar IQs than
fraternal ones. They concluded that IQ was highly
“heritable”—due to genetic differences. But those were all
high SES twins. Erik Turkheimer of the University of Virginia
and his colleagues discovered that the picture was very
different for poor, low SES, twins. For these children, there
was very little difference between identical and fraternal
twins: IQ was hardly heritable at all. Differences in the
environment, like whether you lucked out with a good
teacher, seemed to be much more important.

In the new study the Bates team found this was even true
when those children grew up. This might seem paradoxical
—after all, your DNA stays the same no matter how you are
raised. The explanation is that IQ is influenced by
education. Historically, absolute IQ scores have risen
substantially as we’ve changed our environment so that
more people go to school longer.

Richer children all have similarly good educational
opportunities, so that genetic differences become more
apparent. And since richer children have more educational
choice, they (or their parents) can choose environments
that accentuate and amplify their particular skills. A child
who has genetic abilities that make her just slightly better
at math may be more likely to take a math class, and so
become even better at math.

But for poor children, haphazard differences in educational
opportunity swamp genetic differences. Ending up in a
really terrible school or one a bit better can make a big
difference. And poor children have fewer opportunities to
tailor their education to their particular strengths.

How much your genes shape your intelligence depends on
whether you live in a world with no schooling at all, a world
where you need good luck to get a good education, or a
world with rich educational possibilities. If we could
change the world for the PKU babies, we can change it for
the next generation of poor children, too.

IS IT POSSIBLE TO REASON ABOUT HAVING A CHILD?

How can you decide whether to have a child? It’s a complex
and profound question—a philosophical question. But it’s
not a question traditional philosophers thought about
much. In fact, the index of the 1967 “Encyclopedia of
Philosophy” had only four references to children at all—
though there were hundreds of references to angels. You
could read our deepest thinkers and conclude that humans
reproduced through asexual cloning.

Recently, though, the distinguished philosopher L.A. Paul
(who usually works on abstruse problems in the
metaphysics of causation) wrote a fascinating paper,
forthcoming in the journal Res Philosophica. Prof. Paul
argues that there is no rational way to decide to have
children—or not to have them.

How do we make a rational decision? The classic answer is
that we imagine the outcomes of different courses of
action. Then we consider both the value and the probability
of each outcome. Finally, we choose the option with the
highest “utilities,” as the economists say. Does the glow of
a baby’s smile outweigh all those sleepless nights?

It’s not just economists. You can find the same picture in
the advice columns of Vogue and Parenting. In the modern
world, we assume that we can decide whether to have
children based on what we think the experience of having a
child will be like.

But Prof. Paul thinks there’s a catch. The trouble is that,
notoriously, there is no way to really know what having a
child is like until you actually have one. You might get hints
from watching other people’s children. But that
overwhelming feeling of love for this one particular baby
just isn’t something you can understand beforehand. You
may not even like other people’s children and yet discover
that you love your own child more than anything. Of course,
you also can’t really understand the crushing responsibility
beforehand, either. So, Prof. Paul says, you just can’t make
the decision rationally.

I think the problem may be even worse. Rational decision-
making assumes there is a single person with the same
values before and after the decision. If I’m trying to decide
whether to buy peaches or pears, I can safely assume that
if I prefer peaches now, the same “I” will prefer them after
my purchase. But what if making the decision turns me into
a different person with different values?

Part of what makes having a child such a morally
transformative experience is the fact that my child’s well-
being can genuinely be more important to me than my own.
It may sound melodramatic to say that I would give my life
for my children, but, of course, that’s exactly what every
parent does all the time, in ways both large and small.

Once I commit myself to a child, I’m literally not the same
person I was before. My ego has expanded to include
another person even though—especially though—that
person is utterly helpless and unable to reciprocate.

The person I am before I have children has to make a
decision for the person I will be afterward. If I have kids,
chances are that my future self will care more about them
than just about anything else, even her own happiness, and
she’ll be unable to imagine life without them. But, of course,
if I don’t have kids, my future self will also be a different
person, with different interests and values. Deciding
whether to have children isn’t just a matter of deciding
what you want. It means deciding who you’re going to be.

L.A. Paul, by the way, is, like me, both a philosopher and a
mother—a combination that’s still surprisingly rare. There
are more and more of us, though, so maybe the 2067
Encyclopedia of Philosophy will have more to say on the
subject of children. Or maybe even philosopher-mothers
will decide it’s easier to stick to thinking about angels.

EVEN YOUNG CHILDREN ADOPT ARBITRARY RITUALS

Human beings love rituals. Of course, rituals are at the
center of religious practice. But even secularists celebrate
the great transitions of life with arbitrary actions,
formalized words and peculiar outfits. To become part of
my community of hardheaded, rational, scientific Ph.D.s, I
had to put on a weird gown and even weirder hat, walk
solemnly down the aisle of a cavernous building, and listen
to rhythmically intoned Latin.

Our mundane actions are suffused with arbitrary
conventions, too. Grabbing food with your hands is efficient
and effective, but we purposely slow ourselves down with
cutlery rituals. In fact, if you’re an American, chances are
that you cut your food with your fork in your left hand, then
transfer the fork to your right hand to eat the food, and then
swap it back again. You may not even realize that you’re
doing it. That elaborate fork and knife dance makes
absolutely no sense.

But that’s the central paradox of ritual. Rituals are
intentionally useless, purposely irrational. So why are they
so important to us?

The cognitive psychologist Cristine Legare at the
University of Texas at Austin has been trying to figure out
where rituals come from and what functions they serve.
One idea is that rituals declare that you are a member of a
particular social group.

Everybody eats, but only Americans swap their knives and
forks. (Several spy movies have used this as a plot point).
Sharing your graduation ceremony marks you as part of the
community of Ph.D.s more effectively than the solitary act
of finishing your dissertation.

The fact that rituals don’t make practical sense is just what
makes them useful for social identification. If someone just
puts tea in a pot and adds hot water then I know only that
they are a sensible person who wants tea. If instead they
kneel on a mat and revolve a special whisk a precise
number of times, or carefully use silver tongs to drop
exactly two lumps into a china cup, I can conclude that they
are members of a particular aristocratic tea culture.

It turns out that rituals are deeply rooted and they emerge
early. Surprisingly young children are already sensitive to
the difference between purposeful actions and rituals, and
they adopt rituals themselves.

In a new paper forthcoming in the journal Cognition,
Legare and colleagues showed 3- to 6-year-old children a
video of people performing a complicated sequence of
eight actions with a mallet and a pegboard. Someone
would pick up the mallet, place it to one side, push up a peg
with her hand etc. Then the experimenters gave the
children the mallet and pegboard and said, “Now it’s your
turn.”

You could interpret this sequence of actions as an
intelligent attempt to bring about a particular outcome,
pushing up the pegs. Or you could interpret it as a ritual, a
way of saying who you are.

Sometimes the children saw a single person perform the
actions twice. Sometimes they saw two people perform the
actions simultaneously. The identical synchronous actions
suggested that the two people were from the same social
group.

When they saw two people do exactly the same thing at the
same time, the children produced exactly the same
sequence of actions themselves. They also explained their
actions by saying things like “I had to do it the way that
they did.” They treated the actions as if they were a ritual.
When they saw the single actor, they were much less likely
to imitate exactly what the other person did. Instead, they
treated it like a purposeful action. They would vary what
they did themselves to make the pegs pop up in a new way.

Legare thinks that, from the time we are very young
children, we have two ways of thinking about people—a
“ritual stance” and an “instrumental stance.” We learn as
much from the irrational and arbitrary things that people
do, as from the intelligent and sensible ones.

THE GORILLA LURKING IN OUR CONSCIOUSNESS

Imagine that you are a radiologist searching through slides
of lung tissue for abnormalities. On one slide, right next to
a suspicious nodule, there is the image of a large,
threatening gorilla. What would you do? Write to the
American Medical Association? Check yourself into the
schizophrenia clinic next door? Track down the practical
joker among the lab technicians?

In fact, you probably wouldn’t do anything. That is because,
although you were staring right at the gorilla, you probably
wouldn’t have seen it. That startling fact shows just how
little we understand about consciousness.

In the journal Psychological Science, Trafton Drew and
colleagues report that they got radiologists to look for
abnormalities in a series of slides, as they usually do. But
then they added a gorilla to some of the slides. The gorilla
gradually faded into the slides and then gradually faded
out, since people are more likely to notice a sudden change
than a gradual one. When the experimenters asked the
radiologists if they had seen anything unusual, 83% said no.
An eye-tracking machine showed that radiologists missed
the gorilla even when they were looking straight at it.
This study is just the latest to demonstrate what
psychologists call “inattentional blindness.” When we pay
careful attention to one thing, we become literally blind to
others—even startling ones like gorillas.

In one classic study, Dan Simons and Christopher Chabris
showed people a video of students passing a ball around.
They asked the viewers to count the number of passes, so
they had to pay attention to the balls. In the midst of the
video, someone in a gorilla suit walked through the players.
Most of the viewers, who were focused on counting the
balls, didn’t see the gorilla at all. You can experience similar
illusions yourself at invisiblegorilla.com. It is an amazingly
robust phenomenon—I am still completely deceived by
each new example.

You might think this is just a weird thing that happens with
videos in a psychology lab. But in the new study, the
radiologists were seasoned professionals practicing a real
and vitally important skill. Yet they were also blind to the
unexpected events.

In fact, we are all subject to inattentional blindness all the
time. That is one of the foundations of magic acts.
Psychologists have started collaborating with professional
magicians to figure out how their tricks work. It turns out
that if you just keep your audience’s attention focused on
the rabbit, they literally won’t even see what you’re doing
with the hat.

Inattentional blindness is as important for philosophers as
it is for radiologists and magicians. Many philosophers
have claimed that we can’t be wrong about our conscious
experiences. It certainly feels that way. But these studies
are troubling. If you asked the radiologist about the gorilla,
she’d say that she just experienced a normal slide in
exactly the way she experienced the other slides—except
that we know that can’t be true. Did she have the
experience of seeing the gorilla and somehow not know it?
Or did she experience just the part of the slide with the
nodule and invent the gorilla-free remainder?

At this very moment, as I stare at my screen and
concentrate on this column, I’m absolutely sure that I’m
also experiencing the whole visual field—the chair, the light,
the view out my window. But for all I know, invisible gorillas
may be all around me.

Many philosophical arguments about consciousness are
based on the apparently certain and obvious intuitions we
have about our experience. This includes, of course,
arguments that consciousness just couldn’t be explained
scientifically. But scientific experiments like this one show
that those beautifully clear and self-evident intuitions are
really incoherent and baffling. We will have to wrestle with
many other confusing, tricky, elusive gorillas before we
understand how consciousness works.

DOES EVOLUTION WANT US TO BE UNHAPPY?

Samuel Johnson called it the vanity of human wishes, and
Buddhists talk about the endless cycle of desire. Social
psychologists say we get trapped on a hedonic treadmill.
What they all mean is that we wish, plan and work for
things that we think will make us happy, but when we finally
get them, we aren’t nearly as happy as we thought we’d be.

Summer makes this particularly vivid. All through the busy
winter I longed and planned and saved for my current
vacation. I daydreamed about peaceful summer days in
this beautiful village by the Thames with nothing to do but
write. Sure enough, the first walk down the towpath was
sheer ecstasy—but by the fifth, it was just another walk.
The long English evenings hang heavy, and the damned
book I’m writing comes along no more easily than it did in
December.

This looks like yet another example of human irrationality.
But the economist Arthur Robson has an interesting
evolutionary explanation. Evolution faces what economists
call a principal-agent problem. Evolution is the principal,
trying to get organisms (its agents) to increase their
fitness. But how can it get those dumb animals to act in
accordance with this plan? (This anthropomorphic
language is just a metaphor, of course—a way of saying
that the fitter organisms are more likely to survive and
reproduce. Evolution doesn’t have intentions.)

For simple organisms like slugs, evolution can build in
exactly the right motivations (move toward food and away
from light). But it is harder with a complicated, cognitive
organism like us. We act by imagining many alternative
futures and deciding among them. Our motivational system
has to be designed so that we do this in a way that tends to
improve our fitness.

Suppose I am facing a decision between two alternative
futures. I can stay where I am or go on to the next valley
where the river is a bit purer, the meadows a bit greener
and the food a bit better. My motivational system ensures
that when I imagine the objectively better future it looks
really great, far better than all the other options—I’ll be so
happy! So I pack up and move. From evolution’s
perspective that is all to the good: My fitness has
increased.

But now suppose that I have actually already made the
decision. I am in the next valley. It does me no additional
good to continue admiring the river, savoring the green of
the meadow and the taste of the fruit. I acted, I have gotten
the benefit, and feeling happy now is, from evolution’s
perspective, just a superfluous luxury.

Wanting to be happy and imagining the happy future made
me act in a way that really did make me better off; feeling
happy now doesn’t help. To keep increasing my fitness, I
should now imagine the next potential source of happiness
that will help me to make the next decision. (Doesn’t that
tree just over the next hill have even better fruit?)

It is as if every time we make a decision that actually
makes us better off, evolution resets our happiness meter
to zero. That prods us to decide to take the next action,
which will make us even better off—but no happier.

Of course, I care about what I want, not what evolution
wants. But what do I want? Should I try to be better off
objectively even if I don’t feel any happier? After all, the
Thames really is beautiful, the meadows are green, the
food—well, it’s better in England than it used to be. And the
book really is getting done.

Or would it be better to defy evolution, step off the treadmill
of desire and ambition and just rest serenely at home in
Buddhist contentment? At least we humans can derive a bit
of happiness, however fleeting, from asking these
questions, perhaps because the answers always seem to
be just over the next hill.

HOW TO GET CHILDREN TO EAT VEGGIES

To parents, there is no force known to science as powerful
as the repulsion between children and vegetables.

Of course, just as supercooling fluids can suspend the law
of electrical resistance, melting cheese can suspend the
law of vegetable resistance. This is sometimes known as
the Pizza Paradox. There is also the Edamame Exception,
but this is generally considered to be due to the Snack
Uncertainty Principle, by which a crunchy soybean is and is
not a vegetable simultaneously. But when melty mozzarella
conditions don’t apply, the law of vegetable repulsion would
appear to be as immutable as gravity, magnetism or the
equally mysterious law of child-godawful mess attraction.

In a new paper in Psychological Science, however, Sarah
Gripshover and Ellen Markman of Stanford University have
shown that scientists can overcome the child-vegetable
repulsive principle. Remarkably, the scientists in question
are the children themselves. It turns out that, by giving
preschoolers a new theory of nutrition, you can get them to
eat more vegetables.

My colleagues and I have argued that very young children
construct intuitive theories of the world around them (my
first book was called “The Scientist in the Crib”). These
theories are coherent, causal representations of how things
or people or animals work. Just like scientific theories, they
let children make sense of the world, construct predictions
and design intelligent actions.

Preschoolers already have some of the elements of an
intuitive theory of biology. They understand that invisible
germs can make you sick and that eating helps make you
healthy, even if they don’t get all the details. One little boy
explained about a peer, “He needs more to eat because he
is growing long arms.”

The Stanford researchers got teachers to read 4- and
5-year-olds a series of story books for several weeks. The
stories gave the children a more detailed but still
accessible theory of nutrition. They explained that food is
made up of different invisible parts, the equivalent of
nutrients; that when you eat, your body breaks up the food
into those parts; and that different kinds of food have
different invisible parts. They also explained that your body
needs different nutrients to do different things, so that to
function well you need to take in a lot of different nutrients.

In a control condition, the teachers read children similar
stories based on the current United States Department of
Agriculture website for healthy nutrition. These stories also
talked about healthy eating and encouraged it. But they
didn’t provide any causal framework to explain how eating
works or why you should eat better.

The researchers also asked children questions to test
whether they had acquired a deeper understanding of
nutrition. And at snack time they offered the children
vegetables as well as fruit, cheese and crackers. The
children who had heard the theoretical stories understood
the concepts better. More strikingly, they also were more
likely to pick the vegetables at snack time.

We don’t yet know if this change in eating habits will be
robust or permanent, but a number of other recent studies
suggest that changing children’s theories can actually
change their behavior too.

A quick summary of 30 years of research in developmental
psychology yields two big propositions: Children are much
smarter than we thought, and adults are much stupider.
Studies like this one suggest that the foundations of
scientific thinking—causal inference, coherent explanation,
and rational prediction—are not a creation of advanced
culture but our evolutionary birthright.

WHY ARE SOME CHILDREN MORE RESILIENT?

The facts are grimly familiar: 20% of American children
grow up in poverty, a number that has increased over the
past decade. Many of those children also grow up in social
isolation or chaos. This has predictably terrible effects on
their development.

There is a moral mystery about why we allow this to
happen in one of the richest societies in history. But there
is also a scientific mystery. It's obvious why deprivation
hurts development. The mystery is why some deprived
children seem to do so much better than others. Is it
something about their individual temperament or their
particular environment?

The pediatrician Tom Boyce and the psychologist Jay
Belsky, with their colleagues, suggest an interesting,
complicated interaction between nature and nurture. They
think that some children may be temperamentally more
sensitive than others to the effects of the environment—
both good and bad.

They describe these two types of children as orchids and
dandelions. Orchids grow magnificently when conditions
are just right and wither when they aren't. Dandelions grow
about the same way in a wide range of conditions. A new
study by Elisabeth Conradt at Brown University and her
colleagues provides some support for this idea.

They studied a group of "at risk" babies when they were just
five months old. The researchers recorded their RSA
(Respiratory Sinus Arrhythmia)—that is, how their heart
rates changed when they breathed in and out. Differences
in RSA are connected to differences in temperament.
People with higher RSA—heart rates that vary more as they
breathe—seem to respond more strongly to their
environment physiologically.

Then they looked at the babies' environments. They
measured economic risk factors like poverty, medical
factors like premature birth, and social factors like little
family and community support. Most importantly, they also
looked at the relationships between the children and their
caregivers. Though all the families had problems, some
had fewer risk factors, and those babies tended to have
more stable and secure relationships. In other families,
with more risk factors, the babies had disorganized and
difficult relationships.

A year later, the researchers looked at whether the children
had developed behavior problems. For example, they
recorded how often the child hurt others, refused to eat or
had tantrums. All children do things like this sometimes,
but a child who acts this way a lot is likely to have trouble
later on.

Finally, they analyzed the relationships among the children's
early physiological temperament, their environment and
relationships, and later behavior problems. The lower-RSA
children were more like dandelions. Their risky environment
did hurt them; they had more behavior problems than the
average child in the general population, but they seemed
less sensitive to variations in their environment. Lower-RSA
children who grew up with relatively stable and secure
relationships did no better than low-RSA children with more
difficult lives.

The higher-RSA children were more like orchids. For them,
the environment made an enormous difference. High-RSA
children who grew up with more secure relationships had
far fewer behavior problems than high-RSA children who
grew up with difficult relationships. In good environments,
these orchid children actually had fewer behavior problems
than the average child. But they tended to do worse than
average in bad environments.

From a scientific perspective, the results illustrate the
complexity of interactions between nature and nurture.
From a moral and policy perspective, all these children,
dandelions and orchids both, need and deserve a better
start in life. Emotionally, there is a special poignancy about
what might have been. What could be sadder than a
withered orchid?

THE WORDSWORTHS: CHILD PSYCHOLOGISTS

Last week, I made a pilgrimage to Dove Cottage—a tiny
white house nestled among the meres and fells of
England's Lake District. William Wordsworth and his sister
Dorothy lived there while they wrote two of my favorite
books: his "Lyrical Ballads" and her journal—both
masterpieces of Romanticism.

The Romantics celebrated the sublime—an altered,
expanded, oceanic state of consciousness. Byron and
Shelley looked for it in sex. Wordsworth's friends, Coleridge
and De Quincey, tried drugs (De Quincey's opium scales sit
next to Dorothy's teacups in Dove Cottage).

But Wordsworth identified this exalted state with the very
different world of young children. His best poems describe
the "splendor in the grass," the "glory in the flower," of early
childhood experience. His great "Ode: Intimations of
Immortality From Recollections of Early Childhood" begins:
There was a time when meadow, grove, and stream, / The
earth, and every common sight, / To me did seem /
Apparell'd in celestial light, / The glory and the freshness of a
dream.

This picture of the child's mind is remarkably close to the
newest scientific picture. Children's minds and brains are
designed to be especially open to experience. They're
unencumbered by the executive planning, focused
attention and prefrontal control that fuels the mad
endeavor of adult life, the getting and spending that lays
waste our powers (and, to be fair, lets us feed our children).
This makes children vividly conscious of "every common
sight" that habit has made invisible to adults. It might be
Wordsworth's meadows or the dandelions and garbage
trucks that enchant my 1-year-old grandson.

It's often said that the Romantics invented childhood, as if
children had merely been small adults before. But
scientifically speaking, Wordsworth discovered childhood—
he saw children more clearly than others had. Where did
this insight come from? Mere recollection can't explain it.
After all, generations of poets and philosophers had
recollected early childhood and seen only confusion and
limitation.

I suspect it came at least partly from his sister Dorothy.
She was an exceptionally sensitive and intelligent observer,
and the descriptions she recorded in her journal famously
made their way into William's poems. He said that she gave
him eyes and ears. Dorothy was also what the evolutionary
anthropologist Sarah Hrdy calls an "allomother." All her life,
she devotedly looked after other people's children and
observed their development.

In fact, when William was starting to do his greatest work,
he and Dorothy were looking after a toddler together. They
rescued 4-year-old Basil Montagu from his irresponsible
father, who paid them 50 pounds a year to care for him.
The young Wordsworth earned more as a nanny than as a
poet. Dorothy wrote about Basil—"I do not think there is any
pleasure more delightful than that of marking the
development of a child's faculties." It could be the credo of
every developmental psychologist.

There's been much prurient speculation about whether
Dorothy and William slept together. But very little has been
written about the undoubted fact that they raised a child
together.
For centuries the people who knew young children best
were women. But, sexism aside, just bearing and rearing
children was such overwhelming work that it left little time
for thinking or writing about them, especially in a world
without birth control, vaccinations or running water.

Dorothy was a thinker and writer who lived intimately with
children but didn't bear the full, crushing responsibility of
motherhood. Perhaps she helped William to understand
children's minds so profoundly and describe them so
eloquently.

MORAL PUZZLES KIDS STRUGGLE WITH

Here's a question. There are two groups, Zazes and Flurps.
A Zaz hits somebody. Who do you think it was, another Zaz
or a Flurp?

It's depressing, but you have to admit that it's more likely
that the Zaz hit the Flurp. That's an understandable
reaction for an experienced, world-weary reader of The Wall
Street Journal. But here's something even more depressing
—4-year-olds give the same answer.

In my last column, I talked about some disturbing new
research showing that preschoolers are already
unconsciously biased against other racial groups. Where
does this bias come from?

Marjorie Rhodes at New York University argues that
children are "intuitive sociologists" trying to make sense of
the social world. We already know that very young children
make up theories about everyday physics, psychology and
biology. Dr. Rhodes thinks that they have theories about
social groups, too.

In 2012 she asked young children about the Zazes and
Flurps. Even 4-year-olds predicted that people would be
more likely to harm someone from another group than from
their own group. So children aren't just biased against other
racial groups: They also assume that everybody else will be
biased against other groups. And this extends beyond race,
gender and religion to the arbitrary realm of Zazes and
Flurps.

In fact, a new study in Psychological Science by Dr. Rhodes
and Lisa Chalik suggests that this intuitive social theory
may even influence how children develop moral
distinctions.

Back in the 1980s, Judith Smetana and colleagues
discovered that very young kids could discriminate
between genuinely moral principles and mere social
conventions. First, the researchers asked about everyday
rules—a rule that you can't be mean to other children, for
instance, or that you have to hang up your clothes. The
children said that, of course, breaking the rules was wrong.
But then the researchers asked another question: What
would you think if teachers and parents changed the rules
to say that being mean and dropping clothes were OK?

Children as young as 2 said that, in that case, it would be
OK to drop your clothes, but not to be mean. No matter
what the authorities decreed, hurting others, even just
hurting their feelings, was always wrong. It's a strikingly
robust result—true for children from Brazil to Korea.
Poignantly, even abused children thought that hurting other
people was intrinsically wrong.

This might leave you feeling more cheerful about human
nature. But in the new study, Dr. Rhodes asked similar
moral questions about the Zazes and Flurps. The 4-year-
olds said it would always be wrong for Zazes to hurt the
feelings of others in their group. But if teachers decided
that Zazes could hurt Flurps' feelings, then it would be OK
to do so. Intrinsic moral obligations only extended to
members of their own group.

The 4-year-olds demonstrate the deep roots of an ethical
tension that has divided philosophers for centuries. We feel
that our moral principles should be universal, but we
simultaneously feel that there is something special about
our obligations to our own group, whether it's a family, clan
or country.

"You've got to be taught before it's too late / Before you are
6 or 7 or 8 / To hate all the people your relatives hate,"
wrote Oscar Hammerstein. Actually, though, it seems that
you don't have to be taught to prefer your own group—you
can pick that up fine by yourself. But we do have to teach
our children how to widen the moral circle, and to extend
their natural compassion and care even to the Flurps.

IMPLICIT RACIAL BIAS IN PRESCHOOLERS

Are human beings born good and corrupted by society or
born bad and redeemed by civilization? Lately, goodness
has been on a roll, scientifically speaking. It turns out that
even 1-year-olds already sympathize with the distress of
others and go out of their way to help them.

But the most recent work suggests that the origins of evil
may be only a little later than the origins of good.

New studies show that even young children discriminate.

Our impulse to love and help the members of our own
group is matched by an impulse to hate and fear the
members of other groups. In "Gulliver's Travels," Swift
described a vicious conflict between the Big-Enders, who
ate their eggs with the big end up, and the Little-Enders,
who started from the little end. Historically, largely arbitrary
group differences (Catholic vs. Protestant, Hutu vs. Tutsi)
have led to persecution and even genocide.

When and why does this particular human evil arise? A raft
of new studies shows that even 5-year-olds discriminate
between what psychologists call in-groups and out-groups.
Moreover, children actually seem to learn subtle aspects of
discrimination in early childhood.

In a recent paper, Yarrow Dunham at Princeton and
colleagues explored when children begin to have negative
thoughts about other racial groups. White kids aged 3 to 12
and adults saw computer-generated, racially ambiguous
faces. They had to say whether they thought the face was
black or white. Half the faces looked angry, half happy. The
adults were more likely to say that angry faces were black.
Even people who would hotly deny any racial prejudice
unconsciously associate other racial groups with anger.

But what about the innocent kids? Even 3- and 4-year-olds
were more likely to say that angry faces were black. In fact,
younger children were just as prejudiced as older children
and adults.

Is this just something about white attitudes toward black
people? They did the same experiment with white and
Asian faces. Although Asians aren't stereotypically angry,
children also associated Asian faces with anger. Then the
researchers tested Asian children in Taiwan with exactly
the same white and Asian faces. The Asian children were
more likely to think that angry faces were white. They also
associated the out-group with anger, but for them the out-
group was white.

Was this discrimination the result of some universal, innate
tendency or were preschoolers subtly learning about
discrimination? For black children, white people are the out-
group. But, surprisingly, black children (and adults) were
the only ones to show no bias at all; they categorized the
white and black faces in the same way. The researchers
suggest that this may be because black children pick up
conflicting signals—they know that they belong to the black
group, but they also know that the white group has higher
status.

These findings show the deep roots of group conflict. But
the last study also suggests that somehow children also
quickly learn about how groups are related to each other.

Learning also was important in another way. The
researchers began by asking the children to categorize
unambiguously white, black or Asian faces. Children began
to differentiate the racial groups at around age 4, but many
of the children still did not recognize the racial categories.
Moreover, children made the white/Asian distinction at a
later age than the black/white distinction. Only children
who recognized the racial categories were biased, but they
were as biased as the adults tested at the same time. Still,
it took kids from all races a while to learn those categories.

The studies of early altruism show that the natural state of
man is not a war of all against all, as Thomas Hobbes said.
But it may quickly become a war of us against them.

HOW THE BRAIN REALLY WORKS

For the last 20 years neuroscientists have shown us
compelling pictures of brain areas "lighting up" when we
see or hear, love or hate, plan or act. These studies were an
important first step. But they also suggested a
misleadingly simple view of how the brain works. They
associated specific mental abilities with specific brain
areas, in much the same way that phrenology, in the 19th
century, claimed to associate psychological characteristics
with skull shapes.

Most people really want to understand the mind, not the
brain. Why do we experience and act on the world as we
do? Associating a piece of the mind with a piece of the
brain does very little to answer that question. After all, for
more than a century we have known that our minds are the
result of the stuff between our necks and the tops of our
heads. Just adding that vision is the result of stuff at the
back and that planning is the result of stuff in the front
doesn't help us understand how vision or planning work.

But new techniques are letting researchers look at the
activity of the whole brain at once. What emerges is very
different from the phrenological view. In fact, most brain
areas multitask; they are involved in many different kinds of
experiences and actions. And the brain is dynamic. It can
respond differently to the same events in different times
and circumstances.
A new study in Nature Neuroscience by Jack L. Gallant,
Tolga Çukur and colleagues at the University of California,
Berkeley, dramatically illustrates this new view. People in an
fMRI scanner watched a half-hour-long sequence
combining very short video clips of everyday scenes. The
scientists organized the video content into hundreds of
categories, describing whether each segment included a
plant or a building, a cat or a clock.

Then they divided the whole brain into small sections with
a three-dimensional grid and recorded the activity in each
section of the grid for each second. They used
sophisticated statistical analyses to find the relationship
between the patterns of brain activity and the content of
the videos.

The twist was that the participants either looked for human
beings in the videos or looked for vehicles. When they
looked for humans, great swaths of the brain became a
"human detector"—more sensitive to humans and less
sensitive to vehicles. Looking for vehicles turned more of
the brain into a "vehicle detector." And when people looked
for humans their brains also became more sensitive to
related objects, like cats and plants. When they looked for
vehicles, their brains became more sensitive to clocks and
buildings as well.

In fact, the response patterns of most brain areas changed
when people changed the focus of their attention.
Something as ineffable as where you focus your attention
can make your whole brain work differently.

People often assume that knowing about the brain is all
that you need to explain how the mind works, so that
neuroscience will replace psychology. That may account
for the curious popular enthusiasm for the phrenological
"lighting up" studies. It is as if the very thought that
something psychological is "in the brain" gives us a little
explanatory frisson, even though we have known for at
least a century that everything psychological is "in the
brain" in some sense. But it would be just as accurate to
say that knowing about the mind explains how the brain
works.

The new, more dynamic picture of the brain makes
psychology even more crucial. The researchers could only
explain the very complex pattern of brain activity by relating
it to what they knew about categorization and attention. In
the same way, knowing the activity of every wire on every
chip in my computer wouldn't tell me much if I didn't also
know the program my machine was running.

Neuroscience may be sexier than psychology right now,
and it certainly has a lot more money and celebrity. But
they really cannot get along without each other.

NATURE, CULTURE AND GAY MARRIAGE

There's been a lot of talk about nature in the gay-marriage
debate. Opponents point to the "natural" link between
heterosexual sex and procreation. Supporters note nature's
staggering diversity of sexual behavior and the ubiquity of
homosexual sex in our close primate relatives. But, actually,
gay marriage exemplifies a much more profound part of
human nature: our capacity for cultural evolution.

The birds and the bees may be enough for the birds and the
bees, but for us it's just the beginning.

Culture is our nature; the evolution of culture was one secret of our biological success. Evolutionary theorists like the philosopher Kim Sterelny, the biologist Kevin Laland and the psychologist Michael Tomasello emphasize our distinctively human ability to transmit new information and social practices from generation to generation. Other animals have more elements of culture than we once thought, but humans rely on cultural transmission far more than any other species.

Still, there's a tension built into cultural evolution. If the new generation just slavishly copies the previous one, the process of innovation will seize up. The advantage of the "cultural ratchet" is that we can use the discoveries of the previous generation as a jumping-off point for revisions and discoveries of our own.

Man may not be The Rational Animal, but we are The
Empirical Animal—perpetually revising what we do in the
light of our experience.

Studies show that children have a distinctively human
tendency to precisely imitate what other people do. But
they also can choose when to imitate exactly, when to
modify what they've seen, and when to try something brand
new.

Human adolescence, with its risk-taking and exploration,
seems to be a particularly important locus of cultural
innovation. Archaeologists think teenagers may have been
the first cave-painters. We can even see this generational
effect in other primates. Some macaque monkeys
famously learned how to wash sweet potatoes and passed
this skill to others. The innovator was the equivalent of a
preteen girl, and other young macaques were the early
adopters.

As in biological evolution, there is no guarantee that
cultural evolution will always move forward, or that any
particular cultural tradition or innovation will prove to be
worth preserving. But although the arc of cultural evolution
is long and irregular, overall it does seem to bend toward
justice, or, at least, to human thriving.

Gay marriage demonstrates this dynamic of tradition and innovation in action. Marriage has itself evolved. It was once an institution that emphasized property and inheritance. It has become one that provides a way of both expressing and reinforcing values of commitment, loyalty and stability. When gay couples want marriage, rather than just civil unions, it's precisely because they endorse those values and want to be part of that tradition.

At the same time, as more and more people have
courageously come out, there have been more and more
gay relationships to experience. That experience has led
most of the millennial generation to conclude that the link
between marital tradition and exclusive heterosexuality is
unnecessary, indeed wrong. The generational shift at the
heart of cultural evolution is especially plain. Again and
again, parents report that they're being educated by their
children.

It's ironic that the objections to gay marriage center on
child-rearing. Our long protected human childhood, and the
nurturing and investment that goes with it, is, in fact,
exactly what allows social learning and cultural evolution.
Nurture, like culture, is also our nature. We nurture our
children so that they can learn from our experience, but
also so that subsequent generations can learn from theirs.

Marriage and family are institutions designed, at least in
part, to help create an autonomous new generation, free to
try to make better, more satisfying kinds of marriage and
family for the generations that follow.

PREFRONTAL CONTROL AND INNOVATION

Quick—what can you do with Kleenex? Easy, blow your
nose. But what can you do with Kleenex that no one has
ever done before? That's not so easy. Finally a bright idea
pops up out of the blue—you could draw a face on it, put a
string around the top and make it into a cute little
Halloween ghost!

Why is thinking outside of the Kleenex box so hard? A study
published in February suggests that our much-lauded
prefrontal brain mechanisms for control and focus may
actually make it more difficult to think innovatively.

The comedian Emo Philips said that he thought his brain was the most fascinating organ in his body—until he realized who was telling him this. Perhaps for similar reasons, the control system of the brain, which includes areas like the left lateral prefrontal cortex, gets particularly good press. It's like the brain's chief executive officer, responsible for long-term planning, focusing, monitoring and distraction-squelching (and apparently PR too). But there may be a downside to those "executive functions." Shutting down prefrontal control may actually help people get to unusual ideas like the Kleenex ghost.

Earlier studies used fMRI to see which parts of the brain are active when we generate ideas. In 2008, Charles Limb at Johns Hopkins University and Alan Braun at the National Institutes of Health reported how they got jazz musicians to either play from a memorized score or improvise, and looked at their brains. Some "control" parts of the prefrontal cortex shut down, or deactivated, during improvisation but not when the musicians played a memorized score. Dr. Braun and colleagues later found the same effect with freestyle rappers—improvisational genius is not limited by baby-boomer taste.

But it's important to remember that correlation is not
causation. How could you prove that the frontal
deactivation really did make the improvisers innovate?
You'd need to show that if you deactivate those brain areas
experimentally people will think more innovatively. Sharon
Thompson-Schill at the University of Pennsylvania and
colleagues did that in the new study.

They used a technique called transcranial direct current
stimulation, or tDCS. If you pass a weak electrical current
through part of the brain, it temporarily and safely disrupts
neural activity. The researchers got volunteers to think up
either ordinary or unusual uses for everyday objects like
Kleenex. While the participants were doing this task, the
scientists either disrupted their left prefrontal cortex with
tDCS or used a sham control procedure. In the control, the
researchers placed the electrodes in just the same way but
surreptitiously turned off the juice before the task started.

Both groups were equally good at thinking up ordinary uses
for the objects. But the volunteers who got zapped
generated significantly more unusual uses than the
unzapped control-group thinkers, and they produced those
unusual uses much faster.

Portable frontal lobe zappers are still (thankfully)
infeasible. But we can modify our own brain functions by
thinking differently—improvising, freestyling, daydreaming
or some types of meditation. I like hanging out with 3-year-
olds. Preschool brains haven't yet fully developed the
prefrontal system, and young kids' free-spirited thinking
can be contagious.

There's a catch, though. It isn't quite right to say that losing
control makes you more creative. Centuries before
neuroscience, the philosopher John Locke distinguished
two human faculties, wit and judgment. Wit allows you to
think up wild new ideas, but judgment tells you which ideas
are actually worth keeping. Other neuroscience studies
have found that the prefrontal system re-engages when you
have to decide whether an unlikely answer is actually the
right one.

Yes—you could turn that Kleenex into an adorable little
Halloween ghost. But would that be the aesthetically
responsible thing to do? Our prefrontal control systems are
the sensible parents of our inner 3-year-olds. They keep us
from folly, even at the cost of reining in our wit.

SLEEPING AND LEARNING LIKE A BABY

Babies and children sleep a lot—12 hours a day or so to our
eight. But why would children spend half their lives in a
state of blind, deaf paralysis punctuated by insane
hallucinations? Why, in fact, do all higher animals surrender
their hard-won survival abilities for part of each day?

Children themselves can be baffled and indignant about
the way that sleep robs them of consciousness. We weary
grown-ups may welcome a little oblivion, but at nap time,
toddlers will rage and rage against the dying of the light.

Part of the answer is that sleep helps us to learn. It may
just be too hard for a brain to take in the flood of new
experiences and make sense of them at the same time.
Instead, our brains look at the world for a while and then
shut out new input and sort through what they have seen.

Children learn in a particularly profound way. Some
remarkable experiments show that even tiny babies can
take in a complex statistical pattern of data and figure out
the rules and principles that explain the pattern. Sleep
seems to play an especially important role in this kind of
learning.

In 2006, Rebecca Gómez and her colleagues at the
University of Arizona taught 15-month-old babies a made-
up language. The babies listened to 240 "sentences" made
of nonsense words, like "Pel hiftam jic" or "Pel lago jic." Like
real sentences, these sentences followed rules. If "pel" was
the first word, for instance, "jic" would always be the third
one.
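
To make the structure of such a made-up language concrete, here is a small Python sketch. The words "pel," "hiftam," "jic" and "lago" come from the column; the other tokens, and the second first-word/third-word pairing, are invented here for illustration and are not the study's actual stimuli.

```python
import random

# First words and the third word each one predicts ("pel ... jic" is from
# the column; "vot ... rud" is an invented second pairing).
FIRST_TO_THIRD = {"pel": "jic", "vot": "rud"}
MIDDLE_WORDS = ["hiftam", "lago", "wadim", "kicey"]  # the middle slot varies freely

def make_sentence(rng):
    """Build a three-word 'sentence' that obeys the first-and-third rule."""
    first = rng.choice(list(FIRST_TO_THIRD))
    return [first, rng.choice(MIDDLE_WORDS), FIRST_TO_THIRD[first]]

def follows_rule(sentence, pairs=FIRST_TO_THIRD):
    """True if the third word is the one the first word predicts."""
    first, _, third = sentence
    return pairs.get(first) == third

rng = random.Random(0)
training = [make_sentence(rng) for _ in range(240)]  # 240 sentences, as in the study

print(" ".join(training[0]))                   # one sentence, e.g. "pel hiftam jic"
print(all(follows_rule(s) for s in training))  # True: every sentence obeys the rule

# The abstract version of the rule, "the first word predicts the third,
# whatever the words are," can be stated for brand-new vocabulary too:
print(follows_rule(["taf", "wadim", "ghope"], {"taf": "ghope"}))  # True
```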

Half the babies heard the sentences just before they had a
nap, and the other half heard them just after they woke up,
and they then stayed awake.

Four hours later, the experimenters tested whether the babies had learned the "first and third" rule by seeing how long the babies listened to brand-new sentences. Some of the new sentences followed exactly the same rule as the sentences that the babies had heard earlier. Others followed a "first and third" rule but used different nonsense words.

Remarkably, the babies who had stayed awake had learned
the specific rules behind the sentences they heard four
hours before—like the rule about "pel" and "jic." Even more
remarkably, the babies who had slept after the instruction
seemed to learn the more abstract principle that the first
and third words were important, no matter what those
words actually were.

Just this month, a paper by Ines Wilhelm at the University
of Tübingen and colleagues showed that older children
also learn in their sleep. In fact, they learn better than
grown-ups. They showed 8-to-11-year-olds and adults a
grid of eight lights that lit up over and over in a particular
sequence. Half the participants saw the lights before
bedtime, half saw them in the morning. After 10 to 12
hours, the experimenters asked the participants to describe
the sequence. The children and adults who had stayed
awake got about half the transitions right, and the adults
who had slept were only a little better. But the children who
had slept were almost perfect—they learned substantially
better than either group of adults.
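
As a back-of-the-envelope illustration of what "getting about half the transitions right" means, here is one simple way to score such recall in Python. The sequences below are invented examples, and the actual study's scoring may have differed in detail.

```python
def transitions(seq):
    """The set of consecutive pairs: light A immediately followed by light B."""
    return {(a, b) for a, b in zip(seq, seq[1:])}

def score(recalled, truth):
    """Fraction of the true sequence's transitions that appear in the recall."""
    return len(transitions(recalled) & transitions(truth)) / len(transitions(truth))

true_sequence = [3, 7, 1, 5, 8, 2, 6, 4]   # eight lights, so seven transitions
awake_recall  = [3, 7, 1, 5, 2, 8, 6, 4]   # hypothetical participant who stayed awake
sleep_recall  = [3, 7, 1, 5, 8, 2, 6, 4]   # hypothetical child who slept

print(f"awake: {score(awake_recall, true_sequence):.0%} of transitions")  # 57%, a bit over half
print(f"slept: {score(sleep_recall, true_sequence):.0%} of transitions")  # 100%
```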

There was another twist. While the participants slept, they
wore an electronic cap to measure brain activity. The
children had much more "slow-wave sleep" than the adults
—that's an especially deep, dreamless kind of sleep. And
both children and adults who had more slow-wave sleep
learned better.

Children may sleep so much because they have so much to
learn (though toddlers may find that scant consolation for
the dreaded bedtime). It's paradoxical to try to get children
to learn by making them wake up early to get to school and
then stay up late to finish their homework.

Colin Powell reportedly said that on the eve of the Iraq war
he was sleeping like a baby—he woke up every two hours
screaming. But really sleeping like a baby might make us all
smarter.

HELPLESS BABIES AND SMART GROWN-UPS

Why are children so, well, so helpless? Why did I spend a
recent Sunday morning putting blueberry pancake bits on
my 1-year-old grandson's fork and then picking them up
again off the floor? And why are toddlers most helpless
when they're trying to be helpful? Augie's vigorous efforts
to sweep up the pancake detritus with a much-too-large
broom ("I clean!") were adorable but not exactly effective.

This isn't just a caregiver's cri de coeur—it's also an important scientific question. Human babies and young children are an evolutionary paradox. Why must big animals invest so much time and energy just keeping the little ones alive? This is especially true of our human young, helpless and needy for far longer than the young of other primates.

One idea is that our distinctive long childhood helps to
develop our equally distinctive intelligence. We have both a
much longer childhood and a much larger brain than other
primates. Restless humans have to learn about more
different physical environments than stay-at-home chimps,
and with our propensity for culture, we constantly create
new social environments. Childhood gives us a protected
time to master new physical and social tools, from a whisk
broom to a winning comment, before we have to use them
to survive.

The usual museum diorama of our evolutionary origins
features brave hunters pursuing a rearing mammoth. But a
Pleistocene version of the scene in my kitchen, with ground
cassava roots instead of pancakes, might be more
accurate, if less exciting.

Of course, many scientists are justifiably skeptical about
such "just-so stories" in evolutionary psychology. The idea
that our useless babies are really useful learners is
appealing, but what kind of evidence could support (or
refute) it? There's still controversy, but two recent studies at
least show how we might go about proving the idea
empirically.

One of the problems with much evolutionary psychology is
that it just concentrates on humans, or sometimes on
humans and chimps. To really make an evolutionary
argument, you need to study a much wider variety of
animals. Is it just a coincidence that we humans have both
needy children and big brains? Or will we find the same
evolutionary pattern in animals who are very different from
us? In 2010, Vera Weisbecker of Cambridge University and
a colleague found a correlation between brain size and
dependence across 52 different species of marsupials,
from familiar ones like kangaroos and opossums to more
exotic ones like quokkas.

Quokkas are about the same size as Virginia opossums,
but baby quokkas nurse for three times as long, their
parents invest more in each baby, and their brains are twice
as big.

But do animals actually use their big brains and long
childhoods to learn? In 2011, Jenny Holzhaider of the
University of Auckland, New Zealand, and her colleagues
looked at an even more distantly related species, New
Caledonian crows. These brilliant big-brained birds make
sophisticated insect-digging tools from palm leaves—and
are fledglings for much longer than not-so-bright birds like
chickens.

At first, the baby crows are about as good at digging as my
Augie is at sweeping—they hold the leaves by the wrong
end and trim them into the wrong shape. But the parents
tolerate this blundering and keep the young crows full of
bugs (rather than blueberries) until they eventually learn to
master the leaves themselves.

Studying the development of quokkas and crows is one
way to go beyond just-so stories in trying to understand
how we got to be human. Our useless, needy offspring may
be at least one secret of our success. The unglamorous
work of caregiving may give human beings the chance to
figure out just how those darned brooms work.

* may require subscription to read
