
SOCIOLOGY

Major Sociological Paradigms

What’s up with society, exactly? I mean, is it a smoothly functioning whole, with different parts that fit together to keep
it ticking? Or is it a jumble of different, competing groups, constantly at each other’s throats, struggling for control? Or
maybe it's, you know, a bunch of people who are just trying to get through their days. The fact is, there isn't one answer
to the question of what the nature of society really is. But all three of the models that I just described – society as a well-oiled machine, as a group of competing interests, and as a bunch of people just interacting with each other – they're all
worth considering. Because they each offer their own perspectives on the social world, and they’re each crucial to
understanding the practice of sociology, with histories that can be traced back to a founding figure in the discipline.
So, let’s talk about paradigms.

A paradigm is not some kind of high-tech parachute. And it doesn’t equal twenty cents. Instead, a paradigm is basically a
model for how you think about things – a set of concepts and theories that frames your perspective on a certain topic,
whether it’s Russian literature or public art or the laws of physics. And in sociology, theoretical paradigms are key.
These paradigms are the fundamental assumptions that sociologists have about the social world, the ones that guide
their thinking and research. And that might sound kind of prejudicial at first, like you’re going into the study of society
with certain biases in mind. But you need the assumptions that these paradigms provide, because raw facts don't
interpret themselves.

Raw facts are things like "the unemployment rate last year was 5%," or "Sam is six feet tall," or "today a group of people
with signs blocked the highway." By raw I mean that these facts are just simple descriptions of empirical reality. And
they don’t come pre-interpreted. Is 5% an acceptable unemployment rate? Or should we be trying to lower it? Is six feet
tall actually tall? And are protesters who are blocking a highway disrupting the order of society, or are they struggling for
their interests? The answer to that last one is, of course, both. But the important thing to understand is that either
answer requires you to make some assumptions about the social world.

The other important thing is that those two different answers will be useful in different situations, for answering
different kinds of questions. For instance, if you're trying to understand how and why society can hold together at all,
then looking at protests as signs of strain or disruption might be more useful. But if you're trying to understand why
people protest, then trying to understand how they're pursuing their interests might be better. Now, all this might
sound kind of unscientific: Physics doesn't need "interpretation" exactly. Math doesn't need multiple "perspectives." But
actually, they do.
All scientific disciplines make assumptions about the world, and all scientific disciplines use different perspectives,
depending on the questions they’re asking. In physics, you can understand a bouncing ball as a nearly uncountable
multitude of fundamental particles, each with its own wave function, and all held together by different kinds of forces at
the quantum scale. Or you can just understand it as simply X number of grams of rubber moving through space.

The perspective you take will dramatically change what kinds of questions you want to ask. All sciences ask different
kinds of questions and have different assumptions for answering them. And raw facts always need some kind of
perspective in order to make them useful. Now, if we want to talk about different kinds of questions and perspectives in
sociology, a good place to start is with something we brought up last episode: the fact that sociology looks at society at
all levels, at all scales, from the huge to the tiny. In other words, sociology is concerned with both the macro and the
micro. An orientation towards the macro means looking at the big. When sociologists ask questions at this level, they're
taking a broad focus, looking at the large-scale structures that shape society.

Macro questions are things like "What caused the transition from feudalism to capitalism?" or "How does race impact
educational achievement?" An orientation toward the micro, of course, means looking at the small. These questions are
concerned more narrowly with interactions between individuals, asking things like: "Do doctors talk to patients of
different races differently?" or "How do the members of a certain group build a group identity?" Now, it's worth noting
that these orientations aren't completely separate. Because, again, the big and the small are always connected. Asking
how doctors talk to patients of different races is a micro question, but it also helps us begin to understand the macro-
level pattern of racial disparities in healthcare. Likewise, asking about how a group builds its identity could have macro
impacts, because it could help explain how large social structures are reproduced and maintained. Now that we
understand a little more about the different scales that sociology works on, we can turn to its main theoretical
paradigms, of which there are three:
There’s structural functionalism, conflict theory, and symbolic interactionism.

Let's start with structural functionalism, which originated with a French sociologist named Émile Durkheim. Durkheim
imagined society as a kind of organism, with different parts that all worked together to keep it alive and in good health.
Of course, things could go wrong. But this was always imagined by Durkheim as a malfunction, an illness, or a deviation
from the normal functioning of things. So the structural functionalist perspective makes this same basic assumption:
Society is seen as a complex system whose parts work together to promote stability and social order. And these
different “parts" of society are social structures, relatively stable patterns of social behavior. For example, Durkheim was
extremely interested in religion, and also in the division of labor, or how tasks in a society are divided up. And these
structures are seen as fulfilling certain social functions. For instance, the family, in most societies, fulfills the function of
socializing children – teaching them how to live in that society. And social functions come in two types: manifest and
latent functions.
Manifest functions are intended or obvious consequences of a particular structure, while latent functions are
unintended or unrecognized. For example, we often think of the purpose of schools as providing children with
knowledge – that’s their manifest function. But, schools can also help socialize children. They can have – and historically
have had – the additional purpose of creating workers who listen to authority and hit deadlines.
That’s a latent function.

Now, along with functions, we also have social dysfunction, which is any social pattern that disrupts the smooth
operation of society. Technological development is a powerful driver of economic improvement, for example, which is a
useful function. But it’s also a destabilizing force. New machines can put people out of work.
Someday soon, we may see the social dysfunction of thousands of long-distance truckers being displaced by self-driving
vehicles. And this brings us to one of the problems with structural functionalism. Since it sees society as fundamentally
functional and stable, it can be really bad at dealing with change. It can be bad at providing good explanations for why
change happens, and it can also interpret bad things in society as having positive functions, which should therefore not
be changed. To take an extreme example, a structural functionalist view might imagine that poverty, although harmful
to people, is functional for society, because it ensures there are always people who want work. So this view might see
any attempts at alleviating poverty as being potentially damaging to society. It’s in areas like this, however, where
conflict theories shine. In contrast to structural functionalism, conflict theories imagine society as being composed of
different groups that struggle over scarce resources – like power, money, land, food, or status. This view takes change as
being fundamental to society, constantly driven by these conflicts. The first conflict theory in sociology was the theory of
class conflict, advanced by Karl Marx. This theory imagines society as having different classes based on their
relationships to the means of production – things like factories and raw materials.

Under capitalism, the two classes were the capitalists, or bourgeoisie, who own the means of production, and the
workers, or proletariat, who must sell their labor to survive. Marx saw this conflict between classes as the central
conflict in society and the source of social inequality in power and wealth. But there are other conflict theories that
focus on different kinds of groups.
Race-Conflict theory, for example, was first stated sociologically by W.E.B. DuBois, another founder of sociology. It
understands social inequality as the result of conflict between different racial and ethnic groups. Gender-Conflict theory,
meanwhile, focuses on the social inequalities between women and men. The perspectives of all three kinds of conflict
theory have been crucially important in American history and are still important today. But the paradigms we've looked
at so far are essentially macro approaches: Structural functionalism focuses on how large structures fit together, and
conflict theory looks at large-scale sources of inequality and conflict. But then there’s symbolic interactionism,
and it’s built to deal with micro questions. Symbolic interactionism first appeared most clearly in the work of German
sociologist Max Weber and his focus on Verstehen, or "understanding." Weber believed that sociology needed to focus
on people’s individual social situations and the meaning that they attached to them.

So, because it’s more micro-focused, symbolic interactionism understands society as the product of everyday social
interactions. Specifically, this school of thought is interested in understanding the shared reality that people create
through their interactions. It might seem weird to say that reality can be created, but think back to the idea of raw facts
versus interpretation. Waving my hand back and forth is a raw fact, but it only means that I'm waving hello to you
because we’ve agreed to give it that meaning.
For symbolic interactionism, then, there is no big-T Truth. Instead, it looks at the world that we create when we assign
meaning to interactions and objects. A handshake is only a greeting because we agree that it is. A dog can be a friend or
food, depending on what meaning we've given it. Obviously, these three different paradigms provide radically different
ways of looking at the social world. But, this is because they all grasp at different parts of it.
They each give us a different lens through which we can see our social lives, just like science sometimes needs a microscope
and sometimes needs a telescope. All of these lenses are important and, yes, necessary for the investigation of
sociological questions.
Karl Marx and Conflict Theory

You’ve probably heard of Karl Marx. He's remembered as the father of divisive political movements, and his name is
sometimes still thrown around in American politics as a kind of slur. But I don't want to talk about that. I want to talk
about Marx the philosopher. Marx the scholar. In the 19th century, a time defined by radical inequality and rapid
technological and political change in Europe, Marx was concerned with one question: What does it mean to be free?
Starting from this question, Marx developed an entire theory of history. And in doing so, he laid the foundation for the
paradigm of conflict theory in sociology, ultimately pushing the discipline to look at questions of power, inequality, and
how these things can drive societal change.
If Durkheim was concerned with social solidarity, with how society hangs together, Marx was concerned with freedom.
The question that Marx asked was "how can people be free?" Because humans aren’t just naturally free. When you think
about it, we're actually incredibly constrained. Our physical bodies have all kinds of needs we have to meet in order to
survive, and they’re needs that we're not really adapted to meet. Like, if you take a hummingbird and put it in the
middle of a forest somewhere, it'll just go on about its day, collecting nectar and living its life. But if you drop a person in
the middle of the woods, they’ll probably starve. Compared to other animals, Marx thought, we're incredibly poorly
adapted to the natural world. In fact, the only way for us to survive in nature is to change it, working together to remake
it to fit our needs. This is labor, he said, and we must labor cooperatively in order to survive. As we labor, we change the
world around us, and gradually free ourselves from our natural constraints. But what Marx saw was that just as we freed
ourselves from these natural constraints, we entangled ourselves in new social constraints.

Let's go to the Thought Bubble to explore this some more. Think about it like this.
Ten thousand years ago, basically everybody spent all day trying to get food. In this "primitive communism," as Marx
called it, people were strongly bound by natural constraints, but socially very equal. Now compare that to the Middle
Ages when, under feudalism, you have an entire class of people, the nobility, who never spent any time worrying about
where their next meal would come from. But you also have the peasantry, who still worked constantly, making food.
In fact, they spent a lot of their time making food for the nobility. People were producing more than they needed to
survive, but instead of that surplus being equally distributed, society was set up so that some people simply didn't need
to labor at all, while others had to work harder. That's not a natural constraint anymore, that's a social one.
Working together allowed us to transcend our natural constraints, Marx argued, but the way labor is organized leads to
massive inequalities. Thanks, Thought Bubble!

So, central to the question of freedom for Marx is the question of labor, how it's organized and who it benefits, and how
this organization changes over time. This focus on labor gave rise to the perspective created by Marx and his longtime
collaborator Friedrich Engels – a perspective known as historical materialism. Historical materialism is historical because
it looks at change over time, and it's materialist because it's concerned with questions of material reality – that
is, how production is organized, and who has things like food, or money, and who doesn't. Now, it's not that Marx didn't
care about other things, like politics or religion. But he felt that they were secondary to the production and control of
resources.
And I don't mean secondary as in less important; I mean secondary because he thought that if you wanted to
understand those things, you had to understand the material reality they were based on first. In this view, the economy – that is, the organization of labor and resources in a society – was the foundation, and everything else – politics, culture, religion, even families – was what Marx called the superstructure, which was built on top of material reality.

So when Marx studied history, he didn't focus on wars and power struggles between states. Instead, he saw historical
development in terms of modes of production and economic classes. Now, “modes of production” might sound like
they’re about how stuff is made, but Marx understood them as stages of history. Primitive communism, feudalism, and
capitalism are all modes of production. And modes of production are all defined by a combination of forces of
production and relations of production. Forces of production are basically the technical, scientific, and material parts of
the economy – tools, buildings, material resources, technology, and the human labor that makes them go. In modern
capitalism, the forces of production include things like factories, oil, and the internal combustion engine. But they also
include cultural or social technologies, like the idea of the assembly line and mass production. The relations of
production, meanwhile, define how people organize themselves around labor. Do people work for wages, or does
everyone produce and sell their own goods? How does ownership or property work? Is trade a central part of the
economy? These are all questions about the relations of production. And these questions are important because, if you
think in terms of social constraints and surplus, the relations of production specify how the surplus is taken from the
people who produce it, and who gets to decide how the surplus is used. And, in capitalism, these relations aren’t all that
clear-cut.

For one thing, we don't have legally defined classes. In feudalism, being a lord or a peasant was a legal matter. If a
peasant didn’t work, their lord could legally punish them. But under capitalism there aren't any legal rules about who
labors and who doesn't. If you skip work you don’t get tossed in jail, you just get fired. But Marx was a historical
materialist, so in his view, even in feudalism, classes weren’t really defined by laws, they were actually defined by their
place in the relations of production. And when Marx looked at industrial capitalism taking shape around him, he saw two
main classes: the working class (or proletariat) and the capitalists (or the bourgeoisie). The proletariat are defined by the
fact that they don’t own or control the means of production – that is, the materials you need to use in order to labor and
produce goods.

One way of thinking about the means of production is as the inanimate part – the actual, physical stuff – that makes up
the forces of production. So this includes everything from the land to stand on while you work, to the raw materials you
need, like trees, and coal, and iron ore, to the tools and machines you use. To simplify things dramatically, the
proletariat are defined by the fact that, while they work in the factories and use resources to make things, they don’t
own the factories or the things they make. The bourgeoisie are defined by the fact that they do own the factories and
the things that are made in them. They control the means of production and the products that come from them. It’s this
difference in who controls the means of production, Marx said, that leads to exploitation in capitalism, in the form of
wage labor. If the proletariat lack access to the means of production, he argued, then they only have one thing they can
sell: their labor. And they must sell their labor. If they don't, they starve. Now you might argue that, hey, they're being
paid, right?

Well, Marx would counter that they’re only being paid enough to live on, if barely. More importantly, he’d argue that they're being paid less than the worth of what they produce. And it is that difference – between the value of the wage and the value of what’s produced – which is the source of surplus in capitalism. If a worker’s labor produces, say, $200 worth of goods in a day, but the day’s wage is $100, the other $100 is the surplus. You know this surplus as profit. And the
bourgeoisie get to decide what to do with the profits. Because of this, Marx believed that the bourgeoisie will always be
looking to make profits as large as possible, both by driving down wages and by driving up productivity. And this leads to
one of the big problems with capitalism: crises. Specifically, crises of overproduction. Other modes of production had
crises, too, but they were caused by not having enough. In capitalism, for the first time in history, there were crises of
having too much. We reached a point where the forces of production were so developed that we could produce far
more than we needed. But the vast majority of people couldn’t afford to buy any of it. And so we had crises where the
economy collapsed, despite the fact that there was more than enough to go around. Crises of overproduction are an
example of what Marx saw in every mode of production: the contradiction between the forces of production and the
relations of production. Marx understood history as a series of advances in the forces of production – like, greater
coordination among capitalists, more technological complexity, and more organizational innovation. But eventually, he
said, those advances always stall, as the forces of production run up against the limits created by the relations of
production. For example, in the early days of capitalism, the relations of production included things like private
ownership of property, competition among capitalists, and wage labor. And these things allowed for explosive economic
growth.
But eventually, these very same things became limitations on the forces of production – stuff like factories, technology,
and human labor. That’s because capitalists drove wages down in pursuit of profit, and they competed with each other,
leading to a lack of coordination in the economy. So you wound up with a population that couldn’t afford to buy
anything, while at the same time being offered way more goods than it would ever need. And, with the economy in
shambles, there's no way for the forces to keep developing – there’s no money to invest in new factories or new
technologies.
So the relations of production that created economic growth became precisely the things that caused crises. Marx saw
this as an impasse that all modes of production eventually meet. So how do you get a society to move past it?
Marx said the way forward was class conflict. History is a matter of struggling classes, he said, each aligned with either
the forces or relations of production. The bourgeoisie are aligned with the relations of production, he said, because
these relations are what allow them to extract surplus from the workers. So they're quite happy with the situation as it
stands.
But the proletariat want change. They want the further development of the forces of production – of which their labor
makes up a large part – and they want a complete change in the relations of production. They want an end to
exploitation and they want the surplus to benefit them. After all, it was their labor that created the surplus.
In short, they want revolution.
And so this is Marx's model of history: a series of modes of production, composed of forces and relations of production.
These forces and relations develop together until they eventually come into conflict, leading to a revolution by the
oppressed class and the institution of a totally new set of relations, where the workers benefit from the efforts of their
labor. Plenty of theorists followed in Marx’s wake, taking his idea of historical materialism and expanding it to better
deal with some of the areas that Marx had left out. Particularly interesting here is the work of the Italian theorist
Antonio Gramsci, who wrote in the years preceding World War II.

One of the big questions implicit in Marx’s theory is just how the bourgeoisie manages to stay in power so effectively.
And Gramsci answered this with the theory of hegemony. He argued that the ruling class stays in power, in part, through
hegemonic culture, a dominant set of ideas that are all-pervasive and taken for granted in a society. While they’re not
necessarily right or wrong, these ideas shape everyone's understanding of the social world, blinding us to the realities of
things like economic exploitation. But hegemonic ideas don’t need to be economic ones. They could just as easily be
beliefs about gender, or race. And this points to possibly Marx’s biggest impact. While Marx’s model of history is specific
to economic conflict, we can see in it the essence of the broader sociological paradigm of conflict theory.

Conflict theory is the basic idea of looking at power dynamics and analyzing the ways in which struggles over power
drive societal change, as all kinds of groups, not just workers and owners, fight for control over resources.
Marx’s ideas gave rise to a host of conflict theories in sociology, including Race-Conflict Theory, Gender-Conflict Theory,
and Intersectional Theory. These theories give us ways to understand power, control, and freedom in modern society,
and we’re going to be looking at them over the next couple of weeks. But for today, you learned about Karl Marx,
historical materialism and Marx’s basic perspective on history. You also learned about modes of production, their
development, and how they fit into Marx’s overall theory of historical development, along with class struggle and
revolution. And finally, we saw how Marx’s ideas gave rise to Gramsci’s idea of hegemony, and to conflict theories more
generally.
DuBois and Race Conflict Theory

Two bachelor's degrees. PhD from Harvard University. Two-year fellowship to study in Berlin. Professor of sociology and
history at two different universities. Author of countless books. Activist and co-founder of a key civil rights organization.
Editor and co-founder of a magazine. And a poet to boot. Pretty good resume, yeah? What if I make it a bit more
impressive? That PhD from Harvard? First Harvard PhD granted to an African American. The civil rights organization? The
NAACP. That magazine? The Crisis, the longest running black publication in the United States, in print since 1910.

This resume belongs to William Edward Burghardt DuBois, whom you might know better as W.E.B. DuBois. He was one
of the earliest American sociologists, as well as one of the first proponents of race-conflict theory. And his studies of the
lives of African Americans during the Jim Crow era of American history – and the oppression they faced – are the
cornerstones of how sociologists study race.

W.E.B. Dubois was born in a small town in Massachusetts in 1868. 1868 – that’s five years after the Emancipation
Proclamation. Three years after the end of the American Civil War. And the same year that the 14th Amendment was ratified. At this time, race was considered a biological construct. Slavery, and later the Jim Crow laws – laws in the South that enforced racial segregation – were framed as natural consequences of the supposed natural inferiority of Blacks to
Whites. We, of course, now know that this was not just wrong, but deeply harmful. And more than that – the idea that
race itself is a purely biological, immutable quality is also understood today as being simply untrue. Instead, race is
thought of as a socially constructed category of people, who share biological traits that society has deemed important.
Yes, human beings vary a lot in how we look – our skin color, our facial features, our body shapes, our hair texture. But
those visual markers only become a “race” when members of society decide that specific markers constitute a specific
racial group. This is why the concept of race often changes, across cultures and times. For example, when Dubois was
alive, Irish and Italian Americans weren’t considered ‘white,’ either. But today, try telling some Boston Southie guy or an
Italian grandma from Pittsburgh that they’re not white. See what they say. Did something change about Irish and Italian
Americans biologically? Of course not. It’s how society saw them that changed. And it’s that last bit – what race a person
is seen as, and how they’re treated as a result – that ends up being a huge determinant of a person’s social outcomes.
Dubois began to consider his race as a part of his identity when he moved to the South to go to college, and then spent several years in Europe. He saw how differently black people were treated in different places, and was disillusioned by how Americans treated him based on his skin color. He can describe this disillusionment much better than I can:
“One ever feels his twoness,” he wrote, “an American, a Negro; two souls, two thoughts, two unreconciled strivings; two
warring ideals.” This quote reveals a really critical underlying thread in much of Dubois’ work – the idea of double-
consciousness.

Dubois argued that Black Americans hold two competing identities – seeing one’s self as an American and seeing
one’s self as a Black person while living in white-centric America. Living as a member of a non-dominant race, he said,
creates a fracture in your sense of identity within that society. These feelings are what fueled Dubois’ work, which
focused on the disparities and conflicts between people of different races – what we now call race-conflict theory.
Today, questions of race and identity are studied by sociologists who work on racial identity theory, which looks at how
individuals come to identify as a certain race. Dubois didn’t only research racial identity, though – he also looked at the
everyday lives of black and white Americans and wrote extensively about how and why their lives differed so drastically
in post-slavery America.

Let’s go to the Thought Bubble to look at one of Dubois’ early studies of these disparities.
In 1896, the University of Pennsylvania hired Dubois to do a survey on Black communities in Philadelphia. His work
eventually became ‘The Philadelphia Negro,’ the first published study of the living conditions of African Americans.
Dubois went knocking on doors, asking people questions about themselves and their families. And there were an awful
lot of doors. All told, Dubois collected data on 9,675 African Americans. He focused on one specific ward of Philly – the
7th ward, a historically Black neighborhood that attracted families of all classes, from doctors and teachers, to the poor
and destitute. He sat in thousands of parlors, asking questions about age, gender, education, literacy, occupations,
earnings, and crime, and documented the ways in which African-Americans differed from Philly’s white residents.
For example, the Black population turned out to be much younger than the White population and had a higher
proportion of women. It also had lower literacy rates, higher rates of poverty and crime, and a higher concentration of
workers in the service industry than in manufacturing or trade.
Mortality rates were higher, as was the frequency of illness. But here’s what made Dubois’ report especially unique:
He concluded that much of the dysfunction within Black communities came from their inferior access to things like
education and more lucrative jobs. The reason that the black population had higher rates of death and illness, he said,
was because of occupational hazards, and poverty, and less access to health care. It’s hard to express just how radical
Dubois’ conclusions were at the time. The problems in black communities were not due to racial inferiority, Dubois
argued, but to racial prejudice. And that was completely different from how many Americans thought at the time.
Thanks, Thought Bubble!

So, race doesn’t exist in a vacuum. It doesn’t just imbue you with certain essential qualities. Instead, race matters
because of the power that society gives it. For another example, let’s stick with Philly and use the labor unions there in
the 1890s. Because of prejudice against Black workers, and beliefs about their abilities and morals, trade labor unions
didn’t allow Black workers to join. And because they couldn’t join unions, many Black workers couldn’t get
manufacturing or trade work – which paid much better than service work. And because they couldn’t get these jobs,
Black communities had more men out of work, higher rates of poverty, and more criminal behavior. Which then allowed
the white workers and unions to justify their decision to not allow black workers into their union. The prevailing beliefs
about race and racism ultimately reinforced themselves. This is what’s now known as racial formation theory, a theory
formalized by modern sociologists Michael Omi and Howard Winant.

Racial formation refers to the process through which social, political, and economic forces influence how a society
defines racial categories – and how those racial categories in turn end up shaping those forces. Omi and Winant argue
that the concept of race came about as a tool to justify and maintain the economic and political power held by those of
European descent. Another modern look at these issues can be seen in the work of sociologist William Julius Wilson.
He explores why Black and White Americans tend to have such different outcomes in terms of income, education, and
more. And he argues that class, not race, is the determining factor for many Black Americans. But the reasons that these
class gaps exist to begin with come from the structural disadvantages that date back to Dubois’ time. Dubois continued
to research the ways in which prejudice, segregation, and lack of access to education and jobs were holding back African
Americans.

A strong advocate of education and of challenging Jim Crow laws, he clashed with another leading black intellectual of
the time, Booker T. Washington, who advocated compromise with the predominantly white political system.
Over time, DuBois grew frustrated with the limits of scholarship in effecting change,
so he turned to direct activism and political writing.
In 1909, he co-founded the National Association for the Advancement of Colored People, or the NAACP, and was the
editor and intellectual driving force behind its magazine, The Crisis.
The NAACP fought against lynching, segregation of schools, voting disenfranchisement, and much more.
It used journalism as one of its most powerful tools, publishing the records of thousands of lynchings over a thirty-year period.
And it used lawsuits, targeting voter disenfranchisement and school segregation in decade-long court battles.
And, after DuBois’ time, it went on to become part of many of the landmark moments in the fight for civil rights,
including the Brown v. Board of Education case, the Montgomery Bus Boycott, and the March on Washington.
Modern sociologists continue Dubois’ work on racial politics, asking the question:
How is race intertwined with political power, and the institutional structures within a society?
Sociologist Eduardo Bonilla-Silva, for example, argues that we now have what he calls “racism without racists.”
What he means is: Explicitly racist views have become less socially acceptable, so fewer people are willing to say that
they don’t think Black and White Americans should have equal rights.
But, as Bonilla-Silva points out, that doesn’t mean racism is a thing of the past.
Instead, he says, structural racism – the kind that’s entrenched in political and legal structures – still holds back the
progress of racial minorities.
Take, for example, the fact that the median wealth of white Americans is 13 times higher than the median wealth of
black Americans.
Now, you could look at that and say, well, black people just aren’t as good at saving as white people. After all, it’s not
like there’s anything legally preventing them from making or saving more money. But that completely ignores the ways
in which wealth builds up over generations. Past generations of Black Americans were unable to build wealth, because
they had far less access to higher incomes, banking services, and housing. These ideas about how the structures of
power interact with race may have their origins in Dubois’ work, but they continue today. And so do his studies of racial
resistance.

Researchers of racial resistance ask: How do different racial groups challenge and change the structures of power?
Sometimes racial resistance is easy to see in society. Think the Civil Rights movement of the 1950s and 60s, or Black
Lives Matter today. But sociologists can also look at more subtle forms of resistance, too, like resistance against racial
ideas and stereotypes.
For example, sociologist Patricia Hill Collins has written about the different relationships that black and white women
have had with marriage and staying home to raise a family. One of the key issues of the feminist movement of the 1960s and 70s was the exclusion of women from the workforce. Entering the workforce was seen as a form of resistance.
But Black women have, for most of American history, been forced to work, or needed to work to help support their
families. For them, Collins argues, joining the workforce is not resistance.
Instead, staying at home to care for their families can be an act of resistance against society’s expectations for Black
women. All of these modern fields of study within race-conflict theory – racial identity, racial formation, racial politics,
and racial resistance – they all have their origins in the work of one sociologist: W.E.B. Dubois. Today we talked about
W.E.B. Dubois, one of the founders of sociological thought and the founder of race-conflict theory.
Émile Durkheim on Suicide and Society

So, the fact that we have society at all is kind of amazing. Think about it: People with different interests, different
amounts of money, members of different subcultures, races, and sexual orientations, somehow all manage to hold
together, in this thing we call society. A thing that, at least kind of, works. But it doesn't just hold together. Society has to
somehow endure periods of intense change without falling apart.

Political change, technological change, population growth, economic crises – all these things can be massively disruptive.
Sometimes we might even worry that the fabric of society won't be able to take the stress.
And it’s these questions of how society holds together, and how to understand when it goes wrong, that Émile
Durkheim, one of the founders of sociology, tried to answer.

You know who knows a thing or two about social disruptions? France. Émile Durkheim lived in France from 1858 to
1917, which means that he lived almost his entire life under France's Third Republic, founded in 1871. But, despite being
the third republic, it was the first stable republic in France's history. Between 1800 and 1871, France was governed by
two republics, two monarchies, and two empires. But the turmoil wasn't just political. France was also dealing with
major economic, technological, and cultural changes, as industrialization took hold, and the traditional authority of the
Catholic Church weakened.

Given all this, it should be no surprise that Durkheim was concerned with the question of what kept societies together,
so that he could make sure that his own didn't fall apart again. And this was the task of sociology, as he understood it.
Sociology was to be a truly scientific study of society. With it, we could understand its normal and abnormal functioning,
we could diagnose how it was changing, and we could deal with the consequences. To Durkheim, sociology was to
society what biology and medicine were to the human body. He actually thought of society as a kind of organism, made
up of different parts, which all had to function well together in order for that organism to be healthy. This basic
understanding of society in terms of structures that fit together, and which function either well or poorly, makes
Durkheim the founder of the structural functionalist paradigm.
Now, if sociology was to be a true science, then it needed well-defined methods. And Durkheim focused a lot of his
effort on this problem. He was committed to sociology as an empirical endeavor. And his ambitious book, called
“Suicide,” is really the first piece of sociological work to use statistical methods as its primary mode of argument.
Durkheim was also the first in the field to think in terms that we now consider standard in sociology. Like, thinking about
the problem of operationalizing variables, and puzzling over how intangible concepts, like social integration or solidarity,
can be reflected in things that we can actually measure.
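To make that idea of operationalizing a little more concrete, here's a minimal sketch – with made-up numbers and modern tools, definitely not Durkheim's actual data or method – of how you might turn an intangible concept like social integration into something measurable, using a region's rate of religious affiliation as a stand-in:

```python
# A minimal sketch of "operationalizing" an intangible concept.
# Proxy: a region's religious-affiliation rate stands in for social integration.
# All numbers are hypothetical, for illustration only -- not Durkheim's data.

# (religious_affiliation_rate, suicides_per_100k) for five hypothetical regions
data = [
    (0.90, 8.0),
    (0.75, 11.0),
    (0.60, 15.0),
    (0.40, 19.0),
    (0.25, 24.0),
]

def pearson_r(pairs):
    """Pearson correlation between the two columns of (x, y) pairs."""
    n = len(pairs)
    mean_x = sum(x for x, _ in pairs) / n
    mean_y = sum(y for _, y in pairs) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in pairs)
    var_x = sum((x - mean_x) ** 2 for x, _ in pairs)
    var_y = sum((y - mean_y) ** 2 for _, y in pairs)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

# A strongly negative r reads as: less integration (by this proxy),
# more suicide -- the kind of pattern Durkheim argued for.
print(f"r = {pearson_r(data):.2f}")
```

And beyond this question of method lies an even bigger question: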
If sociology is a science, then what does it study? Durkheim thought that any science needed a well-defined object of
study. And the object for Durkheim was the social fact.

In his book, “Rules of Sociological Method,” he defines social facts as "consist[ing] of manners of acting, thinking and
feeling external to the individual, which are invested with a coercive power by virtue of which they exercise control over
him." OK, there are three things to highlight in this definition.
First is the fact that it’s really broad. Social facts include everything from political systems, to beliefs about right and
wrong, to suicide rates, to holiday celebrations, and architectural styles. Second, notice that social facts are external to
the individual. This might seem a little confusing; I mean, how can a way of thinking be external to a person? But what
Durkheim means here is that social facts have a life outside of you or me. For instance, if you give gifts at Christmas,
think for a second about why. That’s not something that you came up with on your own. Giving gifts at Christmas wasn’t
your idea. It’s a social fact, with an existence that’s external to you. If you don’t celebrate Christmas, the strength of
Christmas as a social fact in the US means you’ve probably already experienced the third thing I want to highlight: The
idea that social facts are powerful, and coercive, and they can make you do things you otherwise wouldn’t.

Don’t believe me? Let’s go to the Thought Bubble.


Imagine a hypothetical family at a hypothetical Christmas. None of them want gifts, and all of them have better things to
do than spend money buying gifts for anyone else. In fact, none of them are even that committed to celebrating
Christmas at all. And yet, come Christmas morning there’s a pile of presents under the tree. And there’s a tree there in
the first place! Why? Well, maybe no one was willing to say that they didn't want a gift. Or maybe they all said that, but
they each bought gifts anyway, because they were afraid that the others would too. The point is, the specific
explanation for the behavior in this family doesn't really matter. What's important is that we can see here the power of
a social fact, even in a situation where no one directly involved believes in it! If that's not an external coercive power, I
don't know what is. But this doesn’t just happen with gift giving at Christmas. Social facts include all kinds of things. They
help dictate how you interact with your neighbors and how you relate to society. Social facts and their coercive power
represent a form of social cohesion, which points us back to our original question: how societies hold together and how
they can go wrong.
Thanks, Thought Bubble!
Durkheim's answer to the question of social cohesion is what he called the common or collective consciousness. The
common consciousness is basically the collection of all the beliefs, morals, and ideas that are the social facts in a given
society. And, like with gift-giving at Christmas, these beliefs aren’t necessarily held by everyone. They’re just the beliefs
that hold coercive power. They’re the ideas that people give life to, in their interactions with one another. So, common
consciousness holds a society together. But what are the problems? What is social dysfunction? For Durkheim, if society
is an organism, then dysfunction must be thought of as a disease. Now, you might think that something like crime would
be a social dysfunction. But, by Durkheim’s thinking, crime can't be a disease, because every society has it. So, you might
not like crime, but some amount of crime is normal. In the same way, you might wish you didn't have to sleep, but that
doesn't make sleeping a disease. It's just a normal part of the way the human body works. And just like sleep, Durkheim
argued that crime serves a purpose.

For example, he said that crime helps strengthen the common consciousness. To him, crime and punishment were a
kind of public lesson in right and wrong: When someone is judged and punished, that shows us both society's morals
and how strong those morals are. Crime can also point to possible changes in the common consciousness. When Rosa
Parks refused to give up her seat and move to the back of the bus, she committed a crime. But her crime set off a city-
wide bus boycott that resulted in the law being struck down. So crime in and of itself isn’t necessarily a dysfunction, but,
just like how sleeping 18 hours a day, every day might be a sign of disease, if the level of crime in a society becomes
excessive, it would eventually stop serving these functions, and the society could no longer function normally. And that’s
what social dysfunction is for Durkheim: something that impedes the normal functioning of society.

Since Durkheim is a structural functionalist, social dysfunctions always have larger structural causes – they’re created by
some underlying problem with the social organism. Durkheim applied this idea in his famous book on suicide.
Now, it might be strange to think of suicide as social at all, but Durkheim argued that there was actually a very strong
link between societal structure and people taking their own lives. And he found this link in a dysfunctional aspect of his
society: namely, in a lack of social integration. When Durkheim looked at the statistics on suicide in Europe over the 19th
century, he saw a massive increase, one that coincided with the shift from traditional to modern society. Durkheim
argued that traditional societies – like, those of feudal Europe – were highly socially integrated. People knew their place
in society, what that place meant, and how they related to other people.

But modern society, over the preceding century, had suffered from a loss of social integration. The decreasing
importance of religion, and of other traditional ways of thinking, resulted in a smaller, weaker common consciousness
and a less intense communal life. As a result, people were less strongly bound to their society. They didn’t necessarily
feel they had a place in it and couldn’t understand how they fit. This, Durkheim argued, resulted in a dramatically
increased suicide rate.
Now, suicide is certainly a personal act, motivated by personal feelings or psychological conditions. But Durkheim
showed how these personal feelings were not purely personal, and that they were influenced by the structure of society.
In this case, he argued that the values holding society together were being pulled apart, and so people lost their sense of
place.

Feelings of isolation or meaninglessness could be traced back to large social changes. And Durkheim, in diagnosing the
problem, believed he had a solution. If a high suicide rate was a disease, Durkheim’s prescription was to strengthen
social organizations – especially those based around the workplace, because that’s where people were spending more
and more of their time. He figured that these organizations – sort of like workers’ guilds – could help provide people
with that sense of place that they were lacking. Now, many sociologists today see that Durkheim’s work on suicide was
undermined by the poor quality of statistics at the time.

But still, he used those statistics in new ways, as evidence and tests for theories of society. And you can see in his
research how Durkheim tried to answer big questions. Society is composed of social facts, and bound together by
common consciousness. This normal functioning can evolve, but it can also be disrupted by rapid change.
And that, Durkheim believed, is where sociology steps in. By studying society scientifically, and understanding social
facts, sociologists can diagnose the disease and prescribe the cure.
Harriet Martineau and Gender Conflict Theory

Where my ladies at? Seriously, we’ve spent a lot of time learning about the origins of sociology, and all of the founders
we’ve talked about so far have been men. That’s because, when sociology was becoming an academic discipline, women
didn’t have the same access to education. In fact, it was considered improper in the 19th century for women to write
articles and give talks to the public. And this continued for decades, with some of the top universities not allowing
female students until the 1970s. Which sucks. But it also raises an important question: Why do women and men get
treated differently? This is a question that sociologists can answer! Or, well, we can at least try to answer it. Gender-
conflict theory applies the principles of conflict theory to the relations among genders. Specifically, it looks at how social
structures perpetuate gendered inequalities. Now, the functionalist approach has historically held that gender
inequalities are a natural result of each gender taking on the tasks they’re best suited for. But many modern sociologists
don’t share this view, arguing that economic and political power structures that reinforce traditional gender roles often cause more dysfunction than function.
Restricting access to education by gender is a great example of this dysfunction: Denying women access to quality
education makes our society worse by squashing half the world’s potential! Sociology’s understanding of society
wouldn’t be complete without the women and feminists who started the conversation about gender as an academic
field of study.
First stop: sociology’s forgotten founder, Harriet Martineau.

Harriet Martineau was the first female sociologist, born in 1802 in England. Unlike Marx or Durkheim or Weber, who are
hailed as the forefathers of sociology and get entire chapters devoted to their theories, Martineau typically gets, at
most, a couple of sentences in a textbook. Martineau started out kind of like the Crash Course of her time – bringing
research to the masses in easily digestible bites. She wrote a best-selling series called Illustrations of Political Economy, which used fables and a literary style of writing to bring the economic principles of Adam Smith to the general
public. She was a favorite of many of the leading intellectuals of the time. Even Queen Victoria was a fan, loving Martineau’s writing so much that she invited Martineau to her coronation.
But this was just the start.
Martineau decamped for the United States and spent two years travelling the country, observing social practices. She
went from North to South, from small towns to Washington DC, sitting in on sessions of Congress, a Supreme Court
session, and a meeting with former President James Madison. She then captured her observations in two books, Society in America
and How to Observe Morals and Manners. The first was a set of three volumes that identified principles that Americans
professed to hold dear, like democracy, justice, and freedom. Then she documented the social patterns that she
observed in America, and contrasted the values that Americans thought they held with the values that were actually enshrined in their economic and political systems. Martineau’s work included some of the first academic
observations of American gender roles, and she dedicated much of the third volume to the study of marriage, female
occupations, and the health of women. Despite the title of her second book – How to Observe Morals and Manners – it
was not a guide to etiquette. It was a treatise on research methodology, describing how to do cross-cultural studies of
morals and moral behavior. Martineau talked about interviewing, sampling, bias in observation, the problem of
generalizing from individuals to a whole society – many of the hallmarks of modern research. She wrote about class,
religion, suicide, nationalism, domestic life, gender, crime, health – and this was all before Marx, before Durkheim,
before Weber. And her English translations of Comte’s work on positivist sociology were so good that Comte himself
told her:
“I feel sure that your name will be linked with mine.”

But of course, Comte was wrong. Soon after her death, Martineau’s work was forgotten.
It wasn’t until the 1970s, when feminist scholars began to revisit her work, that the full extent of her influence on
sociology began to be realized. That’s right, feminist scholars. Now, I know for many people feminism is a loaded term.
And I want to make sure we’re clear about the historical and sociological context for feminism as I’m using it here. Here,
we’re defining feminism as the support for social equality among genders. This is in opposition to patriarchy, a form of
social organization in which institutional structures, like access to political power, property rights, and occupations, are
dominated by men. So feminism isn’t just associated with activism; it’s also a scholarly term.

Feminist theory is one school of thought in the study of gender. And over time, feminism has gone through many
different forms, often categorized as waves.

Let’s go to the Thought Bubble to look at what’s known as feminism’s first wave.

In the 19th and early 20th century, the first wave of feminism focused on women’s suffrage – or, the right to vote – and
other legal inequalities. That’s because, in the 19th century, all property and money that a married woman had legally belonged
to her husband. Imagine that. Not being able to earn a salary that was your own, not being able to own land, not being
able to write a will. And on top of that, you can’t vote, which makes it a little hard to change these things. It was these
issues that prompted the start of the Women’s Rights Movement, which began with a meeting of 300 like-minded
women – and a few men – in Seneca Falls, New York in 1848. Early feminists Elizabeth Cady Stanton and Lucretia Mott
organized the meeting to put forth a manifesto on women’s rights, which became known as the Declaration of
Sentiments. This convention was the spark that set off the women’s suffrage movement in the United States. It took
many years of activism – court cases, speeches, protests, and hunger strikes – until women finally won the right to vote
in 1920. Thanks, Thought Bubble.

The first wave of feminism didn’t only affect legal issues. It was also where many of the ideas about societal roles of
gender first got their start. Take Charlotte Perkins Gilman, for example. You might recognize her as the author of the
short story “The Yellow Wallpaper.” But she was also a sociologist and social activist. Early in the 20th century, she
published papers and books on society’s assumptions about gender, focusing on marriage, childbearing, and the
assumed roles of women as housekeepers and men as breadwinners. She wrote: “There is no female mind. The brain is
not an organ of sex. Might as well speak of a female liver.” Notice how she worded that – the brain is not an organ of
sex. Sex refers to biological distinctions between females, males, and intersex individuals. But gender refers to the
personality traits and social roles that society attaches to different sexes. Think about it this way: Do men and women
act the same way across all cultures and time periods? If gender arose only from biological differences between men
and women, we would expect to see all cultures defining femininity and masculinity in the same ways.
But we don’t.

From the work of anthropologist Margaret Mead in the 1930s, to the research done today by economists Uri Gneezy and
John List, scientists have found that gender roles change among societies, and over time. And this idea – the idea that
gender has societal origins – has formed the backbone of the second wave of feminism. Books like The Second Sex by
Simone de Beauvoir and The Feminine Mystique by Betty Friedan argued against the idea that women were a lesser sex,
who should be relegated to taking care of children and the home.

The second wave of feminism focused on female participation in the labor force, equal pay, reproductive rights, sexual
violence, educational equality, and divorce. This was the era of Title IX, the legalization of contraception and abortion,
no-fault divorce laws, and the Equal Pay Act. But it was also an era of divisiveness within the feminist movement, with
many feeling that women in positions of power focused on issues most relevant to white, upper middle class women.
These divisions led to what’s known as the third wave of feminism, starting in the 1990s, which has focused on
broadening the definition of feminism to encompass issues of race, class, sexuality, and other forms of disadvantage.

The ideas evoked by the third wave are nicely represented by author and feminist bell hooks: In her book “Ain’t I a
Woman,” hooks writes: “The process begins with the individual woman’s acceptance that American women, without
exception, are socialized to be racist, classist and sexist, in varying degrees ....” That’s a heavy statement.
Most people don’t think of themselves as racist or sexist. But one of the underlying ideas behind third wave feminism is
the acknowledgement of the structures of power that create inequality across gender, race, class, and other dimensions
of disadvantage. There’s a term that’s used a lot in modern day feminism, which maybe you’ve heard used recently:
intersectionality. So what is intersectionality? You add a little race-conflict theory in with gender-conflict theory, and a
smidge of Marx’s theories about class conflict – and you get intersectionality, the analysis of how race, class, and gender
interact to create systems of disadvantage that are interdependent. The term intersectionality was coined by race and
gender theorist Kimberlé Williams Crenshaw. She wrote that the experience of being a black woman couldn’t be
understood just by understanding the experience of a black person, or the experience of a woman independently.
Instead, you have to look at how these identities intersect. How you – yes, you, in particular, you – see society and see
yourself is gonna be wrapped up in the identities you have. I, as a cisgender white woman, will have a different
experience in the world as a result of my own interlocking identities.
And when it comes to our understanding of gender in this societal mix, we have to thank Harriet Martineau, whose work
was one starting point from which the waves of feminism unfolded.
Max Weber and Modernity

Take a second and imagine life just over five hundred years ago. Say you’re in Europe, at the tail end of the Middle Ages.
If you had to name the biggest change between then and now, what would you choose? Maybe the internet, or
industrialization, or the incredible advances in health and medicine. Maybe you'd think back to Marx or Durkheim and
say that it was the shift from feudalism to capitalism. These are all good answers. But Max Weber had a different one.
The most important change wasn’t technical, or economic, or political. The biggest change, he said – the one that best
distinguishes the modern world from the traditional one – was a difference in the way we think.

Like most of the theorists we've studied so far, Max Weber lived at the end of the 19th century, and the turbulent
changes of that time influenced him as they did all the others. He lived during the formation of the first German national
state, and watching this process first-hand made him concerned with understanding modern society. So in his work,
Weber examined some of the defining characteristics of the modern world, which sociologists have now spent over a
hundred years studying and arguing about: He focused on ideas like rationalization, bureaucracy and social stratification.
And when he saw where modernity was heading, he was really worried. But to understand what he was so worried
about, you need to understand how we got to modernity in the first place. And for Weber, the real defining features of
the transition from traditionalism to modernity were ways of thinking. Not modes of production or social integration,
but ideas.
To think traditionally is to take the basic set-up of the world as given. In other words, traditionalism sees the world as
having a basic order, and that order is the way things ought to be. We can see this very clearly in feudalism and divine-
right monarchies: The monarch is understood as having been anointed by God, and you owe them your allegiance
regardless of whether they're good or bad at their job. The question of whether or not they deserve the position never
even comes up. But if traditionalism takes things for granted, modernity doesn't.

In modernity, everything is up for grabs. What Weber saw when he looked at history was that societies, and people,
were becoming more rational. They were undergoing a process of rationalization. And Weber's definition of rationality
included three specific things: calculability, methodical behavior, and reflexivity. Calculability means that, if we know the
inputs, we can know the outputs. Just think of a bowl made in a factory versus one you make yourself: Every single bowl
comes out of the factory exactly the same, whereas no two bowls you make by hand are gonna quite match.
Now, the reason that we know the outputs, if we know the inputs, is because there’s methodical behavior involved – a
procedure to follow.
In the factory, the method is in the machines. So the results are going to be the same, no matter who’s pulling the
levers.
Finally, thinking rationally for Weber meant thinking about what you're doing, in other words, thinking reflexively. You're
constantly looking for new ways to improve the process, for new and better and more efficient ways to make bowls.
So, traditional society is the society of individual artisans, each with their own process, which is how it’s always been
done. But modern society is the society of explicit instructions and standardized, methodical procedures which are
always being reflected on and improved.
So what caused this massive shift in how people think?
What kicked off this process of rationalization?

Here, Weber gave what might seem like an unlikely answer: religion. In his book The Protestant Ethic and the Spirit of
Capitalism, Weber argued that the transition from traditionalism to modernity began with the Protestant Reformation.
For hundreds of years, the Catholic Church dominated Medieval Europe until, in 1517, a German priest named Martin
Luther denounced corruption in the church. This set off a series of new religious movements that radically opposed
Catholic dogma. This was the Protestant Reformation.
And it’s in these new movements that Weber saw the origin of modern rationality. Catholicism, after all, was the basis
for the traditional worldview. Everything in medieval life – from the structure of the social order to the way you farmed
– was the way it was, because God willed it. By contrast, in Lutheranism, you still have a divinely sanctioned place in the
world, but for the first time, the question of how well you are performing your role became important. This idea – of
personal responsibility – opened the way for another important figure in Weber’s view of history: John Calvin.

Calvin didn’t believe that God could possibly be concerned with anything that one measly little human might do. Instead,
Calvin believed in predestination, the idea that your fate, whether you’re saved or damned, has already been set by
God, from the beginning of the universe. And there's nothing you can do to change it: you're either one of the “elect” or
you’re not. So, how do you know? Through what Calvinists called a "proof of election.” And here’s where personal
responsibility really comes in: The proof that you were saved was to be found in how you lived your life. So the point of
your life was no longer that it was divinely appointed – it became a matter of how well, or how much, you worked. And, by
extension of this logic, success itself became proof of election: If you’re financially successful, then that was a sign that
you were blessed by God! Suddenly you didn’t just work until your needs were met, as you did in traditional society.
Now the work was an end in itself, and you worked to accumulate as much wealth as possible, because wealth proved
you were saved. This is the sociological consequence of the Protestant Reformation that Weber studied and understood:
It transformed a communal, traditional society into an individualistic, capitalist society – one that was focused on
economic success.

And I don’t know if you’ve noticed, but modern capitalism is nothing if not rational. Think about it: You must work
methodically in your calling. You must constantly be reflecting on your work, in order to work more efficiently and
productively. And you use profit as a calculable measure of your success. So rationalization gets its start in religion,
around questions of how we work. But Weber spent his career showing how all of society came to be organized
rationally.
In fact, if you've ever been to the DMV, you've seen what Weber argued was one of the biggest impacts of
rationalization in society: the rise of bureaucracy. Bureaucracy is a key part of the transition from the traditional to the
modern state, and Weber identified six traits that make it both extremely rational and very efficient: It’s composed of a
hierarchy of positions with an extremely clear chain of command. This hierarchy is made up of a variety of very
specialized roles and is held together by formal, written communications. The people in the bureaucracy accomplish
their work with technical competence, according to detailed rules and regulations, and they do it without regard to the
particular people they're serving – in other words, they do it impersonally. And I can’t think of a better example of all
these traits than the bureaucracy you see at the DMV. The workers do their jobs competently and according to the rules,
and they treat you just like they would treat anyone else, without regard for your personal characteristics, that is,
impersonally.
But there isn't only this change in the way the state works, there's also a change in the way the state is obeyed.
In a traditional society, Weber believed that the state ruled through traditional legitimacy: people followed the king
because that's how it had always been done. But the modern state works differently: It rules through a combination of
what we call legal-rational and charismatic legitimacy.

Legal-rational legitimacy is essentially a belief in the system itself. You follow the rules because they are the rules.
So, the DMV employee doesn't ask for a certain piece of paperwork because that's how it's always been done. He does it
because that's how the procedure instructs him to do it. If the manual were rewritten tomorrow, he’d do it that way
instead.
But there's a problem with legal-rational legitimacy, and with bureaucracy in general: If it's all about following the rules,
well, somebody needs to make the rules. That's where charismatic legitimacy comes in. You follow the commands of a
charismatic leader because of the extraordinary characteristics of that person. So the modern state is an apparatus of
rules which are ultimately directed by a group of charismatic leaders. In the US, for instance, when we go to the polls
every four years, we're making a choice about who’s going to direct the bureaucracy, and we make that choice based on
the characteristics of the people running. And here’s another one of Weber’s major contributions to sociology:

The idea that the people who run to be leaders of our bureaucracies do so with the support of political parties. For
Weber, political parties were a key example of social stratification, or the way that people in society are divided
according to the power they hold. Weber didn’t think that society was divided purely based on economic classes, or your
relationship to the means of production, like Marx did. He argued that the system was more complicated, and consisted
of three elements.
Like Marx, he included class, but he didn’t think that classes had unifying interests. Weber also included political parties,
defined broadly as groups that seek social power. And finally, he included status groups, defined by social honor, which
includes things like respect and prestige.

All three of these things – class, power, and status – affected a person’s place in society. More importantly, each of these
elements of social stratification could vary independently. So there could be a poor priest, say, who is high in social
prestige, but of a low class. Or a lottery winner, who is of a high class but low in status. Or a bureaucrat in a political
party, who has some measure of political power, but not necessarily money or status. And then there are those who can
turn their fame, or status, into political power. Unlike Marx, Weber didn’t take a particularly critical stand on
stratification in society. But that doesn’t mean he didn’t see its problems. For Weber, rationalization was the defining
feature of the modern age, and he was deeply worried about it. Remember, rationalization is about three things:
calculability, methodical behavior, and reflexivity. But it's really easy to lose reflexivity – to stop reflecting on your work
or your role – and instead become locked in a calculated routine that becomes meaningless and unthinking. Weber
worried that the systems that rationalization built would leave behind the ideas that built them, and that they’d simply roll
on forever, meaninglessly, under their own momentum. He worried that we'll become locked in what he called an "iron
cage" of bureaucratic capitalism, from which we can’t escape; our lives will become nothing but a series of interactions
based on rationalized rules with no personal meaning behind them. This worry about meaning, and the concern for
ideas and how they shape our reality, is one of the big influences that Weber handed down to future sociologists. On the
micro level, these ideas were picked up by what’s known as the symbolic interactionist paradigm and theorists like
Erving Goffman. Meanwhile, theorists like Talcott Parsons and Jürgen Habermas took on the more macro version of
these questions, looking at processes like rationalization and bureaucratization and culture more generally.
Symbols, Values and Norms

You’re about to cross a street. What do you do? If there are no cars coming, do you stay at the crosswalk, waiting for the
light to change? Or do you just go for it? Do you look left first before you cross, or do you look right? Or maybe you just
dart across the street, shouting, ‘Hey I’m walking here!’ No matter what you do in this situation, what you do is going to
depend on culture. Now you may be thinking, how can something like crossing the street be a cultural phenomenon?
Isn’t culture, like, opera and galas and fancy art openings with tiny hors d’oeuvres? Or maybe you think culture is bigger
than all that, that culture is your heritage, traditions that have been passed down for generations, like Quinceañeras, Bar
Mitzvahs, or Sweet Sixteen parties. The fact is, all of these things – street-crossing, fine arts, and traditional rites of
passage – they are all part of culture.

Culture is the way that non-material objects – like thoughts, actions, language, and values – come together with material
objects to form a way of life. So you can basically break culture down into two main components: things and ideas.
When you’re crossing the road, you can see markers of your culture in the things around you – the street signs, the
width of the road, the speed and style of the cars. This is material culture, the culture of things. Books, buildings, food,
clothing, transportation. It can be everything from iconic monuments like the Statue of Liberty to something as simple as
a crosswalk sign that counts down how many seconds you have to cross the street. But a lot of the culture that’s packed
into crossing the street is non-material, too. We interpret the color red to mean stop – because our culture has assigned
red as a symbol for stop and green for go. And if you grew up in a country where cars drive on the right side of the road,
your parents probably taught you to look left first before crossing. This is non-material culture, the culture of ideas.
It’s made up of the intangible creations of human society – values, symbols, customs, ideals. Instead of the Statue of
Liberty, it’s the idea of liberty and what it means to be free. For our purposes as sociologists, we’ll mainly be focusing on
this second type of culture and its three main elements: symbols, values and beliefs, and norms. Symbols include
anything that carries a specific meaning that’s recognized by people who share a culture.
Like a stop sign. Or a gesture. If I do this [holds up one hand, palm out, then just 1 finger], you probably know what I
mean: hold on a sec. Non-verbal gestures like this are a form of language, which is itself a symbolic system that people
within a culture can use to communicate.

Language is more than just the words you speak or write – and it’s not just a matter of English or French or Arabic. The
type of language you use in one cultural setting may be entirely different than what you’d use in another. Take how you
talk to people online. New linguistic styles have sprung up that convey meaning to other people online, because internet
culture. See, there’s one right there!

If you’re internet fluent, me saying ‘because’ and then a noun makes perfect sense, as a way of glossing over a
complicated explanation. But if you’re not familiar with that particular language, it just seems like bad grammar.
Whether it’s written, spoken or non-verbal, language allows us to share the things that make up our culture, a process
known as cultural transmission. And one view of language is that it not only lets us communicate with each other, but
that it also affects how people within a culture see the world around them. This theory, known as the Sapir-Whorf
hypothesis, argues that a person’s thoughts and actions are influenced by the cultural lens created by the language they
speak.

Let’s go to the Thought Bubble to see an example of the Sapir-Whorf hypothesis in action. What gender is the moon?
For English speakers, this question might just conjure images of the man in the moon, but in many languages, nouns
have genders. And in some languages, the moon is feminine, like the Spanish ‘la luna’. But in others, the moon is
masculine, like the German ‘der Mond.’ And this affects how Spanish and German people perceive the moon! In one
study, Spanish and German people were asked to rate objects – which were gendered in their language – with reference
to certain traits. Like, is the moon beautiful? Is the moon rugged? Is the moon forceful? The study found that for those
whose language used a masculine article, objects were more strongly associated with stereotypically masculine traits,
like forcefulness. Another study found that when a name was assigned to an object, and the name matched the gender
of the word for it, it was easier for people to remember the name. Like, “Maria Moon” tended to be remembered more
readily by Spanish speakers than by German speakers. Thanks, Thought Bubble.

Now, I should mention that the Sapir-Whorf hypothesis is one that researchers are divided on. Benjamin Lee Whorf –
the American linguist who helped shape this theory – did his original research on indigenous languages like Hopi and
Inuit. And since then, anthropologists have argued that some of his findings don’t hold up. For example, Whorf famously
claimed that because the Hopi language describes time differently, the Hopi people think of time differently. But
anthropological evidence about the Hopi people suggests otherwise. And Whorf’s study led to a strange, and false,
stereotype that Hopi people, quote, “have no sense of time.” Sociology is an evolving field, and academic disagreements
like this are just one reason that we study language and how it shapes our society. But if language helps us
communicate, shape, and pass on culture, the next element of culture is what helps us organize culture into moral
categories. Values are the cultural standards that people use to decide what’s good or bad, what’s right or wrong. They
serve as the ideals and guidelines that we live by. Beliefs, by contrast, are more explicit than values – beliefs are specific
ideas about what people think is true about the world.

So for example, an American value is democracy, while a common belief is that a good political system is one where
everyone has the opportunity to vote. Different cultures have different values, and these values can help explain why we
see different social structures around the world. Western countries like the United States tend to value individualism
and stress the importance of each person’s own needs, whereas Eastern countries like China tend to value collectivism
and stress the importance of groups over individuals. These different values are part of why you’re more likely to see
young adults in the US living separately from their parents and more likely to see multi-generational households in
China.

Cultural values and beliefs can also help form the guidelines for behavior within that culture. These guidelines are what
we call norms, or the rules and expectations that guide behavior within a society. So giving up your seat for an elderly
person? Great. Picking your nose in public? Gross. These are two ways of talking about norms. A norm simply relates to
what we think is “normal” – whether something is culturally accepted or not. And we have three main types of
norms! The first are what we call folkways. Folkways are the informal little rules that kind of go without saying.
It’s not illegal to violate a folkway, but if you do, there might be ramifications – or what we call negative sanctions. Like,
if you walk onto an elevator and stand facing the back wall instead of the door. You won’t get in trouble, but other
people are gonna give you some weird looks. And sometimes, breaking a folkway can be a good thing, and score you
some positive sanctions from certain parts of society. Like, your mom might ground you for getting a lip ring, but your
friends might think it’s really cool. Another type of norm is mores, which are more official than folkways and tend to be
codified, or formalized, as the stated rules and laws of a society. When mores are broken, you almost always get a
negative sanction – and they’re usually more severe than just strange looks.
Standing backward in the elevator might make you the office weirdo, but you’ll probably get fired if you come into work
topless, because there are strict rules about what kinds of clothing – or lack thereof – are appropriate for the workplace.
Hawaiian shirts – probably not. No shirt? You’re fired.
But mores aren’t universal. You may get fired for showing up without a shirt at work, but men can lie on the beach
shirtless, or walk down the street with no problem.

For women, these norms are different. In the United States, cultural norms about women’s bodies and sexuality mean
that it’s illegal for women to go topless in public. But then in parts of Europe, social norms are more lax about nudity,
and it’s not uncommon for women to also be shirtless at the beach. The last type of norm is the most serious of the
three: taboo. Taboos are the norms that are crucial to a society’s moral center, involving behaviors that are always
negatively sanctioned. Taboo behaviors are never okay, no matter the circumstance, and they violate your very sense of
decency.
So, killing a person: taboo or not? Your first instinct might be to say, yes, killing is awful. But, while most cultures agree
that life is sacred, and murder should be illegal, it’s not always considered wrong. Most societies say it’s okay to kill in
times of war or in self defense. So what is a taboo? Cannibalism, incest, and child molestation are common examples of
behavior we see as taboo. Yes, you can kill someone in self-defense, but if you pull a Hannibal Lecter and eat that
person, you’re going to jail, whether it started as self-defense or not. So don’t do that. Ever. Norms like these and many
others help societies function well, but norms can also be a kind of constraint, a social control that holds people back.
Some norms can be bad, like ones that encourage unhealthy behavior like smoking or binge drinking. But not all norms
have clearly defined moral distinctions – like the way a culture’s emphasis on competition pushes people toward
success, but also discourages cooperation. And that’s the tricky thing about culture.
Most of the time you don’t notice the cultural forces that are shaping your thoughts and actions, because they just seem
normal. That’s why sociologists study culture! We can’t notice whether our values and our norms are good or bad unless
we step back and look at them with the analytical eye of a sociologist.
Cultures, Subcultures, and Countercultures

How many cultures are there in the world? We’ve talked a lot about the things that make a culture a culture – things like
norms and symbols and languages. But we haven’t really discussed how you lump all those little things together and say,
yes, these are the things that belong together – these things are culture A, and these other things are culture B.
So, what are the rules of culture? Well, culture isn’t just about nationality, or the language you speak.
You and another person can live in the same country and speak the same language, and still have totally different
cultural backgrounds. Within a single country, even within a single city, you see lots of different cultures, and each
person’s cultural background will be a mishmash of many different influences. So, there really isn’t – and never will be –
a single, agreed-upon number of cultures that exist in the world. But that doesn’t mean we can’t recognize a culture,
and understand cultural patterns and cultural change, and think about how different cultures contribute to the
functioning of society.

Are you more likely to spend your free time at a football game, or at a modern art gallery? Do you watch NCIS or True
Detective? Do you wear JC Penney or J Crew?
These distinctions – and many more like them – are just one way of distinguishing between cultural patterns, in terms of
social class. Because, yes, Class affects culture, and vice versa. So one way of looking at culture is by examining
distinctions between low culture and high culture. And OK, yeah, those are kinda gross sounding terms.
But I want to be clear: High culture does not mean better culture. In fact, so-called low culture is also known as popular
culture, which is exactly what it sounds like: Low or popular culture includes the cultural behaviors and ideas that are
popular with most people in a society. High culture, meanwhile, refers to cultural patterns that distinguish a society’s
elite.
You can sort of think of low culture versus high culture as the People’s Choice Awards versus the Oscars.
The Hunger Games probably weren’t gonna be winning Best Picture at the Oscars. But they were massive blockbusters,
and the original movie was voted the best movie of 2012 by the People’s Choice Awards. By contrast, the winner of Best
Picture at the Oscars that same year was The Artist, a black and white silent film produced by a French production
company. Very different movies, very different types of culture. Now, you can also look at how different types of cultural
patterns work together.

The Hunger Games and The Artist may appeal to different segments of society, but ultimately, they both fit into
mainstream American media culture. Mainstream culture includes the cultural patterns that are broadly in line with a
society’s cultural ideals and values. And within any society, there are also subcultures – cultural patterns that set apart a
segment of a society’s population. Take, for example, hipsters! They make up a cultural group that formed around the
idea of rejecting what was once considered “cool,” in favor of a different type of cultural expression. Yeah, your beard
and your fixed-gear bike, or your bleach blonde hair and your thick-framed glasses – they’re all part of the material
culture that signifies membership in your own specific sub-culture. But, who decides what’s mainstream and what’s a
sub-culture?
I mean, the whole hipster thing has gone pretty mainstream at this point. Typically, cultural groups with the most power
and societal influence get labelled the norm, and people with less power get relegated to sub-groups. The US is a great
example of this. In large part because of our history as a country of immigrants, the US is often thought of as a “melting
pot,” a place where many cultures come together to form a single combined culture. But how accurate is that? After all,
each subculture is unique – and they don’t necessarily blend together into one big cohesive culture just because we
share a country. And more importantly, some cultures are valued more than others in the US. For example, everyone
gets Christmas off from school, because Christian culture holds a privileged role in American society. That might not
seem fair, if you’re a member of a sub-culture that isn’t folded into mainstream culture. So, it's not really a melting pot if
one flavor is overpowering all the other flavors. And this brings me to another subject: How we judge other cultures, and
subcultures.
Humans are judgmental. We just are. And we’re extra judgmental when we see someone who acts differently than how
we think people should act.

Ethnocentrism is the practice of judging one culture by the standards of another.


In recent decades, there’s been growing recognition that Eurocentrism – or the preference for European cultural
patterns – has influenced how history has been recorded, and how we interpret the lives and ways of people from other
cultures.
So what if, rather than trying to melt all the cultures into one, we recognize each individual flavor? One way to do this is
by focusing research on cultures that have historically gotten less attention. For example, Afrocentrism is a school of
thought that re-centers historical and sociological study on the contributions of Africans and African-Americans. Another
option is expanding and equalizing your focus. Instead of looking at behavior through the lens of your own culture, you
can look at it through the lens of multiculturalism – a perspective that, rather than seeing society as a homogenous
culture, recognizes cultural diversity while advocating for equal standing for all cultural traditions.

In this view, America is less a “melting pot” and more like a multicultural society. Still, the ways in which cultures and
subcultures fit together – if at all – can vary, depending on your school of thought as a sociologist. For example, from a
structural functionalist perspective, cultures form to provide order and cohesiveness in a society. So in that view, a
melting pot of cultures is a good thing. But a conflict theorist might see the interactions of sub-cultures differently.
Prioritizing one sub-culture over another can create social inequalities and disenfranchise those who belong to cultures
that are at odds with the mainstream. It’s hard to encourage individual cultural identities without promoting
divisiveness.
In the US at least, it’s a constant struggle. But sometimes, sub-groups can be more than simply different from
mainstream culture – they can be in active opposition to it. This is what we call a counter-culture. Counter-cultures push
back on mainstream culture in an attempt to change how a society functions.

Let’s go to the Thought Bubble to take a trip back to one of the biggest counter-cultural periods of the 20th century: the
1960s. In the United States, the 1960s were rife with countercultures. It was a time of beatniks, and hippies, of protests
against the Vietnam war, and of protests for civil rights and women’s liberation. These movements were often led by
young people and were seen as a rebellion against the culture and values of older generations. This was the era of free
love, where people embraced relationships outside of the traditionally heterosexual and monogamous cultural norms.
Drug use – especially the use of psychedelic drugs – was heavily associated with this sub-culture and was celebrated in
its popular culture – think Lucy in the Sky with Diamonds or the Beat authors’ books about acid trips. But this counter-
culture was also a push back politically against mainstream culture. Many cornerstones of the politics of the American
left have their origins in the counter-culture of the 1960s: anti-war, pro-environmentalism, pro-civil rights, feminism,
LGBTQ equality. From the Stonewall riots to the Vietnam war protests, ‘60s counter-culture was where many of these
issues first reached the public consciousness.
Thanks Thought Bubble!

So, counter-cultures can often act as catalysts for cultural change, especially if they get big enough to gain mainstream
support. But cultures change all the time, with or without the pushback from sub-cultures and counter-cultures. And
different parts of cultures change at different speeds. Sometimes we have what’s called a cultural lag, where some
cultural elements change more slowly than others. Take how education works, for example.
In the US, we get the summer off from school. This is a holdover from when this was a more agricultural country, and
children needed to take time off during harvest. Today, there’s no real reason for summer vacation, other than that’s
what we’ve always done. So how does cultural change happen? Sometimes, people invent new things that change
culture.
Cell phones, for example, have revolutionized not just how we make phone calls, but how we socialize and
communicate. And inventions don’t just have to be material. Ideas, like those about money or voting systems, can also be
invented and change a culture. People also discover new things. When European explorers first discovered tomatoes in
Central America in the 1500s and brought them back to Europe, they completely changed the culture of food.
What would pizza be without tomatoes?! A third cause of cultural change comes from cultural diffusion, which is how
cultural traits spread from one culture to another.

Just about everything we think of as classic “American” culture is actually borrowed and transformed from another
culture. Burgers and fries? German and Belgian, respectively. The American cowboy? An update on the Mexican
vaquero. The ideals of liberty and justice for all enshrined in our founding documents? Heavily influenced by French
philosophers like Rousseau and Voltaire, and British philosophers like Hobbes and Locke, as well as by the Iroquois
Confederacy and its ideas of representative democracy. Whether we’re talking about material culture or symbolic
culture, we’re seeing more and more aspects of culture shared across nations and across oceans. As symbolic
interactionists see it, all of society is about the shared reality – the shared culture – that we create. As borders get
thinner, the group of people who share a culture gets larger. Whether it’s the hot dogs we get from Germany or the jazz
and hip hop coming from African traditions, more and more cultures overlap as technology and globalization make our
world just a little bit smaller. And as our society becomes more global, the questions raised by two of our camps of
sociology, structural functionalism and conflict theory, become even more pressing.
Are the structural functionalists right? Does having a shared culture provide points of similarity that encourage
cooperation and help societies function? Or does conflict theory have it right? Does culture divide us, and benefit some
members of society more than others? In the end, they’re both kind of right. There will always be different ways of
thinking and doing and living within a society – but culture is the tie that binds us together.
Social Stratification

Imagine two people. Two extremely wealthy people. One of them inherited their money, acquiring it through the luck
that comes with being born to owners of immense amounts of property and wealth. And the other person worked for
what they have. They started at the bottom, and through years of hard work and clever dealing, they built a business
empire. Now: which one would you say deserves their wealth? Sociologically, the interesting thing here isn't your
answer, not really. It's the fact that different societies in different times and places have different answers to this
question. Because the question of what it means to deserve wealth, or success, or power, is a matter of social
stratification.

Social stratification is what we’re talking about when we talk about inequality. It’s a system by which society categorizes
people, and ranks them in a hierarchy. Everything from social status and prestige, to the kind of job you can hold, to
your chances of living in poverty, is affected by social stratification. That’s because one of the first principles of social
stratification is that it’s universal, but variable. It shows up in every society on the planet, but what exactly it looks like –
how it divides and categorizes people, and the advantages or disadvantages that come with that division – vary from
society to society.
Realizing that social stratification exists in every society brings us to another principle: that stratification is a
characteristic of society, and not a matter of individual differences. People are obviously all different from each other, so
we might assume that stratification is just a kind of natural outcome of those differences, but it's not.
We know this because we can see the effects of social stratification on people, independent of their personal choices or
traits: For example, children of wealthy families are more likely to live longer and be healthier, to attend college, and to
excel in school than children born into poverty. And they’re also more likely to be wealthy themselves when they grow
up.
And this highlights another key principle of social stratification: It persists across generations. So, stratification serves to
categorize and rank members of society, resulting in different life chances. But generally, society allows some degree of
social mobility, or changes in position within the social hierarchy. People sometimes move upward or downward in
social class, and this is what we usually think of when we talk about social mobility. But more common in the United
States is horizontal mobility, changing positions without changing your standing in the social hierarchy.

This generally happens when a person moves between jobs that pay about the same and have about the same
occupational prestige. Like stratification itself, social mobility isn't just a matter of individual achievement; there are
structural factors at play, too. In fact, we can talk specifically about structural social mobility: when a large number of
people move around the hierarchy because of larger societal changes. When a recession hits, and thousands of people
lose their jobs and are suddenly downwardly mobile, that's structural mobility. But stratification isn't just a matter of
economic forces and job changes. Which brings us to another aspect of social stratification: It isn't just about economic
and social inequalities; it’s also about beliefs. A society’s cultural beliefs tell us how to categorize people, and they also
define the inequalities of a stratification system as being normal, even fair. Put simply: if people didn't believe that the
system was right, it wouldn’t last.
Beliefs are what make systems of social stratification work. And it’s these beliefs about social stratification that inform
what it means to deserve wealth, or success, or power. These four principles give us a better understanding of what
social stratification is, but they still haven't told us much about what it looks like in the real world. So, sociologists
classify stratification systems as being either closed or open. Closed systems tend to be extremely rigid and allow for
little social mobility.
In these systems, social position is based on ascribed status, or the social position you inherit at birth. On the other
hand, open systems of stratification allow for much more social mobility, both upward and downward. Social position
tends to be achieved, not ascribed. Now, these terms are pretty theoretical, so let’s look at some examples of more
closed or open systems, as well as societies that fall in the middle. The archetypal closed system is a caste system. Of
these, India's caste system is probably one of the best known. And while it’s a social system of decreasing importance, it
still holds sway in parts of rural India, and it has a strong legacy across the country.

Let’s go to the Thought Bubble:


The traditional caste system contains four large divisions, called varnas: Brahman, Kshatriya, Vaishya, and Sudra.
Together these varnas encompass hundreds of smaller groups called jatis at the local level. The caste system in its
traditional form is a clear example of an extremely rigid, closed, and unequal system. Caste position not only determined
what jobs were acceptable, but it also strongly controlled its members’ everyday lives and life outcomes. The system
required endogamy, or marriage within your own caste category. And in everyday life, the caste system determined who
you could interact with and how, with systems of social control restricting contact between lower and higher castes.
And this whole system was based on a set of strong cultural and religious beliefs, establishing caste as a right of birth
and living within the strictures of your caste as a moral and spiritual duty.
Thanks Thought Bubble.

We see a variation of the caste system in feudal Europe with the division of society into three orders or estates: the
nobility, the clergy, and the commoners. Again, a person's birth determined his social standing; commoners, for
instance, paid the most taxes and owed labor to their local lord. So they had little expectation that they’d rise above
their station.
The whole social order was justified by the belief that it was ordained by God, with the nobility ruling by so-called divine
right. Both caste systems use ancestry and lineage as a main principle of social stratification, but race has also been used
as the main distinction in closed social systems. The South African system of apartheid, for instance, maintained a legally
enforced separation between black people and white people for decades. Apartheid denied black people citizenship, the
ability to own land, and any say whatsoever in the national government. The Jim Crow laws of the American South were
another example, as was slavery before that.

In contrast with caste systems, class systems are the archetypal open systems. They aren't based solely on ascribed
status at birth. Instead they combine ascribed status and personal achievement in a way that allows for some social
mobility. Class is the system of stratification we have in American society. The main difference between caste and class
systems is that class systems are open, and social mobility is not legally restricted to certain people. There aren't
formally defined categories in the same way there are in the traditional Indian caste system. Being in the “under-class”
in the U.S. is not equivalent to being an “untouchable” from India. In class systems, the boundaries between class
categories are often blurred, and there’s greater opportunity for social mobility into and out of class positions. The
American system of stratification is founded on this very idea, in fact: that it’s possible, through hard work and
perseverance, to move up the social hierarchy, to achieve a higher class standing. And this points to another difference
in systems of stratification:
Instead of ancestry, lineage, or race being the key to social division, the American system has elements of a meritocracy,
a system in which social mobility is based on personal merit and individual talents. The American dream is that anyone,
no matter how poor, can "pull themselves up by their bootstraps" and become upwardly class mobile, through nothing
but hard work and gumption.

The American system is certainly more meritocratic than feudal Europe or traditional India; but the idea of meritocracy
is as much a justification for inequality as it is an actual principle of stratification. In an open, class-based system of
stratification, it’s easy to believe that anyone who’s not upwardly mobile deserves their poverty. Because a meritocratic
class system is supposed to be open, it’s easy to ignore the structural factors that influence class standing. But just as the
Indian caste system and feudal estate system placed their limits on certain groups, the American class system limits just
how far hard work can take some people. The US class system tends to reproduce existing class inequalities, because the
advantages that you start with have an incredibly powerful impact on where you can end up. This is part of the reason
that the US is still stratified along race and gender lines. That said, these inequalities are no longer explicitly enshrined in
the law, which is an example of the greater openness of class systems. Because of this openness, class systems also have
a greater likelihood of opportunity for individuals to experience status inconsistency: a situation where a person’s social
position has both positive and negative influences on their social status.

Stratification isn’t just a matter of one thing after all. When we talk about socioeconomic status, for instance, we’re
including three things: income, education, and occupational prestige. An example of status inconsistency is an adjunct
professor who’s very well educated, but earns a low income. There’s an inconsistency among these different aspects of
their social status; low income tends to decrease social status while at the same time, a high level of education and the
societal respect for the occupation of college professor improves social status. All these comparisons between closed
and open systems might make it sound like they’re totally different: a system is either one or the other. But really
they’re two poles on a spectrum. Not every society is strictly a caste system or a class system. Modern Britain, for
instance, is a good illustration of a mixed system of stratification. It still maintains a limited caste system of nobility as a
legacy of the feudal system of estates, which survives alongside, and helps reinforce, a class system similar to what we
have in the U.S.
And some systems of stratification even claim that their citizens are entirely equal, as the Soviet Union did. Following the
Russian Revolution of 1917, the USSR was established as a theoretically classless society. But inequality is more than just
economic. And Soviet society was stratified into four groups, each of which held various amounts of political power and
prestige: apparatchiks or government officials, intelligentsia, industrial workers, and the rural peasantry.
So, like I mentioned before, stratification is universal, but variable. If you want to study a society, one of the things that
you need to look at is the way that it’s stratified, and whether, and how, social mobility occurs.
Social Mobility

Everyone loves a good rags to riches story. Books and movies and music are full of this idea. Whether it’s Gatsby turning
himself from a nobody to a somebody, or Drake starting from the bottom, there’s something appealing about the idea
that anyone can make it, if they try hard enough. And more than maybe anywhere else, that idea is embraced in the
United States, where the mythos of the land of opportunity is practically part of our foundation. But is the US a land of
opportunity? Can anyone move up the rungs of the social ladder? Or is the American Dream just that: a dream? To get a
handle on the answer, we have to understand changes in social position – or what sociologists call social mobility.

There are a few different types of social mobility, so let’s get some definitions straight first. Intragenerational mobility is
how a person moves up or down the social ladder during their lifetime. Intergenerational mobility, however, is about
movement in social position across generations. Are you doing better or worse than your parents were when they were
your age? There’s also absolute versus relative mobility.

Absolute mobility is when you move up or down in absolute terms – are you better or worse than before? Like, if you
make $50,000 a year now and made $40,000 10 years ago, you experienced upward mobility in an absolute sense. But
what if all your peers who were making the same amount ten years ago are now making $65,000 a year? Yes, you’re still
better off than you were 10 years ago, but you’re doing worse relative to your peers.

Relative mobility is how you move up or down in social position compared to the rest of society. We can measure social
mobility quantitatively, using measures of economic mobility, such as by comparing your income to your parents’ income at
the same age. Or we can look at mobility using qualitative measures. A common measure used by sociologists is
occupational status. If your father worked in a blue collar job, what’s the likelihood that you will too? A recent study of
absolute intergenerational mobility found that about one-third of US men will end up in the same type of job as their
fathers, compared to about 37% who are upwardly mobile, and 32% who are downwardly mobile.
It’s pretty common to remain within the same class group as your parents. About 80 percent of children experience
what’s called horizontal social mobility, where they work in a different occupation than their parents, but remain in a
similar social position.
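
To make the absolute-versus-relative distinction concrete, here’s a minimal sketch in Python, using the hypothetical incomes from the example above – they’re illustrative numbers, not real data:

```python
# Toy illustration of absolute vs. relative mobility, using the
# hypothetical incomes from the example above (not real data).

income_then = 40_000   # your income 10 years ago
income_now = 50_000    # your income today
peers_then = 40_000    # what your peers made 10 years ago
peers_now = 65_000     # what your peers make today

# Absolute mobility: are you better off than you were before?
print("Absolute change:", income_now - income_then)  # +10000 -> upward

# Relative mobility: did you gain or lose ground against your peers?
your_growth = income_now / income_then   # 1.25x
peer_growth = peers_now / peers_then     # 1.625x
print("Relative position:", "up" if your_growth > peer_growth else "down")
# Prints "down": better off than before, but worse off than your peers.
```

The same ten years of income history yields two different verdicts, which is exactly why sociologists keep the two measures separate.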
So, how much social mobility is there in the US? Well, there’s good news and bad news. The good news is that if we
zoom out and look at absolute mobility across the years, the long term trend in social mobility is upwards. Partially
because of industrialization, median annual family income rose steadily throughout the 20th century, going from around
$34,000 in 1955 to $70,000 in 2015. Standards of living now are much better than they were 60 years ago.
Unfortunately, more recent trends in social mobility have been less rosy. Since the 1970s, much of the economic growth
in income has been at the top of the income distribution. Meanwhile, family incomes have been pretty flat for the rest
of the population. This unequal growth in incomes has meant less absolute mobility for Americans. A recent analysis of
tax data by a group of economists and sociologists found that absolute mobility has declined over the last half century.
While 90% of children born in the 1940s earned more than their parents as adults, only 50% of children born in the
1980s did.
The other bad news is that within a single generation social mobility is stagnant. While people generally improve their
income over time by gaining education and skills, most people stay on the same rung of the social ladder that they
started on. Of those born in the bottom income quintile, 36% remain in the bottom quintile as adults. Only 10% of those
born at the bottom end up in the top quintile as adults. Started at the bottom, now we’re probably still at the bottom,
statistically speaking. And socioeconomic status is sticky at the top, too. Researchers at the Brookings Institution,
including Crash Course Sociology writer Joanna Venator, found that 30% of those born in the top quintile stay in the top
quintile as adults.
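
One way to see just how sticky those endpoints are is to compare the quoted figures against the baseline you’d expect under perfect mobility, where your starting quintile wouldn’t matter at all and every destination would be equally likely. Here’s a minimal sketch using only the three probabilities quoted above; the 20% baseline is just arithmetic (five equally likely quintiles), and everything else is scaffolding, not data:

```python
# Quintile "stickiness" vs. a perfect-mobility baseline.
# Only the three probabilities quoted above are real figures.

# (quintile born into, quintile as adult): probability
# Quintile 1 = bottom fifth of incomes, quintile 5 = top fifth.
quoted = {
    (1, 1): 0.36,  # born in the bottom, still there as an adult
    (1, 5): 0.10,  # born in the bottom, reaches the top
    (5, 5): 0.30,  # born at the top, stays at the top
}

BASELINE = 0.20  # under perfect mobility, every cell would be 1 in 5

for (start, end), p in quoted.items():
    print(f"Q{start} -> Q{end}: {p:.0%} (vs. {BASELINE:.0%} if mobility were perfect)")

# Staying put at the bottom (36%) and at the top (30%) both beat the
# 20% baseline, while the bottom-to-top jump (10%) falls well short of
# it -- which is what "sticky at both ends" means in numbers.
```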
Plus, social mobility differs by race/ethnicity, gender, and education. White Americans see more upward mobility than
Black Americans: half of Black Americans that are born at the bottom of the income distribution are still in the bottom
quintile at age 40. Black Americans also face higher rates of downward mobility, being more likely to move out of the
middle class than White Americans:

Let’s go to the Thought Bubble to take a look at research on race and social mobility in action.
In 1982, American sociologists Karl Alexander and Doris Entwisle began following the lives of a random sample of 800
first grade students growing up in a variety of neighborhoods in the Baltimore area. What began as a study meant to last
only three years eventually ended up lasting 30 years, as the researchers followed up with the kids throughout their
lives, to see the paths that their early circumstances put them on. Alexander and Entwisle collected data on everything
imaginable, interviewing the kids yearly about who they lived with, where they lived, work history, education, drug use,
marriage, childbearing, you name it. And what they found was that poverty cast a long shadow over the course of these
kids’ lives. 45% of kids with higher socioeconomic status, or SES, had gotten a college degree by age 28 – only 4% of low
SES kids had. Those born better off were also more likely to be middle class at age 28. And these unequal outcomes
were heightened for African American kids. Low SES white kids ended up better off than low SES Black kids. 89% of
white high school dropouts were working at age 22 compared to only 40% of black high school dropouts. And contrary
to what The Wire might have made you think about inner-city Baltimore lifestyles – these differences can’t be explained
away by differences in criminal behavior or drug use. Low SES White men were more likely to use hard drugs, smoke,
and binge drink than low SES Black men. And holding all else constant, a police record was more of an impediment to
getting a job for African American men than for White men.
Thanks Thought Bubble.

So, the impacts of where you’re born on the social ladder can have far-reaching consequences. And in addition to race,
social mobility can also vary by gender. Over the last half century, women as a whole have experienced absolute
mobility – 85% of women earn higher wages than their mothers did. And the income gap between men and women has
narrowed significantly. In 1980, the average income for a woman was just 60% that of a man – a 40% gap – whereas by
2015 that gap had shrunk to 8%.
But despite the great strides over the last half century, there are still gaps in opportunity for women. Women born at
the bottom of the social class ladder are more likely to remain there than men – about half of women born in the
bottom quintile are still there at age 40 compared to only about one-third of men. Also, women born at the bottom
experience more downward mobility than men, with more women than men having family incomes lower than that of
their parents.
Some of these differences by gender may be because women are much more likely to head up single parent homes than
men are. Being married is a huge plus for social mobility, because two incomes are better than one.
People who marry tend to accumulate wealth much faster than those who are single, making it easier to ascend the
social ladder. Modern-day Cinderella doesn’t just move up the social ladder by marrying the prince; she’s also more
likely to build a solid 401(k) and stock portfolio, key sources of wealth. As we’ve seen, social class mobility depends on
where you start and who you are.

So let’s go back to the question we asked at the beginning: is America the land of opportunity? If you’re a glass-half-full
kind of person, you might think so based on some of what we’ve talked about today. After all, most people are better off
than past generations were. Accounting for inflation, about three times as many Americans make incomes above
$100,000 now as did in 1967. But not all groups have benefitted equally from this economic growth – your chance at
upward mobility can vary a lot by education or race or gender, or where you start on the income distribution.
For those in the middle of the income distribution, earnings growth has stalled for many workers, but the costs of
necessities like healthcare or housing have climbed ever higher. Manufacturing, an industry that historically provided
stable jobs and decent pay to less-educated workers, has been declining for a while now and was particularly hard hit by
the recession from 2007 to 2009. In the wake of this decline, most of the jobs available for less-educated workers tend
to be low-paying service industry jobs, contributing to lower absolute mobility than we’ve seen in the past. All of these
patterns, plus the growing income inequality we talked about a couple episodes ago, mean that the rungs of the social
mobility ladder in the United States seem to be getting harder to climb.
Deviance

A person holding up a convenience store and a pacifist at a protest might seem like polar opposites. But they actually
have something in common. So do an American vegan preparing a meal at home, and a white-collar criminal committing
tax fraud, and a runaway slave. They're all social deviants. We've spent a lot of time so far talking about how society fits
together, and how it functions. But we can’t cover that in any meaningful way without also talking about the people who
don't fit. We have to talk about who’s normal and who’s deviant...and how they get to be that way.

Now, you might think that calling pacifists and vegans and runaway slaves deviant is...rude, but in sociology, deviance
isn't an insult. Deviance simply means being non-normative. Different. So while this does include some things that we
might think of as bad or harmful, like, crime, it also includes things we might just think of as outside the mainstream. So
if eating a burger is a traditional "all-American" cultural activity, then being vegan in America is deviant. But there's
something important to notice here. I didn't say being vegan in a society where most people eat meat is deviant,
because deviance is not just a matter of numbers.

Deviance is anything that deviates from what people generally accept as normal. For instance, red hair is statistically uncommon, but it’s not considered deviant. Dyeing your hair bright purple – that is deviant and might earn you some
strange looks from some people. And strange looks from strangers are a form of social control, attempts by society to
regulate people's thoughts and behaviors in ways that limit, or punish, deviance. Specifically, the strange looks are what
are known as negative sanctions, negative social reactions to deviance. The opposite, naturally, are positive sanctions –
affirmative reactions, usually in response to conformity. Once you start looking, you begin to see forms of social control,
both positive and negative, everywhere: a friend making fun of your taste in food or a teacher congratulating you on a
good paper. Or someone commenting loudly on your bright purple hair. Sanctions all. These are all responses to informal norms, or what sociologists call folkways. You won’t be arrested for violating a folkway, but breaking one usually results in negative sanctions. But not all norm violations are informally sanctioned. Formal sanctioning of deviance
occurs when norms are codified into law, and violation almost always results in negative sanctions from the criminal
justice system – the police, the courts, and the prison system. So given the power of formal sanctions, why does anyone
do deviant things?
This is a big question.
Before we get to the sociological perspective, we need to mention some of the biological and psychological views of
deviance that have been influential in the past. Spoiler alert: Historically, these explanations have been insufficient in
helping us understand non-normative behavior. For example, the earliest attempts at scientific explanations for
deviance, and crime in particular, were biologically essentialist explanations. They were based on the idea that something
about a person's essential biology made them deviant.

In 1876, Cesare Lombroso, an Italian physician, theorized that criminals were basically subhuman, throwbacks to a more
primitive version of humanity. He went so far as to suggest that deviants could be singled out based on physical
characteristics, like a low forehead, stocky build, and prominent jaw and cheekbones, all of which he saw as reminiscent
of our primate cousins. Another scientist, U.S. psychologist William Sheldon, also found a relationship between general
body type and criminality. In the 1940s and ‘50s, he studied body types and behavior and concluded that men who were
more muscular and athletic were more likely to be criminally deviant. We know today that the idea that physical
features somehow correspond to criminality is just no...it’s wrong.
But later work by Eleanor and Sheldon Glueck appeared to confirm William Sheldon’s basic findings on male muscularity
and criminal aggression. However, they refused to ascribe their results to a biological explanation. They countered that a
simple correlation between body type and criminality could not be taken as causal evidence.
Instead, they argued this was an example of a self-fulfilling prophecy: People expect physically strong boys to be bullies,
and so they encourage aggressive behavior in such boys.

Large boys who have their bullying behavior positively sanctioned are encouraged to continue being aggressive,
and some eventually grow up and engage in aggressive criminal behaviors.
Psychological approaches, by contrast, place almost all the explanatory power in a person’s environment.
While some elements of personality may be inherited, psychologists generally see personality as a matter of
socialization.
So they see deviance as a matter of improper or failed socialization.
A classic example of this strain of psychological explanation is found in the 1967 work of Walter Reckless and Simon
Dinitz. They studied boys who lived in an urban neighborhood known for its high rate of delinquency. Using the
assessment of the boys’ teachers, they grouped the youths into "good boys" and "bad boys," and then interviewed them
to construct psychological profiles. They found that the so-called "good boys" had a strong conscience, were good at
coping with frustration, and identified with conventional cultural norms. The "bad boys," on the other hand, were the
opposite on all counts. Following the boys over time, Reckless and Dinitz found that the "good boys" had fewer run-ins
with the police. And they attributed this to the boys’ ability to control deviant impulses. This idea that deviance is
essentially a matter of impulse control is called containment theory, or having a personality that contains deviant
actions. And containment theory has received support in recent research, including a 2011 study on 500 male fraternal
twins that assessed their self-control, resilience, and ability to delay gratification.

Researchers found that the brother who scored lower on these measures in childhood was more likely to be criminally
deviant in adulthood. Now, while we've seen that there's clearly value in both biological and psychological approaches,
they’re each also fundamentally limited. For example, both kinds of explanations link criminal deviance to individual
factors – either of body or of mind – while leaving out other important factors, like peer influence or what opportunities
for deviance different people might be exposed to.

Plus, biological and psychological explanations only understand deviance as a matter of abnormality. Both approaches
begin by looking for physical or mental irregularities, whereas more recent research suggests that most people who do
deviant things are both biologically and psychologically normal – or, to use a better word, let’s say: typical.

Finally, neither biology nor psychology can answer the question of why the things that are deviant are considered
deviant in the first place. Even if you could 100% prove that a certain abnormality caused people to be violent, not all
violence is considered a form of deviance. Think boxing. And here's where we can turn to a sociological approach, which
sees deviance and criminality as the result of how society is structured. And here, the approach is based on three major
ideas.
First is the idea that deviance varies according to cultural norms. In other words, nothing is inherently deviant: Cultural
norms vary from culture to culture, and over time and place, so what’s deviant now might have once been quite normal.
Slavery is an obvious example. Not only was race-based slavery normal in 19th-century America, but rejecting it was
considered deviant. So deviant, in fact, that physician Samuel Cartwright wrote about a disorder he called drapetomania
to explain the supposed mental disorder that caused slaves to flee captivity.

The second major principle sociologists draw on is the idea that people are deviant because they’re labeled as deviant.
What I mean here is that it’s society's response that defines us, or our actions, as deviant. The same action can be
deviant or not, depending on the context: Sleeping in a tent in a public place can be illegal, or it can be a fun weekend
activity, depending on where you do it. And, as the Gluecks argued, labeling people can become a self-fulfilling
prophecy:
When society treats you as a deviant, it’s very easy to become one.

Deviance doesn't even necessarily require action. Simply being a member of a group can classify you as a deviant in the
eyes of society. The rich may view the poor with disdain for imagined moral failures, or we can return again to racism
and slavery, which imagined African Americans as deviant by nature.
And the last major sociological principle for understanding deviance is the idea that defining social norms involves social
power. The law is many things, but Karl Marx argued that one of its roles is as a means for the powerful elite to protect
their own interests. This is obvious in the case of something like fugitive slave laws, which applied a formal negative
sanction to deviating from the norms of slavery. But we can also see it in things like the difference between a campaign
rally and a spontaneous protest. Both are public political speech, and both may block traffic, but they draw resoundingly
different reactions from police. So these are three foundational ideas about the sociological perspective on deviance.
But I want to stress that they only begin to define a perspective.
Sociology clearly understands deviance in a different way than biology and psychology do, but if you really want to dive
into more detailed sociological explanations, you'll need to wait until next week, when we look at the major theoretical
explanations for crime and deviance.
Social Groups

“If all your friends jumped off a bridge, would you jump too?” It’s the lament of many an exasperated parent, but it’s
also a kind of profound sociological question. Because, when you're talking to your parents, the answer's always no. But,
with the right group of friends, you might be quite happy to take a dive in the water. The thing is, you're a different
person when you're a part of a group, and you're a different person in different groups. A family, a group of friends out
for a swim, a business meeting, and a choir are different kinds of groups. And the same person can be a member of all of
them. So if we want to understand how these groups are different, and even how they're similar, we need to talk about
what social groups are, and why they matter, both to the people who are a part of them, and to the people who aren't.

The choir, the meeting, the friends, and the family are all examples of social groups. A social group is simply a collection
of people who have something in common and who believe that what they have in common is significant. In other
words, a group is partly defined by the fact that its members feel like they're part of a group. This is obviously a pretty
broad definition. But it does have its limits, and you can see these limits if you compare social groups to aggregates and
categories. An aggregate is a set of individuals who happen to be in the same place at the same time. All the people
passing through Grand Central Station at 1:00 on a Friday afternoon are an aggregate, but they aren't a group, because
they don't share a sense of belonging. Categories, meanwhile, consist of one particular kind of person across time and
space. They’re sets of people who share similar characteristics. Racial categories are a simple example.
So the sense of feeling like you belong to a group is a defining feature of a group. But it also helps you differentiate kinds
of groups, specifically between primary and secondary groups. Primary groups are small and tightly knit, bound by a very
strong sense of belonging. Family and friendship groups are primary groups. They’re mutually supportive places where
members can turn for emotional, social, and financial help. And as far as group members are concerned, the group is an
end-in-itself. It exists to be a group, not for any other purpose.

Secondary groups, however, are the reverse. These are large and impersonal groups, whose members are bound
primarily by a shared goal or activity, rather than by strong emotional ties. A company is a good example of a secondary
group: Employees are often loosely or formally connected to one another through their jobs, and they tend to know
little about each other. So there’s a sense of belonging there, but it's much more limited. That's not to say that
coworkers never have emotional relationships. In fact, secondary groups can become primary groups over time, as a set
of coworkers spends time together and becomes a primary group of friends. And while a gang of friends and a company
clearly have a lot of differences, they also have at least one major similarity: They're both voluntary – if you belong to
that group, it’s because you choose to join. But there are also plenty of involuntary groups, in which membership is
assigned.
Prisoners in a prison are members of an involuntary group, as are conscripted soldiers. Now that we understand a little
bit about what groups are, we can start to study how they work – beginning with group dynamics, or the way that
individuals affect groups, and groups affect individuals.
If we want to think about how individuals affect groups, a good place to start is with leadership.
Not all groups have formally assigned leaders, but even groups without one often have de facto leaders, like parents in a family.

A leader is just someone who influences other people in the group. And there are generally two types of leadership: an
instrumental leader is focused on a group's goals, giving orders and making plans in order to achieve those goals.
An expressive leader, by contrast, is looking to increase harmony and minimize conflict within the group. They aren't focused on any particular goal; they’re just trying to promote the wellbeing of the group’s members. And just as leaders
may differ in what they’re trying to do, so too can they go about doing it in different ways. I’m talking here about
leadership styles, of which we have three. Authoritarian leaders lead by giving orders and setting down rules which they
expect the group to follow. Such a leader earns respect, and can be effective in a crisis, but at the expense of affection
from group members. Democratic leaders on the other hand, lead by trying to reach a consensus. Instead of issuing
orders, they consider all viewpoints to try and reach a decision. Such leaders are less effective during a crisis, but,
because of the variety of different viewpoints they consider, they often find more creative solutions to problems. And
they’re more likely to receive affection from their group’s members.

Finally, laissez-faire leaders do the least leading. They’re extremely permissive, and mostly leave the group to function
on its own. This means lots of freedom, but this style is the least effective at promoting group solidarity, and the least effective
in times of crisis. So, leadership is one way that individuals affect groups, but groups also affect individuals.
You can see this especially clearly in group conformity, where members of a group hew to the group’s norms and
standards.

Basically, group conformity is the reason that you do jump off the bridge with your friends. And this has been
demonstrated in some fascinating experimental results.

Let’s go to the Thought Bubble to learn about perhaps the most famous – or infamous – experiment on conformity.
The Milgram Experiment was run by American psychologist Stanley Milgram in the early 1960s, and it was presented as an
experiment in punishment and learning, with two participants. One participant was the teacher, who read aloud a series
of word pairs and then asked the other participant, the student, seated in another room, to recall them. The student
was strapped to a chair and wired up with electrodes. For each wrong answer, the experimenter, who was standing
beside the teacher, instructed the teacher to deliver a painful electric shock to the student. With each wrong answer,
the intensity increased, from an unpleasant few volts up to 450 volts, a potentially deadly shock. But the experiment was
not about punishment or learning. The student was actually an actor, a confederate of the experimenter, and the shocks
were not real. The experiment was designed to test how far the teacher would go in conforming to authority. At some
point in the experiment, the confederate would feign extreme pain and beg the teacher to stop. Then he fell silent. If at
any point the teacher refused to issue the shock, the experimenter would insist that he continue. In the end, 65% of
participants went all the way, administering the presumably deadly 450 volt shock. And this is usually given as proof that
people tend to follow orders, but there’s a lot more to it than that. If the experimenter gave direct orders to the teacher,
like “You must continue, you have no other choice,” that resulted in non-compliance. That’s when the teacher was more
likely to refuse. The prods that did produce compliance were the ones that appealed, instead, to the value of the
experiment – the ones that said administering the shocks was necessary for the experiment to be successful and
worthwhile. So in this instance, the value of the experiment, of science, was a strongly held group value, and it helped
convince the subjects to continue, even though they might not have wanted to.
Thanks, Thought Bubble.

This idea of group values points us to another important concept in understanding conformity: the idea of groupthink.
Groupthink is the narrowing of thought in a group, by which its members come to believe that there is only one possible
correct answer. Moreover, in a groupthink mentality, to even suggest alternatives is a sign of disloyalty to the group.
Another way of understanding group conformity is to think about reference groups. Reference groups are groups we use
as standards to judge ourselves and others. What’s "normal" for you is determined partly by your reference groups.
In-groups are reference groups that you feel loyalty to, and that you identify with. But you can compare yourself to out-
groups, too, which are groups that you feel antagonism toward, and which you don't identify with. And another aspect
of a social group that can affect its impacts and dynamics is its size. And here, the general rule is: the larger the group,
the more stable, but less intimate, it is. A group of two people is obviously the smallest and most intimate kind of group,
but it’s also the least stable. Because, if one person leaves, there’s no group anymore.

Larger groups are more stable, and if there are disagreements among members, other members are around who can
mediate between them. But big groups also are prone to coalitions forming within them, which can result in one
faction aligning against another. The size of a group matters in other ways, too, for instance in terms of social diversity.
Larger homogeneous groups tend to turn inward, concentrating relationships within the group instead of relying on intergroup contacts. By contrast, heterogeneous groups, or groups that have more diversity within them, turn outward, with their members more likely to interact with outsiders.

Finally, it’s worth pointing out that social groups aren’t just separate clumps of people. There's another way to
understand groups, in terms of social networks. This perspective sees people as nodes that are all socially
interconnected. You can imagine a "circle of friends" who are all connected to each other in different ways, some with
strong connections in a clique or subgroup, while some are connected by much weaker ties. And you can follow the ties
between all of the nodes outward, to friends-of-friends and acquaintances who exist on the periphery of the network.
Networks are important, because even their weak ties can be useful. Think of the last time you were networking,
following every connection you had to, say, land a job interview. Regardless of whether you think about groups as
networks and ties, or as bounded sets, it's clear that they have important impacts on people, both inside and outside. If
you just looked at society as a bunch of individuals, you’d miss all the ways that groups impact our lives – by acting as
reference groups, by influencing our decisions through group conformity, and much more. And groups are important for
how society itself is organized.
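To make that node-and-tie picture a bit more concrete, here’s a minimal sketch in Python – with entirely made-up people and ties, not anything drawn from actual network research – of a circle of friends with strong and weak connections, and a function that walks outward to the friends-of-friends on the periphery:

```python
# A toy social network: each person is a node, each tie is labeled
# "strong" (close friends) or "weak" (acquaintances). All names and
# ties here are hypothetical, purely for illustration.
network = {
    "you": [("ana", "strong"), ("ben", "strong"), ("cho", "weak")],
    "ana": [("you", "strong"), ("ben", "strong")],
    "ben": [("you", "strong"), ("ana", "strong"), ("dev", "weak")],
    "cho": [("you", "weak"), ("eve", "weak")],
    "dev": [("ben", "weak")],
    "eve": [("cho", "weak")],
}

def reachable(start, hops):
    """Return everyone reachable from `start` within `hops` ties:
    friends at one hop, friends-of-friends at two, and so on."""
    frontier, seen = {start}, {start}
    for _ in range(hops):
        frontier = {name for person in frontier
                    for name, _tie in network[person]} - seen
        seen |= frontier
    return seen - {start}

print(reachable("you", 1))  # the immediate circle: ana, ben, cho
print(reachable("you", 2))  # adds the periphery: dev and eve
```

Notice that, in this toy example, "dev" and "eve" are reachable only through weak ties – which is exactly why networking through acquaintances, and not just close friends, can be what lands you the job interview.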
Socialization

What do you, as you’re watching me right now, have in common with a toddler who’s being read a bedtime story? I’ll
give you a clue. It’s also something you have in common with the kids in The Breakfast Club. As well as with a soldier
going through boot camp. Give up? You’re all being socialized. It’s also the title of the episode. You probably saw that.
Each of us is surrounded by people, and those people become a part of how we act and what we value. This is known as
socialization: the social process through which we develop our personalities and human potential and learn about our
society and culture. Last time, we talked about the HOW of socialization, how we learn about the social world. And no
matter which of the many theories out there that you like best, the answer seems to be that we’re socialized by
interacting with other people. But which people? What we didn’t talk about last week was the WHO of socialization:
Who do we learn about the social world from? What people, and what institutions, have made you who you are today?

Socialization is a life-long process, and it begins in our families. Mom, Dad, grandparents, siblings – whoever you’re living
with is pretty much your entire social world when you’re very young. And that’s important, because your family is the
source of what’s known as primary socialization – your first experiences with language, values, beliefs, behaviors, and
norms of your society. Parents and guardians are your first teachers of everything – from the small stuff like how to
brush your teeth to the big stuff like sex, religion, the law, and politics. The games they play with you, the books they
read, the toys they buy for you, all provide you with what French sociologist Pierre Bourdieu called cultural capital – the
non-financial assets that help people succeed in the world. Some of this cultural capital may seem fairly innocuous – I
mean, is reading Goodnight Moon really making that big of an impact on a toddler?
Yes, actually.
It teaches the “value” of reading as much as it helps the child begin to recognize written language. The presence of
books in the home is associated with children doing well in school. Another important form of socialization that starts in
the home is gender socialization, learning the psychological and social traits associated with a person’s sex. Gender
socialization starts from the moment that parents decide on a gendered name and when nurses put a pink or a blue hat
on the baby.
Other group memberships, like race and class, are important parts of initial socialization. Race socialization is the
process through which children learn the behaviors, values, and attitudes associated with racial groups.

Racial discrimination is partly the result of what parents teach their children about members of other races. And class
socialization teaches the norms, values, traits, and behaviors you develop based on the social class you’re in. This may
help explain why more middle- and upper-class children go to college. Not only can the families afford to send them, but
these children are expected to attend. They grow up in a home that normalizes college attendance. Now, gender, race,
and class socialization are all examples of anticipatory socialization – that’s the social process where people learn to take on the values and standards of groups that they plan to join. Small children anticipate becoming adults, for example,
and they learn to play the part by watching their parents.

Gender socialization teaches boys to “be a man” and girls to “be a woman”. But children also learn through secondary
socialization – that’s the process through which children become socialized outside the home, within society at large.
This often starts with school. Schools are often kids’ first introduction to things like bureaucracies, as well as systems of
rules that require them to be in certain places at certain times, or act in ways that may be different from what they
learned at home. Not only do schools teach us the three r’s – reading, ‘riting and ‘rithmetic – but they come with what
sociologists call a hidden curriculum – that is, an education in norms, values, and beliefs that are passed along through
schooling. Take, for example, a spelling bee. Its main goal is to teach literacy and encourage kids to learn how to spell.
But something as seemingly benign as a spelling bee can have many hidden lessons that stick with kids, too.

For example, it teaches them that doing better than their peers is rewarding – and it enforces the idea that the world
has winners and losers. Another hidden curriculum of school in general is to expose kids to a variety of people.
When your only socialization is your family, you just get one perspective on race, class, religion, politics, et cetera.
But once you go out into the world, you meet many people from many backgrounds, teaching you about race and
ethnicity, social class, disability, gender and sexuality, and more. School becomes not just a classroom for academic
subjects, but also for learning about different kinds of people. And, of course, schools are also where kids are exposed to
one of the most defining aspects of school-age life: peer groups.

Peer groups are social groups whose members have interests, social position, and usually age in common. As you get
older, your peer group has a massive impact on the socialization process.

Let’s go to the Thought Bubble to see just how big that impact can be. In the late 1950s, American sociologist James
Coleman began studying teenagers – how they interacted and how their social lives affected their education. He
interviewed teens in 11 high schools in the Midwest, asking them questions about what social group they identified with
and who else they considered members of their group. Based on these interviews, Coleman identified four main social
categories. And, uh, the names of these categories will probably sound familiar to you: They were nerds, jocks, leading
crowd, and burnouts. Basically, he discovered the 1950s version of The Breakfast Club.
And with these social categories came social prescriptions – behaviors that were expected of people in those groups.
Coleman found that certain things were important to the members of certain groups, like being a good dancer or
smoking or having money or getting good grades. He also tested the students’ IQs and assessed their grades.
And surprise! It turned out that who you hung out with affected how well you did in school.
In some of the schools, getting good grades was considered an important criterion for the “leading crowd” – aka the popular kids – but in other schools, it wasn’t. And in the schools where good grades were not a sign of popularity,
students who scored high on IQ tests actually did worse on their exams than similarly smart students at schools where
good grades made you popular.
Thanks Thought Bubble!

Now, Coleman’s study might seem like common sense – of course you and your friends are gonna be pretty similar.
Don’t we choose to be friends with people who are like us? Well, not entirely. Coleman’s study showed that we don’t
just pick peer groups that fit into our existing traits – instead, peer groups help mold what traits we end up with. OK, so
far, we have family, schools, and peers as the main forces that influence someone’s socialization. But what about me?
Yes, me, Nicole Sweeney. Am I part of your socialization? Or more precisely, are YouTube videos considered a form of socialization? Short answer: yes! Long answer: The media you consume are absolutely a part of your socialization.
TV and the internet are huge parts of Americans’ lives. And how we consume our media is affected by social traits, like
class, race, and age. A teenager or twenty-something in 2017 is much more likely to watch online media, like Netflix or YouTube, than television. And low-income Americans watch much more TV than their higher-income counterparts.
The media we consume also impact us dramatically.

The American Academy of Pediatrics, for example, has said there are connections between excessive television viewing in early childhood and cognitive, language, and social-emotional delays. But TV can also influence the attitudes of viewers, especially young ones. For example, studies have found that kids exposed to Sesame Street in randomized controlled trial settings reported more positive attitudes toward people of different races – most likely a result of the
program’s wide variety of characters from different racial and ethnic backgrounds. TV also affects us well beyond
childhood. One recent study found that MTV’s “16 and Pregnant” may have acted as a cautionary tale, helping to change
teen girls’ attitudes toward birth control and contributing to declining rates of teen pregnancy. So far, the types of
socialization we’ve talked about have been fairly subtle — but there are also more intense types of socialization. Total
institutions are places where people are completely cut off from the outside world, and face strict rules for how they
must behave.

First coined by sociologist Erving Goffman, the term “total institution” refers to places like the military, prisons, boarding
schools, or psychiatric institutions that control all aspects of their residents’ lives – how they dress, how they speak,
where they eat, where they sleep. And in a total institution, residents undergo resocialization, where their environment
is carefully controlled to encourage them to develop a new set of norms, values, or beliefs. They do this by, basically,
breaking down your existing identity and then using rewards and punishment to build up a whole new you. Think about
every boot camp movie you’ve ever seen. All soldiers are given the same haircut and uniform, expected to reply to
questions in the same way, put through the same grueling exercises, and humiliated by the same officer.
This process re-socializes the soldiers to put extreme value on their identity within the group, making them more willing
to value self-sacrifice if their unit is in danger. So whether you’re GI Jane training for a reconnaissance team or Molly
Ringwald trying to maintain her queen-bee status in the leading crowd, the you that you are has been powerfully shaped
by people and institutions. Now, think back on your own life – who has been the biggest influence on YOUR
socialization?
Who do you think that you yourself have influenced? Hard questions to answer, maybe, but definitely worthwhile – and
hopefully a little easier now that you’ve learned how sociologists think about it.
Social Interaction and Performance

You're daydreaming in class when the teacher calls on you and asks you a question. You don't know the answer, so you
look desperately around the room for help. Finally, one of your classmates whispers it to you. So you say the answer,
and the moment of terror is over. You go back to daydreaming, the teacher goes back to teaching, and everyone's
happy.
A lot of stuff just happened there. Stuff that raises many questions. Like, why are you worried about giving the right
answer? Why are you worried about answering the question at all? And why does your classmate help you out, when it
could get them in trouble? If you want the right answers to these questions, we need to talk about social interaction.
And we also need to talk about reality. Because, according to some sociological theories, the reality of your social world
– in your classroom and beyond – is basically a huge, life-long stage play.

Social interaction is simply the process by which people act and react in relation to others. Whenever people converse,
or yell, or fight, or play sports, that’s social interaction. And any place you find social interaction, you're going to find
social structure. Social structure consists of the relationships among people and groups. And this structure gives
direction to, and sets limits on, our behavior. Because our relationships establish certain expectations of everyone
involved, depending on the social setting.
This is really obvious in a classroom: The teacher teaches and the students learn, because that’s the expectation for that
relationship, in that setting. But if you run into a teacher, say, at the mall, you both behave differently – and probably
awkwardly – because the expectations for your interaction in that social setting have changed. Now, this still doesn't tell
us why these relationships work the way they do. But it does tell us where to look.
If our interactions are a matter of expectations, then we need to understand how those expectations are set, and for
that we need to talk about social status.

Status is a position that a person occupies in a society or social group. It's part of their identity, and it defines their
relationships with other people. So, the status of “teacher” defines how a teacher should relate to their students.
But statuses aren't just professions: gender, race, and sexual orientation are all social statuses, as are being a father, or a
child, or a citizen. And all the statuses held by a single person make up that person's status set. That status set can tell us
a lot about a person, because statuses exist in a hierarchy, with some statuses being more valued than others.
So if I tell you that someone is a white, middle-aged, male CEO, then you can make some pretty reasonable guesses about
his education, wealth, and the power he holds in society.
And you've probably noticed that there are different kinds of statuses; for example "white," "middle-aged," and "male"
are pretty different from the status of "CEO." The first three are all ascribed statuses. Ascribed statuses are those in
which a person has no choice; they're either assigned at birth or assigned involuntarily later in life. Race, for instance, is
an ascribed status assigned at birth, while the ascribed status of “middle-aged” happens at a point later in life.
CEO, on the other hand, is an achieved status – it’s earned, accomplished, or obtained with at least some effort on the
person’s part. Professions, then, are achieved statuses. So is being a student, or a parent. Beyond this difference, there’s
also the fact that some statuses are more important than others.

A master status is the status others are most likely to use to identify you. This can be achieved, like “professor,” or
ascribed, like “cancer patient.” And as that example shows you, a master status doesn’t need to be positive or desirable.
In fact, it doesn’t even need to be important to the person who holds it. It just needs to be important to other people,
who use the status as their primary way of locating that person in the social hierarchy. Also, statuses tend to clump
together in certain ways. Most CEOs are college educated, for example, but they aren’t always. And a mismatch or
contradiction between statuses is called a status inconsistency. When we talk about PhD students working as baristas, we’re usually remarking on exactly that mismatch, because there's a status inconsistency between PhD and barista. At least in the industrialized world, service workers aren't "supposed" to be highly educated. Now that's all very interesting, you might
say, but we still haven't said that much about social interaction. And you're right: status gets us started, but if we want
to get into how people behave, then we need to talk about roles. If status is a social position, then roles are the sets of
behaviors, obligations, and privileges that go with that status. So a person holds a status, but they perform a role. Keep
that word in mind: Perform.

Now, since a person can have multiple statuses, they can have multiple roles too. But a single status often has multiple
roles that go with it. For example, a teacher's role in the classroom is to teach and lead students. But in the faculty
lounge, the status of teacher has another role: acting as a colleague to other teachers, or as an employee to the principal
– roles that require a whole different bunch of behaviors than those found in the classroom. But all of the roles attached
to the single status of “teacher” make up that status' role set.
All statuses have role sets. And various role sets can sometimes demand contradictory behaviors of the person who
holds that set. When the roles attached to different statuses create clashing demands, that’s known as role conflict.
Parents who work, for instance, often need to decide between the demands of their jobs and the demands of their
families, which can lead to role conflict. And even the roles within a single status can create contradiction, in what we
call role strain. A student who has responsibilities for class, but also for basketball, and orchestra, and the yearbook
committee, experiences role strain as they try to balance the competing obligations of these roles, all within the context
of their status as a student.

Now sometimes, whether it’s because of conflict, strain, or other reasons, people just disengage from a certain role, in a
process called role exit. This can be voluntary, like quitting your job, or involuntary, like getting dumped. In either case,
it's rarely as simple as just walking out the door, because roles are a part of who we are. So exiting a role can be
traumatic, especially without preparation, or if the exit isn't by choice. Now, we've been talking about roles as though they’re prescriptive – as if they totally determined our behavior. But they don't!

Roles are guidelines, expectations that we have for ourselves and that others place on us. We may or may not internalize
those expectations, but even if we do, our behavior still isn’t completely controlled. But why do statuses come bundled
with roles in the first place? Why can't I just not perform my role? The answer is complicated, but part of it is that, well,
reality itself is socially constructed. I mean, there's nothing in the laws of physics that says that some people are
teachers, and that those people get to ask questions, and students have to answer them. But that doesn't mean these
statuses aren't real and don't have real roles attached to them. One good way of thinking about this is known as the
Thomas Theorem, developed by early 20th century American sociologists William Thomas and Dorothy Thomas.

It states, "If people define situations as real, they are real in their consequences." In other words, statuses and roles
matter, because we say they do. The perception creates the reality. So the reason you can't just not perform your role is
that, even if you don't think it matters, everyone else does think it matters! So the student who refuses to answer a
question gets in trouble, while the teacher who refuses to teach and just hangs out drinking wine with their feet up on
their desk gets fired. If you have the status of a teacher, people expect, even demand, that you do the things teachers
are expected to do. How you feel about your status doesn’t really enter into it. And we know who's a teacher and who's
a student based on our background assumptions, our experiences, and the socialization that teaches us about norms in
various situations.
So, this is how your reality becomes socially constructed – you, and everyone around you, use assumptions and
experiences to define what’s real. By interacting with the people around you, and expecting certain behaviors in the
context of roles, you actually create the social reality that shapes those interactions that you’re having.
The fact that this happens in interaction is really important, because your social reality is not just about you. It's about
everyone you're interacting with, and their expectations, too. It's about maintaining a performance.
And this idea of performance is really central to a sociological understanding of how people interact.
It’s the key to what’s known as the dramaturgical analysis of social interaction.
This approach, pioneered by Canadian-American sociologist Erving Goffman, understands social interaction as if it were
a play performed on stage for an audience. By Goffman’s thinking, people literally perform roles for each other, and the
point of social interaction is always – at least partly – to maintain a successful interaction that’s in line with expectations.
That is, to satisfy the audience. In order to do this, people need to carefully control the information others receive about
them, in a process called impression management. Like, if you're out on a first date, you’re not gonna talk about how
your last relationship ended, because you don't want to create a bad impression. But impression management isn't
merely a matter of what you say and don't say. It's also a matter of what you wear and what you do. That is to say, it's a
matter of what Goffman referred to as props and nonverbal communication.
Props, as you know, are just objects that performers use to help them make a certain impression:
So if you want to look professional, you wear a suit. If you want to look studious, make sure you're reading a book.
And the setting can be a prop too: Being the one standing at the front of the classroom is like 50% of what it takes to
look like a teacher.
And nonverbal communication includes body language – like standing up straight in order to look respectable, and
maintaining or averting eye contact – as well as gestures, like waving hello to your friend. Together, props and nonverbal
communication are both examples of what Goffman called sign vehicles: things we use to help convey impressions to
people we interact with.

Those vehicles are important aspects of the performance, but really the most fundamental distinction is the one
between what’s part of the performance and what isn't – in other words, what the audience sees, and what they don't.
Goffman called this frontstage and backstage. Frontstage is where the audience is and where the performance happens,
while backstage is where the performer can drop the performance and prepare. Often the things we do backstage would
totally ruin the performance we're trying to maintain frontstage.

A teacher cursing floridly while grading papers would be considered backstage: important preparation for teaching is
happening, but if any of her students – that is, the audience – saw her, it would totally ruin the performance, because it
defies expectations of how teachers are supposed to act. And not all performances are one-person shows.
The students, for instance, are all on what Goffman calls a team; they’re all working together to give a performance
collectively for the teacher. This doesn't mean they're all friends, or that they even like each other. It just means that
they all need to work together to pull off the show of being a good, attentive class. This is why your classmate
whispers the answer to you: They’re helping you maintain the class’s performance of attentiveness by acting as a
teammate.
And the teacher goes on teaching. It's important to understand that, in Goffman’s analysis, the performances that
everyone does all the time aren't necessarily adversarial: The students perform for the teacher, and the teacher
performs for the students, but everyone involved wants the performance to go smoothly.
You may not ever win an Oscar.
But according to dramaturgical analysis, your social interactions are where your statuses, roles, and all of the
expectations that they entail, come together for you to give, literally, the performance of your life. And that
performance is the stuff of social reality.
Prejudice and Discrimination

In February 1999, four New York City police officers were on patrol in the Bronx when they saw a young black man
standing on a stoop. They thought he looked suspicious. When they pulled over, he retreated into the doorway and
began digging in his pocket. He kept digging as the police shouted at him to show his hands; a few seconds later, the
man, Amadou Diallo, a 23-year-old immigrant from Guinea, was dead, hit by 19 of the 41 bullets that the police fired at
him. What Diallo was reaching for was his wallet. He was going for his ID as he stood on the steps of his own apartment
building. Diallo's story, and the officers' fatal pre-judgment of him, is recounted in Malcolm Gladwell's 2005 bestseller
Blink. Gladwell, and the social psychologists whose work he draws upon, explores Diallo's case as an example of that
grey area between deliberate violence and an accident, propagated by non-conscious, or implicit, biases. The officers did discriminate against Diallo, but the prejudice they acted on may have been driven by something more subtle than simple hatred. And that's an important thing to think about. Yes, there are lots of overtly bigoted people and policies at work all
over the world, but what we're interested in today is the more insidious, non-conscious automatic bias, and how it can
affect our behavior. The fact is, our implicit biases affect the way we relate to others in a very real way.

Our race, gender, age, religion, or sexual orientation can make the difference between whether we get a job or not, a
fair paycheck, or a good rental, or whether we get randomly pulled over or shot and killed for reaching for a wallet. In
the last two episodes, we've examined how we think about and how we influence one another, but social psychology is
also about how we relate to one another. Like what factors might cause us to help another person, or harm them, or
fear them? What are the social, and cognitive, and emotional roots of prejudice, racism, and sexism, and how do they
shape our society? These are some of the aspects of ourselves that are the hardest and most uncomfortable for us to
explore, which is why they're so important to understand. We've all been unfairly judged in our time, and let's not
pretend that we haven't done our fair share of uninformed judging too.

Like it or not, prejudice is a common human condition. Prejudice just means "prejudgment." It's an unjustified, typically
negative attitude toward an individual or group. Prejudicial attitudes are often directed along the lines of gender, ethnicity, socioeconomic status, or culture, and by definition, prejudice is not the same thing as stereotyping or discrimination,
although the three phenomena are intimately related. People may distrust a female mechanic. That's a prejudicial
attitude, but it's rooted in a stereotype, or over-generalized belief about a particular group.
Although it's often discussed in a negative way, stereotyping is really more of a general cognitive process that doesn't
have to be negative. It can even be accurate at times. Like, I have the stereotype that all crows have wings, injuries and
birth defects aside. And that happens to be true. But on the negative end, your prejudice against female mechanics may
be rooted in some inaccurate stereotype about women's skills with a socket wrench. And when stereotypical beliefs
combine with prejudicial attitudes and emotions, like fear and hostility, they can drive the behavior we call
discrimination.
So a prejudiced person won't necessarily act on their attitude. Say you believe in the stereotype that overweight people
are lazy. You might then feel a prejudiced distaste when you see someone who appears overweight. But if you act on
your prejudice, and, say, refuse to hire them for a job or don't let them sit at your lunch counter, then you've crossed
over into discriminating against them.

The former apartheid system of racial segregation in South Africa, the Nazis' mass killing of Gypsies, Jewish people, and
other groups, and centuries of bloodshed between Protestants and Catholics, are all extreme examples of violent
prejudice and discrimination. The good news is that in many cultures, certain forms of overt prejudice have waned over
time.
For example, in 1937 only 1/3 of Americans said that they'd vote for a qualified woman to be president, while in 2007,
that figure was up to nearly 90 percent. But of course more subtle prejudices can still linger. In the past, we've talked
about dual-process theories of thought, memories, and attitudes, and how, while we're aware of our explicit thoughts, our implicit cognition still operates under the radar, leaving us clueless about its effect on our attitudes and behavior. In the
same way, prejudice can be non-conscious and automatic. And I mean it can be so non-conscious that even when people
ask us point-blank about our attitudes, we unwillingly or unknowingly don't always give them an honest answer. Do you think that men are better at science than women? Or that Muslims are more violent than Christians? Or that overweight
people are unhealthy? Our tendency to unwittingly doctor our answers to questions like these is why we have the
implicit association test, or IAT. The test was implemented in the late 1990s to try to gauge implicit attitudes, identities,
beliefs, and biases that people are unwilling or unable to report. You can take the IAT online and measure your implicit
attitudes in all kinds of topics, from race, religion, and gender to disability, weight, and sexuality. It's basically a timed
categorization task.

For example, the age-related IAT looks at implicit attitudes about older vs. younger people. In it, you might be shown a
series of faces, old and young, and objects, pleasant and unpleasant, like pretty flowers vs. a pile of garbage. You're then
asked to sort these pictures, so you'd press the left key if you see a young face or a pleasant object, and press the right
key if you see an old face or an unpleasant object. That's the stereotypic condition. Your keystrokes correspond to
stereotypical pairs; in this case, associating good stuff with youth and bad stuff with older age. Then the test asks you to
do the same thing in a counter-stereotypic condition, pressing the left key if you see a young face or an unpleasant
object and the right key if you see an old face or a pleasant object.

The core of the test is your reaction time. Are you faster at sorting when you're working with a stereotypical pairing than
you are with counter-stereotypical pairings? If that's the case, even though you may think you're unprejudiced, you've
got an implicit association between youth and goodness, which, as you might guess, may have some implications about
how you think and act toward older people. The test is widely used in research, and contrary to what some critics think,
it's surprisingly predictive of discriminatory behavior in all kinds of experimental settings. So that's one way to measure
subtle, implicit prejudice. But obviously, overt prejudice is far from dead. That's why discrimination studies are
prominent in social psychology research, and they can also predict, sometimes with scary accuracy, how discrimination
might show up in broad social patterns, like wage inequality and job opportunity gaps.
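As a back-of-the-envelope illustration of that reaction-time logic, here's a minimal sketch with invented numbers – not real data, and a deliberate simplification of the published IAT scoring procedure, which also normalizes by the variability of responses – comparing mean sorting speeds across the two conditions:

```python
# Toy illustration of the IAT's core comparison: mean reaction times (ms)
# in the stereotypic vs. counter-stereotypic sorting conditions.
# All numbers below are invented, purely for illustration.

stereotypic_rts = [612, 587, 640, 598, 605]          # young+pleasant / old+unpleasant keys
counter_stereotypic_rts = [731, 702, 755, 718, 740]  # young+unpleasant / old+pleasant keys

def mean(values):
    return sum(values) / len(values)

# A positive gap means faster sorting under the stereotypical pairing -
# evidence of an implicit association between youth and "good".
gap = mean(counter_stereotypic_rts) - mean(stereotypic_rts)
print(f"stereotypic condition: {mean(stereotypic_rts):.0f} ms on average")
print(f"counter-stereotypic condition: {mean(counter_stereotypic_rts):.0f} ms on average")
print(f"implicit-association gap: {gap:.0f} ms")
```

That gap is the subtle signal the IAT is after; discrimination studies, by contrast, look at behavior directly.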

For instance, the 2012 Yale study led by social scientist Corinne Moss-Racusin demonstrated that science faculty across
the country systematically discriminated against female science students. In a double-blind study, a representative
sample of science faculty members were asked to hire a fictional student applicant for a lab-manager job. When the
applicant's name was Jennifer, instead of John, they viewed her as less competent, were less likely to hire her, offered
her less money, and were less likely to mentor her. And this prejudice was even exhibited by women faculty members.
And that's an important point. People on both sides of the stereotype tend to respond similarly, with the subjects of
prejudice themselves often holding the same stereotypical implicit attitudes or engaging in the same discriminatory
behavior.
So when we say that stereotypes are pervasive, we mean pervasive. Now it's all too easy to hold up examples of how
people are prejudiced, but the real root of the issue is why they are.

Here are a few possibilities: For one, prejudices can come up as a way of justifying social inequalities. This happens when
people on both sides of the power and wealth spectrum start believing that people get what they deserve, and they
deserve what they get. This is called the just-world phenomenon.

Prejudices can also be driven by the "us vs. them," or as social psychologists often call it, the ingroup-outgroup
phenomenon. Whether you're in a soccer stadium, or the political arena or school lunchroom, or, you know, in the
comments of this video, dividing the world into in-groups and out-groups definitely drives prejudice and discrimination.
But an in-group identity also gives its members the benefits of communal solidarity and a sort of safety in numbers. This
in-group bias, or tendency to favor your own group at the expense of others, is powerful, even when it's totally
irrational. One common social psychology exercise on in-group favoritism involves dividing a class into two arbitrary
groups, say, those wearing sneakers and those not wearing sneakers. Each person sits with his or her group and is told to
list differences between themselves and the opposing group.

The lists usually start out pretty tame, but become more strident as they grow longer. Eventually, you have sneaker-
wearing kids saying that they're just smarter than the people without sneakers. The kids who don't have sneakers say
that the other kids are trashy and low-class. Soon enough, each group has inflated itself and derided the opposing group,
even though the division between the two was essentially meaningless to begin with. Little exercises like this illustrate
the power of any ingroup-outgroup distinction in creating conflict between groups, and that brings us to the
psychological nature of conflict itself. History is littered with examples of how the us vs. them mentality has fueled violence in warfare.
Social Thinking

Question: Why do people do horrible things? Slave owners, and Nazis, any of the perpetrators of history's atrocities.
How do they so successfully dehumanize other people for so long? At a smaller scale, how do bullies in the lunchroom
manage to treat other kids with such cruelty and then go home and pet their dog and call their grandma and say "happy
birthday?" Most of what we've been studying so far has focused on the individual. We've covered sub-fields of
psychology like cognitive, personality, and clinical psychology, which tend to address the phenomena contained within a
single person's mind.
But there's also social psychology, which focuses on the power of the situation. It examines how we think about,
influence, and relate to one another in certain conditions. And it's better equipped to answer this question about people
doing horrible things. Social psychology can not only give us some of the tools we need to understand why people
behave brutally, it can also help us understand why we sometimes act heroically.

Like why did Jean Valjean reveal his true identity to save some stranger from being tried in his place? And why did Nazi
Oskar Schindler risk his own hide to save over a thousand Jewish people? What made Darth Vader throw the Emperor
down that hole, even as he was being electrocuted? I can't say there are any easy answers about humanity's greatness or its horribleness. Certainly, there aren't any that we can find in the next ten minutes. But we can point ourselves in
the right direction, and it starts with social thinking. When we're trying to understand why people act like villains or
heroes, one of the things we're really asking is, "Did they do what they did because of their personality? Or their
situation?" Austrian psychologist Fritz Heider began plumbing the depths of this question in the 1920s when he was
developing what's now known as the Attribution Theory.

This theory simply suggests that we can explain someone's behavior by crediting either their stable, enduring traits - also
known as their disposition - or the situation at hand. And we tend to attribute people's behavior to either one or the
other. Sounds pretty simple, but it can be surprisingly hard to tell whether someone's behavior is dispositional or
situational.
Say you see Bruno at a party and he's acting like a wallflower all night. You might assume that he just has a shy
personality. But maybe he doesn't; maybe he'd ordinarily be re-enacting all the moves from Footloose at this party but
on this night, he had a twisted ankle or a headache or he'd just seen his ex with somebody new - those are all situational
explanations.

Overestimating the forces of personality while underestimating the power of the situation is called the Fundamental
Attribution Error. And as you can imagine, making this kind of error can really end up warping your opinion of another
person and lead to false snap judgments. This might not be such a big deal when it comes to Bruno and his awesome
dance moves but according to one study of college students, 7 in 10 women report that men have misread their polite
friendliness - which would be appropriate for the situation - as a sexual come-on. We choose how we explain other people's behavior every day, and what we choose to believe can have big consequences. For example, our political views
will likely be strongly influenced by whether we decide to attribute poverty or homelessness to personal dispositions,
like being lazy and looking for a hand-out, or social circumstances like lack of education and opportunity. And these
attitudes can, in turn, affect our actions. Activists and politicians know this well and they can use it to their advantage to
persuade people in different ways.

In the late 1970s and 80s, psychologists Richard Petty and John Cacioppo developed a dual-process theory of how persuasion works. The first part of their model is known as Central Route Persuasion, and it
involves calling on basic thinking and reasoning to convince people. This is what's at work when interested people focus
on the evidence and arguments at hand, and are persuaded by the actual content of the message. So when you're
watching a political debate, you might be persuaded by a candidate's particular policies, positions or voting history.
That is, the stuff they're actually sayin'. But we all know that persuasion involves more than that. There is also Peripheral
Route Persuasion at work. This influences people by way of incidental cues, like a speaker's
physical attractiveness or personal relatability. There's not a lot of hard thinking going on here, it's more of a gut
reaction. So you might decide to vote for a particular candidate because you think they're cute or they're from your
home town.
Peripheral Route Persuasion happens more readily when you're not paying a ton of attention, which is why billboards
and television ads can be scarily effective. So that's how politicians and advertisers and maybe bosses and teachers and
pushy friends try to change our behaviour by changing our attitudes. But, it turns out that the reverse is true too. Our
attitudes can be affected by our behaviors. You might have heard the phrase, "Fake it till you make it," meaning if you smile when you're actually sad, the act of smiling may carry you through an attitude change until you actually feel
better.

Sometimes we can manipulate ourselves this way, but it's also an incredibly effective method people use to persuade
each other. It generally works best in increments, through what psychologists call the foot-in-the-door phenomenon.
People tend to more readily comply with a big request after they've first agreed to smaller, more innocuous requests.
Like Darth Vader didn't just go from "Go get 'em Anakin," to Dark Lord overnight. He was slowly enticed to the dark side,
by a series of escalating actions and attitude changes. Do this favor for me, now run this errand, now kill these Padawans. Now blow up a planet! What started as small actions went on to become big ones, gradually transforming Vader's beliefs about himself and others. There's plenty of experimental evidence that moral action really does strengthen moral convictions, just as immoral action strengthens immoral attitudes. And there is perhaps no better
example of this than the Stanford Prison Experiment.

Back in 1971, Stanford psych professor Philip Zimbardo and his team put an ad in the local paper looking for volunteers to participate in a 14-day experiment. After screening around 70 applicants, they deemed 24 male college students physically and mentally fit enough to participate in the study. For their troubles, the participants would each be given $15 a day.
The participants didn't know the exact nature of the experiment, just that it involved a fake prison situation. And with a
coin flip, half were randomly deemed prisoners and the other half guards. The guards were told that it was the prisoners' behavior that was being studied. The prisoners weren't told much of anything, aside from the fact that they'd be arrested and taken to prison. Other than that, neither group had many specific instructions. Zimbardo wanted to observe
how each party adapted to their roles, and so, on a quiet Sunday summer morning in Palo Alto, real cops swooped in
and arrested the prisoners in their homes on charges of robbery. They were frisked, handcuffed, and read their
rights. Back at the station, they were formally booked and then blindfolded in a holding cell wearing only hospital
gowns. The researchers had taken great care to make sure that the setting was extremely realistic, which is one reason
they used real cops in the arrest before handing the prisoners over to the fake guards. And it took no time at all for this role-playing to become really, really real.

The initial trauma and humiliation of the arrest, booking, strip-searching, and waiting immediately kicked off a loss of identity in the prisoners. A few prisoners only made it through the first night before they became too emotionally
distressed and had to be released. Things only went downhill from there. Though the guards could act any way they
wanted as long as they didn't physically hurt anyone, encounters quickly became cruel, hostile, and dehumanizing.
Guards hurled insults and commands, referred to the prisoners only by number, and put some of them in solitary
confinement. Prisoners started breaking down, others rebelled, and still others became passively resigned as if they
deserved to be treated so badly. Things got bad enough that the experiment ended after only six days, causing relief in
the fake prisoners, while interestingly leaving some fake guards feeling angry.

Luckily, everyone involved bounced back to normal once out of the prison setting. All of those negative moods and
abusive behaviors were situational, and that fact reinforced the important concept that the power of a given situation
can easily override individual differences in personality. Although it would never fly by today's ethical standards, Zimbardo's famous study remains influential because it sheds such a harsh light on the nature of power and corruption.
And yet, people differ. Many people succumb and become compliant in terrible situations, but not everyone does. Lots
of people risked their lives to hide Jewish people in World War II, help runaway slaves along the Underground Railroad,
keep Tutsi refugees safe during the Rwandan genocide, or generally refuse to comply or participate in actions they didn't
believe in. Some people can, and do, resist turning to the dark side, even when it seems like everyone around them is
going mad. And yet, the fact is, these people tend to be in the minority. So why? Why does it seem so easy to rationalize
a negative action or attitude and so hard to muster the positive ones? One partial explanation comes from American
social psychologist Leon Festinger's theory of cognitive dissonance. It's one of the most important concepts in
psychology. Festinger's theory begins with the notion that we experience discomfort - or dissonance
- when our thoughts, beliefs, or behaviors are inconsistent with each other.

Basically, we don't like to confuse ourselves. For example, if Bruno was generally considered a peaceful person but finds
himself suddenly punching at his friend over a fender-bender, he's likely experiencing some level of cognitive
dissonance. So, by Festinger's thinking, Bruno might relieve this tension by actually modifying his beliefs in order to
match the actions he's already committed, like telling himself, "Turns out, I'm not such a nice guy after all, maybe I'm
actually a bully." On the other hand, he might resolve his internal tension by changing how he thinks about the situation.
He might still think of himself as a peaceful person, but realize that an unusual situation led to an unusual action, like,
he'd had a bad day and it was his mom's new car, or his friend was just really askin' for it. So, he can keep being the
ordinarily peaceful guy he was before. It's kind of an inverted fundamental attribution error, if you think about it: attributing a person's actions mainly to the situation instead of his personality. The point is that this mismatch between
what we do and who we think we are induces tension - cognitive dissonance - and that we tend to want to resolve that
tension. That's part of what turns an Anakin into a Darth Vader, and then, if we're lucky, back into an Anakin.
Social Influence

If someone in a position of authority told you to like, stop walking on the grass, you would stop walking on the grass,
right? And if they told you to help someone's grandma cross the street, or pick up your dog's poop, or put your shoes on
before you go into a store, you'd probably comply. But what if they ordered you to physically hurt another person?
You're probably thinking "No way! I could never do something like that." But there's a good chance you're wrong. In the
early 1960s, Yale University psychologist Stanley Milgram began what would become one of social psychology's most
famed and chilling experiments. Milgram began his work during the widely publicized trial of World War II Nazi war criminal Adolf Eichmann. Eichmann's defense - like that of many other Nazis - for sending millions of people to their deaths was that he was simply following the orders of his superiors. And while that may have been true, it didn't fly in court, and Eichmann was
ultimately executed for his crimes. But the question got Milgram to thinking, what might the average person be capable
of when under orders? So, for his initial experiment, Milgram recruited forty male volunteers using newspaper ads. He
built a phony "shock generator" with a scale of thirty switches that could supposedly deliver shocks in increments from
30 volts up to 450 volts, labeled with terms ranging from "slight shock" to "dangerous shock" up to simply "XXX." He then paired
each volunteer participant with someone who was also apparently a participant, but was in fact one of Milgram's
colleagues, posing as a research subject. He had them draw straws to see who would be the "learner" and who would be
the "teacher." The volunteers didn't realize that the draw was fixed so that they'd always be the teacher, while
Milgram's buddy would be the learner. So the fake learner was put into a room, strapped to a chair, and wired up with
electrodes. The teacher, the person who was being studied, and a researcher who was played by an actor, went into
another room with a shock generator that the teacher had no idea was fake. The learner was asked to memorize a list of
word pairs, and the participant was told that he'd be testing the learner's recall of those words and should administer an
electric shock for every wrong answer, increasing the shock level a little bit each time. From here, the pretend learner
purposely gave mainly wrong answers, eliciting shocks from the participant. If a participant hesitated, perhaps swayed
by the learner's yelps of pain, the researcher gave orders to make sure he continued. These orders were delivered in a
series of four prods.

The first was just "Please continue," and if the participant didn't comply, the researcher issued other prods until he did.
He'd say "The experiment requires you to continue" and then "It's absolutely essential that you continue" and finally
"You have no choice but to continue." Even Milgram was surprised by the first round of experiments. About two-thirds
of the participants ended up delivering the maximum 450 volt shock. All of the volunteers continued to at least 300
volts. Over the years, Milgram kept conducting this experiment, changing the situation in different ways to see if it had any
effect on people's obedience. What he repeatedly found was that obedience was highest when the person giving the
orders was nearby and was perceived as an authority figure, especially if they were from a prestigious institution.

This was also true if the victim was depersonalized or placed at a distance, such as in another room. Plus, subjects were
more likely to comply with the orders if they didn't see anyone else disobeying, if there were no role models of defiance.
In the end, Milgram's path-breaking work shed some seriously harsh light on the enormous power of two of the key cornerstone topics of social psychology: social influence and conformity.

We all conform to some sort of social norms, like following traffic laws or even obeying the dress codes for different roles and environments. When we know how to act in a certain
group or setting, life just seems to go more smoothly. Some of this conformity is non-conscious automatic mimicry, like
how you're likely to laugh if you see someone else laughing or nod your head when they're nodding. In this way, group
behavior can be contagious.

But overall, conformity describes how we adjust our behavior or thinking to follow the behaviour or rules of the group
we belong to. Social psychologists have always been curious about the degree to which a person might follow or rebel
against their group's social norms.
During the early 1950s, Polish-American psychologist Solomon Asch demonstrated the power of conformity through a simple
test. In this experiment, the volunteer is told that they're participating in a study on visual perception and is seated at a
table with five other people. The experimenter shows the group a picture of a standard line and three comparison lines of various lengths, and then asks the people to say which of the three comparison lines matches the standard line. It's clear to
anyone with any kind of good vision that the second line is the right answer, but the thing is, most, if not all of the other
people in the group start choosing the wrong line. The participant doesn't know that those other people are all actors, a
common deception used in social-psychological research, and they're intentionally giving the wrong answer. This causes
the real participant to struggle with trusting their own eyes or going with the group. In the end most subjects still gave
what they knew was the correct answer, but more than a third were essentially just willing to give the wrong answer to
mesh with the group. Asch and subsequent researchers found that people are more likely to conform to a group if
they're made to feel incompetent or insecure and are in a group of three or more people, especially if all those people
agree. It also certainly doesn't hurt if the person admires the group because of maybe their status or their
attractiveness, and if they feel that others are watching their behavior. We also tend to conform more if we're from a
culture that puts particular emphasis on respect for social standards. This might sound a little bit familiar, like, all of high
school, fraternities or sororities, the big company you work for, or any other group that you've ever been a part of. The
classic experiments of Milgram and Asch showed us that people conform for lots of different reasons, but they both underscored the power of the situation in conformity – whether that situation elicits respect for authority, fear of being
different, fear of rejection, or simply a desire for approval. This is known as normative social influence, the idea that we
comply in order to fuel our need to be liked or belong.

But, of course, groups influence our behavior in more ways than just conformity and obedience. For example, we may
perform better or worse in front of a group. This is called social facilitation and it's what might, say, help you sprint the
last hundred meters of a race if people are cheering you on, but it's also what can make you nervous enough to forget
the words to that poetry you were supposed to be slamming in front of a crowd. But that's what can happen in front of a group - what happens when you're actually part of one? Do you work harder or start slacking? One study found that
if you blindfold students, hand them a rope and tell them to pull as hard as they can in a game of tug-of-war, the
subjects will put in less work if they think they're part of a team instead of pulling by themselves. About 20% less, it
turns out. This tendency to exert less effort when you're not individually accountable is called social loafing. That's
pretty good.
You can now add the word "loafing" to your scientific vocabulary. But a group's ability to either arouse or lessen our
feelings of personal responsibility can make us do more dangerous things than just phone in some group homework
assignment. It can also lead to deindividuation, the loss of self-awareness and restraint that can occur in group
situations. Being part of a crowd can create a powerful combination of arousal and anonymity; it's part of what fuels
riots and lynch mobs and online trolling. The less individual we feel, the more we're at the mercy of the experience of
our group, whether it's good or bad. And it should come as no surprise that the attitudes and beliefs we bring to a group
grow stronger when we talk with others who share them. This is a process psychologists know as group polarization, and
it often translates into a nasty "us" vs "them" dynamic.

And you know what is great at polarizing groups? The internet. The internet has made it easier than ever to connect like-
minded people and magnify their inclinations. This can, of course, breed haters - racists, for example, may become more racist in the absence of conflicting viewpoints - but it can, and often does, work for good, promoting education, crowd-sourcing
things like fundraising, and organizing people to fight all kinds of worldsuck. And group dynamics can not only affect our
personal decisions, they can also influence really big decisions on a larger, even national scale. Groupthink is a term
coined by social psychologist Irving Janis, to describe what happens when a group makes bad decisions because they're
too caught up in the unique internal logic of their group. When a group gets wrapped up in itself and everyone agrees
with each other, no one stops to think about other perspectives.

As a result, you get some big and bad ideas, including some enormous historical fiascoes, like the Watergate cover-up
and the Bay of Pigs invasion and the Chernobyl nuclear reactor accident. So while two heads may often be better than
one, it's important to make sure those heads are still open to different opinions or they could do some really dumb stuff.
In the end, it's best to understand ourselves and our decisions as informed simultaneously by both individual and group
factors, personality, and situation. And don't get too freaked out about what people are capable of; I mean, just think
back to Milgram's experiment. For the two-thirds of us who would shock someone to death in the right circumstance,
there's another third who wouldn't, reminding us that while group behavior is powerful, so is individual choice. Today
you learned about the power of social influence, conformity, and authority. We looked at the shocking results of the
famous Milgram experiment, the concept of automatic mimicry, and how Solomon Asch demonstrated the power of conformity in situations. You also learned how normative social influence sways us, how social facilitation can make or break your
performance and how social loafing makes people lazy in a group. And finally, we discussed how harmful
deindividuation, group polarization, and groupthink can be.
Aggression vs Altruism

Let me tell you about Robber's Cave. In 1954, a group of 11 boys, all about 12 years old, were invited to a special
summer camp in the deep woods of southeastern Oklahoma, at a place called Robber's Cave State Park. None of the
boys knew each other, although they all came from similar backgrounds. They spent their days bonding over things like
games and swimming and treasure hunts, and in no time, they formed a tight friendly group. They even came up with a
name for themselves: the Rattlers. But soon they began to notice something. No, not a guy in the woods with a hockey mask - there was another group of boys, also 11 of them, also the same age, who had been staying at the other end of the park the whole time. The Rattlers never interacted with these other boys, so they didn't know that those kids were also
spending time bonding over games and swimming and treasure hunts, and that they'd come up with a name for
themselves, too: the Eagles. But the Rattlers didn't like the look of the Eagles, oh, no, they didn't like them using their
baseball diamond or their dining hall. And the feeling was mutual. It didn't take long for each group to start complaining
to the camp's counselors about the other gang, and eventually, they both said that they wanted to set up a contest to
determine once and for all which group was better. The counselors were only too happy to comply, because as I'm sure
you've figured out by now, those counselors were actually researchers. The man who set up what would be
remembered as the Robber's Cave Experiment was Turkish American social psychologist Muzafer Sherif. He was
interested in what it would take for rivals to overcome their differences and resolve their conflicts. Specifically, Sherif
wanted to test something called Realistic Conflict Theory. He hypothesized that conflict happens when you combine
negative prejudices with competition over resources, and the boys at Robber's Cave were well on their way to proving
him right.
Over the next couple of days, the Rattlers and the Eagles competed against each other for prizes in a series of games,
like tug-of-war and foot races, and soon, what started as your basic trash talking and taunting and name-calling morphed into fist-fights, thefts, and raids on each other's cabins. But then their dynamics changed - or rather, were changed for them.
After the games were over, the researchers integrated the groups and gave the kids shared goals that they could only
achieve through cooperation. The tide quickly turned. All 22 boys worked together to move a stalled truck that was carrying their food; they took care of a partially felled tree that was deemed a danger to the camp; and they collaborated in setting up tents, even though they weren't given complete sets of equipment. While isolation and competition made
enemies of the strangers, shared goals and cooperation turned enemies into friends.

Over the past 39 weeks, we've learned a lot about ourselves, our emotions and our personalities, how our minds can get
sick, how we can help them get well again, why we can do vicious things and then turn around and act like heroes. So
maybe it's fitting that we wrap up this course by looking at a couple of opposing forces that some consider the very
definition of human nature: aggression and altruism. Conflict and cooperation.

You might think of it as the psychology of war and peace, or simply, what we can all learn from a bunch of 12-year-olds.
In psychology, aggression is defined as "behavior intended to hurt or destroy someone, something, or even yourself."
People aggress, as psychologists say, in all kinds of ways, verbally, emotionally, and physically, and for lots of different
reasons: out of anger, to assert dominance, or as a response to fear. But that's just a glimpse into why someone might
become aggressive. Where does the aggression actually come from?

Like a lot of behaviors we've talked about, it seems to emerge from that familiar combination of biological factors, like
genetic, neurological, and biochemical influences, and our environment and experience. In terms of genetic influences,
studies of twins, and yes, Crash Course Psychology might have been called Crash Course Studies of Twins, showed
that if one identical twin has a violent temper, often the other one does, too, but fraternal twins are much less likely to
be so similar. Neurologically speaking, no single area of the brain controls aggression, but certain areas like the limbic
system do appear to facilitate it. Research on violence and criminality has also revealed a link between aggression and
diminished activity in the frontal lobes, which play a vital role in impulse control.

And finally, our aggressiveness can be influenced by our own biochemistry: hormones like testosterone and glucocorticoids, along with pheromones, have all been implicated in animal models of aggression. It's a little trickier in humans - okay, it's a lot trickier in humans - but it's highly likely that our hormones are intimately linked with feeling and showing
aggression. Obviously, aggression isn't just about biology. Psychological and cultural factors also play an important role,
as does the power of the situation. For example, there's the Frustration-Aggression Hypothesis, the simple idea that
people become aggressive when they're blocked from reaching a goal. To demonstrate, consider the not-very-aggressive
sport of baseball.

There's a study that analyzed 44 years' worth of baseball stats, and focused on the more than 27,000 incidents when a
pitcher hit a batter with a ball. It turned out that this was most likely to occur if the pitcher was frustrated by a recent
home run or if one of his own teammates had been hit by a pitch in the previous inning. But we also learn aggression by
watching others. Like, if you grew up watching your parents throw popcorn and jeer lewdly at their most hated
soccer team, you might have learned something from their behavior. So combine all of those biological factors and
funnel them through a particular person with a particular history in a particular situation, and you can begin to see how
aggression can have many roots that grow together.

Thankfully, though, humans are more than their bad tempers. While some things in people will leave us annoyed and
angry, others breed friendship and affection. So yes, there are positive topics in social psychology, like altruism, our
selfless, even self-sacrificing regard for the welfare of others. This could be something as simple as jumpstarting a
stranger's car or as heroic as running into a burning building to save someone. But if being altruistic is so awesome, why
aren't we all that way all the time? Or maybe the better question is, why do we ever do anything selfless, like, what's in
it for us?

In the late 1960s, social psychologists Bibb Latane and John Darley conducted a series of experiments examining when
and why we help others. In one experiment, they placed a subject in a room, sometimes alone, sometimes with two
other subjects, and sometimes with two actors posing as subjects. Then, they simulated an emergency by filling the
room with smoke and waited to see if the subject would do anything to alert the others or help themselves. If the
subject was alone, they'd report the smoke 75% of the time. But subjects in a group of three only spoke up 38% of the
time. And when they were stuck in the room with two oblivious actors, only 10% of the participants said anything to the
others. Darley and Latane found that people typically helped others only if they noticed the incident, interpreted it as an
emergency, and then finally, assumed responsibility, and all of these things were much more likely to occur if a person
was alone, while the presence of others deterred the person from helping.
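Before moving on, it's worth seeing just how surprising those numbers are. If bystanders acted independently - each reporting the smoke at the 75% solo rate - bigger groups should be more likely, not less, to produce at least one report. Here's a minimal sketch of that baseline (the Python is my own illustration, not part of the lesson; the 75% and 38% figures come from the study as described above, and the independence model is just an assumed point of comparison):

```python
# A rough baseline, not from the source: if each of n bystanders
# independently reported the smoke at the 75% solo rate, the chance
# that at least one person reports would be 1 - (1 - p)^n.

solo_rate = 0.75  # observed rate for a subject who was alone

def any_report(n, p=solo_rate):
    """P(at least one of n independent bystanders reports the smoke)."""
    return 1 - (1 - p) ** n

observed = {1: 0.75, 3: 0.38}  # rates reported in the study above

for n, seen in observed.items():
    print(f"group of {n}: independence predicts {any_report(n):.0%}, "
          f"observed {seen:.0%}")
# group of 1: independence predicts 75%, observed 75%
# group of 3: independence predicts 98%, observed 38%
```

The gap between the predicted 98% and the observed 38% is what makes the finding so striking: groups don't just fail to add helpers, they suppress each individual's impulse to act.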
This kind of diffusion of responsibility, referred to as the bystander effect, can weaken our instinct for altruism. The
bystander effect is a bit like the concept of social loafing that we talked about. If you're around other people, it's easier
to think that someone else is going to pick up the slack or in this case, come to the rescue. When people do decide to
help others, they may do it for a number of reasons. One perspective is that we tend to help others mainly out of self-
interest. By this thinking, helping really isn't altruistic at all, and instead, our actions boil down to a sort of cost-benefit
analysis. Like, maybe we'd turn in a lost wallet because we're hoping for a reward or we pitch in on a project at work
because we think we'll get recognized and promoted by our bosses. Social psychologists contextualize these kinds of
examples in the broader theory of social exchange. When it comes to doing things for other people, we're always trying
to maximize our personal rewards while minimizing our costs. But social exchange doesn't have to be as selfish as that; it
can also mean that we act altruistically because we expect that the people we help will go on to help others, so if we
give someone a hand changing a tire, maybe they'll stop next time they see someone else, maybe even us, broken down
on the side of the road. You might know this concept: sometimes it's called the norm of reciprocity, sometimes it's called paying it forward.

And then there's the social responsibility norm, which is the simple expectation that people will help those who depend
on them, like any parent can expect to give more help than they're going to receive from young children. That's just part
of being a parent. Naturally, the world would be a delightful place if altruism were the standard for human behavior, but
then, psychology wouldn't be nearly so interesting. In some ways, you might say that what fuels conflict is the opposite
of altruism: self-interest.
Social psychologists view conflict as any perceived incompatibility of actions, goals, or ideas. That could mean two
nations fighting over a border, sparring religious or political groups, or you and your boo fighting over whose turn it is to
do the dishes. And in a weird conundrum of human behavior, a lot of conflicts arise from what psychologists call "a social trap," where people act in their own short-term self-interest, even though it takes a toll on the larger group and on themselves over the long term. You see this kind of thing all the time on an individual scale, like in a crime movie when a criminal betrays all of his criminal friends to get the big payout - it doesn't turn out very well for him in the end. But
on a larger scale, you can find social traps taking their toll on the environment, like when we poach elephants to sell
their ivory or cut down old growth forests to make a quick buck in the lumber market. Either way, when self-interest
succeeds in wrecking the collective interest by, say, depleting some limited resource, it becomes easy to start viewing
our neighbors as competitors, taking us right back to the ingroup vs. outgroup mindset that we all know causes big
problems.
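The structure being described here is, in game-theory terms, the classic prisoner's dilemma - a connection the lesson doesn't make by name, so treat this as an illustrative aside, with made-up payoff numbers standing in for the real stakes:

```python
# A minimal social-trap sketch with hypothetical payoffs.
# Each player picks "cooperate" or "defect"; payoffs[(mine, yours)]
# returns (my_points, your_points).

payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # everyone shares the resource
    ("cooperate", "defect"):    (0, 5),  # I get exploited
    ("defect",    "cooperate"): (5, 0),  # I exploit you
    ("defect",    "defect"):    (1, 1),  # the resource collapses
}

for mine in ("cooperate", "defect"):
    for yours in ("cooperate", "defect"):
        me, you = payoffs[(mine, yours)]
        print(f"I {mine}, you {yours}: I get {me}, you get {you}")

# Whatever you choose, I score more by defecting (5 > 3 and 1 > 0),
# but if we both follow that logic, we each end up with 1 instead of 3 -
# short-term self-interest wrecking the collective interest.
```

That's the poacher's math and the lumber company's math: each individual defection pays, right up until the shared resource is gone.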

So as long as there's self-interest, there's gonna be conflict. But before you get all down on humanity, remember those
Robber's Cave boys. They were ready to go full-on Lord of the Flies before shared goals forced them to cooperate and
ultimately, make peace. The power of cooperation to make friends of former enemies is one of the most hopeful areas
of psychological research. If greed and self-interest can destroy the world, perhaps cooperation can save it.
