
Running Head: SEX EDUCATION

Sex Education

Riya Bhadana

Indraprastha College For Women, University of Delhi


Sex Education


Sex education is instruction on issues relating to human sexuality, including emotional relations

and responsibilities, human sexual anatomy, sexual activity, sexual reproduction, age of consent,

reproductive health, reproductive rights, safe sex, birth control and sexual abstinence. Sex

education that covers all of these aspects is known as comprehensive sex education. Although

some form of sex education is part of the curriculum at many schools, it remains a controversial

issue in several countries, particularly with regard to the age at which children should start

receiving such education, the amount of detail which is revealed, and topics dealing with human

sexuality and behavior (e.g., safe sex practices, masturbation, and sexual ethics). You cannot opt

your children in or out of math, but when it comes to sex education, one of the most important

subjects taught in school, a parent can withdraw their child for no reason at all. Sex education

is important; research has demonstrated this repeatedly. Students who receive formal sex education in

schools have been shown to first have sexual intercourse later than students who have not had sex

education. Sex education does not encourage teenagers to have sex; it does quite the opposite

(Gabe L., 2016). Planned Parenthood is the nation's largest provider of sex education.

Mcgaughey (2018) studied the unique demographic and cultural drivers for young

traveler marriage and pregnancy. Mcgaughey found that engaging in pre-marital sex and being

pregnant outside of marriage are not only seen as bringing shame to the traveler girl and her

family, but may also lead to retribution against the girl. Marsman and Herold (1986) found that most respondents

supported the teaching of sex education but were divided about which values should be taught.

Whereas more than half disapproved of premarital sex, only one-third believed that an important

objective of sex education should be to discourage premarital sex. Anandhi (2007) suggested that

both the opponents and the proponents of the Adolescence Education Program (sex education)

share the same ideological premise of sexual restraint as a national virtue. Oettinger (1999)

found that sex education in the 1970s had some causal impact on teen sexual behavior, probably

in significant part by providing information that enabled teens to alter the risks of sexual activity.

In a study by Pearson (1979) a survey concerning the implementation and content of a potential

sex education program was sent to parents of enrolled students at M.S.S.D. Some 93% of the

respondents were in favor of a sex education curriculum. More than 50% of the parents

supported all 15 topics listed in the survey. A study by Scales & Kirby (1983) provides the median

and mean ratings of the individual barriers and a factor analysis of the most highly-rated barriers

to sex education. The single greatest barrier to sex education was administrators' fear of

community opposition. In order to determine the kinds of formal sex education programs within

residential facilities for the deaf, a sex education survey was conducted by Gerald & Gerald

(1976). It was found that many of these programs seemed to be crisis-oriented, and thus it is

hoped that the development of ongoing, long-range and more comprehensive programming will

be forthcoming in the area of sex education. Malfetti & Rubin (1968) found that despite general

support, there is no substantial agreement on either objectives or content of sex education. In

addition, there is a shortage of qualified teachers. A survey of teacher-preparation institutions

found only a few offering courses specifically designed to prepare teachers of sex education and

even fewer planning to do so. Tjaden (1988) states that little is known about the relationship

between pornography and sex education. There is little scientific information on how

pornography teaches people about sex, whom precisely it teaches about sex, or what it teaches

people about sexuality relative to other sources of information. Wiechmann & Ellis’s (1969)

study of the Effects of "Sex Education" on Premarital Petting and Coital Behavior suggested that

students who had received "sex education" were not found to have significantly more premarital

petting or coital experience than those without "sex education." No group differences were found

when premarital petting and coital experience of students with "sex education" exposure were

compared on the basis of the grade level of first exposure. Sabia (2006) found that while sex

education is associated with adverse health outcomes, there is little evidence of a causal link after

controlling for unobserved heterogeneity via fixed effects and instrumental variables. A study by

Bass (1974) found that parents and teachers are aware of the need of sex education but little is

being taught in the special education classes or in schools for the handicapped. Some basic

principles of sex education are listed along with suggestions for teaching the blind, the deaf, and

the retarded. Drazenovich (2015) argues that as a sex education strategy, the essentializing of

sexual identity within sex education should be supplanted by more constructivist approaches;

ones that allow for maximum individuation and self-expression. Queer-positive sex educators

might consider adopting some methods associated with spiritual pedagogy to assist students in

rethinking questions of sexuality and creating new possibilities for identities and creative self-expression.

Buston, Wight & Scott (2001) state that within schools, the values, experiences and

characteristics of individual classroom teachers are important in understanding what sex

education is actually delivered, particularly where the Guidance Team lacks cohesion. Kirkendall

& Hamilton (1954) found that sex education is being redefined so as to include both

psychological and emotional aspects of sex. Such matters as inter-sex associations, sex roles,

masculine and feminine adaptation and adjustment, emotional maturity and personality

development are increasingly recognized as necessarily included in sex education, and as being

conditioned by sex in its various manifestations. Libby (1970) found that the great majority of

parents approved of high school sex education integrated throughout the curriculum, as well as

requiring a course in family life and sex education. Most parents wanted chastity taught as "best"

if sex education is to be required, and at the same time wanted contraceptive education, but also

wanted sex education taught in a context of God, marriage, and parenthood.

The Present Study

Sex education is a kind of holistic education. It teaches an individual about self-

acceptance and the attitude and skills of interpersonal relationship. It also helps an individual to

cultivate a sense of responsibility towards others as well as oneself. It is time for us to understand

the need and importance of sex education for our growing children that is as essential as formal

education in developing a normal, healthy and aware individual. This study aims to examine the

current scenario and focuses on people’s contrasting perspectives regarding sex education by

drawing parallels between two generations' perceptions of sex education.


A survey is a method which uses questionnaires or interviews to obtain information about many

people (Passer & Smith, 2013). Survey research is the collection of data attained by asking

individuals questions either in person, on paper, by phone or online (Rouse). Kraemer (1991)

identified three distinguishing characteristics of survey research. First, survey research is used to

quantitatively describe specific aspects of a given population. Second, the data required for

survey research are collected from people and are, therefore, subjective. Finally, survey research

uses a selected portion of the population from which the findings can later be generalized back to

the population. Verbal surveys are often known as interviews and written surveys are

questionnaires (Glasow, 2005). Decision makers in both the public and private sectors use survey

results to understand past efforts and guide future direction. Yet there are many misperceptions

regarding what is required to conduct a good survey. Poorly conceived, designed, and executed

surveys often produce results that are meaningless, at best, and misleading or inaccurate, at

worst. The resultant costs in both economic and human terms are enormous (Cowles &

Nelson, 2015). The aim of survey research is to measure certain attitudes and/or behaviors of a

population or a sample. The purpose of a survey is to collect information from many individuals,

hoping to understand them as a whole. Survey research focuses on naturally occurring

phenomena. Rather than manipulating phenomena, survey research attempts to influence the

attitudes and behaviors it measures as little as possible. Most often, respondents are asked for

information. Survey research is primarily quantitative, but qualitative methods can be used too.

Survey research is a widely used data collection method that involves getting information from

people typically by asking them questions and collecting and analyzing the answers. Such data

can then be used to understand individuals’ views, attitudes, and behaviors in a variety of areas

such as political issues, quality of life at both the community and individual levels, and

satisfaction with services and products, to name but a few.


The modern survey goes back to ancient forms of the census. Early census assessed the property

available for taxation or the young men available for military service. Surveys for social research

in the United States and Great Britain began with social reform movements and social service

professions documenting the conditions of urban poverty. Scientific sampling and statistics were

initially absent in the surveys. At first, surveys were merely overviews of an area based on

questionnaires and other data. The Social Survey grew into both the modern quantitative survey

research and qualitative field research in a community. From the 1890s to the 1930s, it was the

major method of social research practiced by the Social Survey Movement; they used systematic

empirical inquiry to support socio-political reform goals. By the mid-1940s, the modern quantitative

survey had largely displaced it. Early social surveys were detailed empirical studies of specific

local areas based on many sources of quantitative and qualitative data. Most were exploratory

and descriptive. Researchers wanted to inform the public of the problems of industrialism and

provide information for democratic decision making.

Survey research expanded during World War II, especially in the United States. Survey

researchers studied morale, consumer demand, production capacity, enemy propaganda, and the

effectiveness of bombing. After World War II, officials dismantled the extensive government

survey research establishment as a cost-cutting move. Many researchers returned to

universities and created new social research organizations. Within three years of the end of

World War II, national survey research institutions had been established in France, Norway,

Germany, Italy, Netherlands, Czechoslovakia and Britain. At first, universities were hesitant to

embrace the new survey research centers. They were expensive and employed many people.

Since the 1970s, quantitative survey research has become huge in private industry, government,

and in many academic fields (e.g., communication, education, economics, political science,

public health, social psychology, and sociology). By the mid to late 2000s web surveys were

common and today there has clearly been a shift to what are called mixed-mode surveys, which

rely on a combination of face-to-face, mail, phone, and web-based surveys along with new

technologies that have appeared, such as the Interactive Voice Response survey, where

respondents use their touch-tone phone to record their answers, and Audio Computer-Assisted

Self-Interviews, which are respondent-administered surveys on a computer (Neuman, 2006).


Survey is an efficient method for systematically collecting data from a broad spectrum of

individuals and educational settings. Surveys are efficient in that many variables can be

measured without substantially increasing the time or cost. Surveys are capable of obtaining

information from large samples of the population. They are also well suited to gathering

demographic data that describe the composition of the sample (McIntyre, 1999, p. 74). Surveys

are inclusive in the types and number of variables that can be studied, require minimal

investment to develop and administer, and are relatively easy for making generalizations (Bell,

1996, p. 68). Surveys can also elicit information about attitudes that are otherwise difficult to

measure using observational techniques (McIntyre, 1999, p. 75). Survey data can be collected

from many people at relatively low cost and, depending on the survey design, relatively quickly.


Surveys are more expensive and time-consuming than most laboratory experiments using captive

participant pools. However, many cost-saving approaches can be implemented. Another drawback is the

impracticality of executing elaborate scripted scenarios for social interaction, especially ones

involving deception. Whereas these sorts of events can be created in labs with undergraduate

participants, they are tougher to do in the field (Lavrakas, Krosnick, & Visser, 2013). Surveys

are generally unsuitable where an understanding of the historical context of phenomena is

required (Pinsonneault and Kraemer, 2013). Bell (1996) observed that biases may occur, either in

the lack of response from intended participants or in the nature and accuracy of the responses

that are received. Other sources of error include intentional misreporting of behaviors by

respondents to confound the survey results or to hide inappropriate behavior. Finally,

respondents may have difficulty assessing their own behavior or have poor recall of the

circumstances surrounding their behavior. It is important to note, however, that surveys only

provide estimates for the true population, not exact measurements (Salant & Dillman, 1994).



Surveys offer the opportunity to execute studies with various designs, each of which is suitable

for addressing particular research questions of long-standing interest to social psychologists.

Panel Surveys

In a panel survey, data are collected from the same people at two or more points in time. Perhaps

the most obvious use of panel data is to assess the stability of psychological constructs and to

identify the determinants of stability (Krosnick & Alwin, 1989). Although people are often quite

willing to participate in a single cross-sectional survey, fewer may be willing to complete

multiple interviews (Krosnick, 1988).

Cross-Sectional Surveys

Cross-sectional surveys involve the collection of data at a single point in time from a sample

drawn from a specified population. This design is most often used to document the prevalence of

particular characteristics in a population. For example, cross-sectional surveys are routinely

conducted to assess the frequency with which people perform certain behaviours or the number

of people who hold particular attitudes or beliefs (Lavrakas, Krosnick, & Visser, 2013). Cross-

sectional data can be used to identify the moderators of relations between variables, thereby also

shedding some light on the causal processes at work (Krosnick, 1988b).

Longitudinal Design

It surveys the same group of participants two or more times over an interval of time

that may last a number of years. In principle, it allows the effects of systematic factors

associated with the passage of time, such as learning, maturation or aging, to be assessed, while

controlling for differences between the groups. Two difficulties weaken this design. First,

unpredicted events occurring between data collections may introduce a confounding influence, so

that data obtained at later points reflect both the passage of time and the unpredicted events.

Second, the initial sample may shrink as time passes because its members die, move away,

or become unavailable for other reasons. Consequently, the character of the sample may change

significantly and compromise the comparability of earlier and later results (Research Design and

Data Collection, Part II).


Telephonic Survey

Instead of interviewing respondents in person, researchers rely on telephone interviews as their

primary mode of data collection. And whereas computerized data collection is a relatively recent

development in face-to-face interviewing, most large-scale telephone survey organizations have

been using such systems for the past decade. In fact, computer-assisted telephone interviewing

(CATI) has become the industry standard, and several software packages are available to simplify

computer programming. Like CAPI, CATI involves interviewers reading from a computer screen,

on which each question appears in turn. Responses are entered immediately into the computer.

(Lavrakas, Krosnick, & Visser, 2013)

Face-to-face interviews

Face-to-face interviews involve the oral presentation of survey questions, sometimes with visual

aids. Data collection often requires a large staff of well-trained interviewers who visit respondents in

their homes. But this mode of data collection is not limited to in-home interviews; face-to-face

interviews can be conducted in a laboratory or other locations as well. Until recently, interviewers

always recorded responses on paper copies of the questionnaire, which were later returned to the

researcher. (Lavrakas, Krosnick, & Visser, 2013)

Self-administered questionnaire/Mail Survey

Often, questionnaires are mailed or dropped off to individuals at their homes, along with

instructions on how to return the completed surveys. Alternatively, people can be intercepted on

the street or in other public places and asked to complete a self-administered questionnaire, or such

questionnaires can be distributed to large groups of individuals gathered specifically for the

purpose of participating in the survey or for entirely unrelated purposes (e.g., during a class period

or at an employee staff meeting). Whatever the method of distribution, this mode of data collection

typically requires respondents to complete a written questionnaire and return it to the researcher.

Recently, paper-and-pencil self-administered questionnaires have sometimes been replaced by

laptop computers, on which respondents proceed through a self-guided program that presents the

questionnaire. When a response to each question is made, the next question appears on the screen,

permitting respondents to work their way through the instrument at their own pace and with

complete privacy. Computer assisted self-administered interviewing (CASAI), as it is known, thus

affords all of the advantages of computerized face-to-face and telephone interviewing, along with

many of the advantages of self-administered questionnaires (Lavrakas, Krosnick, & Visser, 2013).

Web-based survey

They are very fast and inexpensive; they allow flexible design and can use visual images and even

audio or video. The two types of Web surveys are static and interactive. A static Web or e-mail

survey is like the presentation of a page of paper but on the computer screen. An interactive Web

or e-mail survey has contingency questions and may present different questions to different

respondents based on prior answers (Neuman, 2011).
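The contingency logic that distinguishes an interactive Web survey from a static one can be sketched in a few lines of Python. The question wordings and function name below are hypothetical illustrations, not taken from any cited source:

```python
# Sketch of an interactive Web survey's contingency logic: the follow-up
# question shown depends on a respondent's prior answer. Wordings and the
# function name are hypothetical examples.

def next_question(had_sex_ed):
    """Return the contingency (follow-up) question implied by a yes/no answer."""
    if had_sex_ed == "yes":
        return "At what age did you first receive formal sex education?"
    return "Would you have wanted formal sex education in school?"

# A static survey presents every respondent the same fixed page;
# an interactive one branches per respondent:
print(next_question("yes"))
print(next_question("no"))
```

A static Web survey, by contrast, would simply render the same fixed list of questions for every respondent.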


Questions are the centerpiece of survey research. Because the way they are worded can have a

great effect on the way they are answered, selecting good questions is the single most important

concern for survey researchers.

 Write clear questions- Question writing for a particular survey might begin with a

brainstorming session or a review of previous surveys. Every question that is considered

for inclusion must be reviewed carefully for its clarity and ability to convey the intended

meaning. Questions that were clear and meaningful to one population may not be so to

another. Nor can you simply assume that a question used in a previously published study

was carefully evaluated. Adherence to a few basic principles will go a long way toward

developing clear and meaningful questions.

 Avoid confusing phrasing- In most cases, a simple direct approach to asking a question

minimizes confusion. Use shorter rather than longer words: brave rather than courageous;

job concerns rather than work-related employment issues (Dillman, 2000). Use shorter

sentences when you can. A lengthy question often forces respondents to “work hard,” that

is, to have to read and reread the entire question. Lengthy questions can go unanswered or

can be given only a cursory reading without much thought.


 Avoid vagueness- Questions should not be abbreviated in a way that results in confusion.

It is particularly important to avoid vague language; there are words whose meaning may

differ from respondent to respondent.

 Provide a frame of reference- Questions often require a frame of reference that provides

specificity about how respondents should answer the question.

 Avoid double negatives and negative words- Respondents have a hard time figuring out

which response matches their sentiments because some statements are written as a double

negative. Such errors can easily be avoided with minor wording changes.

 Avoid double-barreled questions- Double-barreled questions produce uninterpretable

results because they actually ask two questions but allow only one answer.

 Minimize the risk of bias- Specific words in survey questions should not trigger biases,

unless that is the researcher’s conscious intent. Such questions are referred to as leading

questions because they lead the respondent to a particular answer. Biased or loaded words

and phrases tend to produce misleading answers.



Fill-in-the-Blank Format

Using the fill-in-the-blank format requires you to choose the manner by which the respondent

should answer the question. One of the most common uses of fill-in-the-blanks is to determine the

name, age, gender, age group and other demographic data indicators by having the respondent

write or mark the answer in the blank provided.


Multi-Option Format

As the name suggests, you present a question to the respondent and he will answer it based on

the multiple options available.

Use Likert-Type Response Categories

Likert-type responses generally ask respondents to indicate the extent to which they agree or

disagree with statements. The response categories list choices for respondents to select their level

of agreement with a statement from strongly agree to strongly disagree.
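For analysis, Likert-type responses are typically converted to numeric codes. The 1-5 coding below is a common convention, not something prescribed by the sources cited here:

```python
# Conventional 1-5 numeric coding of a five-point Likert item
# (a common analysis convention, assumed here for illustration).
LIKERT_CODES = {
    "strongly disagree": 1,
    "disagree": 2,
    "neither agree nor disagree": 3,
    "agree": 4,
    "strongly agree": 5,
}

def code_responses(responses):
    """Map verbal Likert responses to numeric codes for analysis."""
    return [LIKERT_CODES[r.strip().lower()] for r in responses]

print(code_responses(["Agree", "Strongly disagree"]))  # [4, 1]
```

Coding the labels numerically lets the researcher compute means, correlations, and scale reliabilities across many such items.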

Unstructured Response Formats

Unstructured response formats simply require the respondent to write his answer in detail.


Survey questions are answered as part of a questionnaire. The context created by the questionnaire

has a major impact on how individual questions are interpreted and answered. As a result, survey

researchers must carefully design the questionnaire as well as individual questions. There is no

precise formula for a well-designed questionnaire. Nonetheless, some key principles should guide

the design of any questionnaire, and some systematic procedures should be considered for refining it.


 Maintain consistent focus- A survey should be guided by a clear conception of the research

problem under investigation and the population to be sampled. Throughout the process of

questionnaire design, the research objective should be the primary basis for making

decisions about what to include and exclude and what to emphasize or treat in a cursory

fashion. The questionnaire should be viewed as an integrated whole, in which each section

and every question serve a clear purpose related to the study’s objective and each section

complements other sections.

 Build on existing instruments- Surveys often include irrelevant questions and fail to include

questions that, the researchers realize later, are crucial. One way to ensure that possibly

relevant questions are asked is to use questions suggested by prior research, theory,

experience, or experts (including participants) who are knowledgeable about the setting

under investigation.

 Refine and test questions- The only good question is a pretested question. Before you rely

on a question in your research, you need evidence that your respondents will understand

what it means. So try it out on a few people. One important form of pretesting is discussing

the questionnaire with colleagues. You can also review prior research in which your key

questions have been used. Professional survey researchers also use a technique for

improving questions called the cognitive interview (Dillman, 2007). Conducting a pilot

study is the final stage of questionnaire preparation. Complete the questionnaire yourself

and then revise it.

 Order the questions- The sequence of questions on a survey matters. As a first step, the

individual questions should be sorted into broad thematic categories, which then become

separate sections in the questionnaire. The first question deserves special attention,

particularly if the questionnaire is to be self-administered. This question signals to the

respondent what the survey is about, whether it will be interesting, and how easy it will be

to complete. The first question should be connected to the primary purpose of the survey;

it should be interesting, it should be easy, and it should apply to everyone in the sample

(Dillman, 2007). Question order can lead to context effects when one or more questions

influence how subsequent questions are interpreted (Schober, 1999). Prior questions can

influence how questions are comprehended, what beliefs shape responses, and whether

comparative judgments are made (Tourangeau, 1999). Some questions may be presented

in a “matrix” format. Matrix questions are a series of questions that concern a common

theme and that have the same response choices. The questions are written so that a common

initial phrase applies to each one. It is very important to provide an explicit instruction to

“Check one response on each line” in a matrix question because some respondents will

think that they have completed the entire matrix after they have responded to just a few of

the specific questions.

 Make the questionnaire attractive- An attractive questionnaire—neat, clear, clean, and

spacious—is more likely to be completed and less likely to confuse either the respondent

or, in an interview, the interviewer. An attractive questionnaire does not look cramped;

plenty of “white space”—more between questions than within question components—

makes the questionnaire appear easy to complete. Response choices are listed vertically

and are distinguished clearly and consistently, perhaps by formatting them in all capital

letters and keeping them in the middle of the page. Skip patterns are indicated with arrows

or other graphics. Some distinctive type of formatting should also be used to identify

instructions. Printing a multipage questionnaire in booklet form usually results in the most

attractive and simple-to-use questionnaire (Dillman, 2000, pp. 80–86).


Open versus Closed Questions

An open-ended question permits the respondent to answer in his or her own words (see, e.g., C.

Smith, this volume, Ch. 12; Bartholomew, Henderson, & Marcia, this volume, Ch. 11); in contrast, a closed-

ended question requires that the respondent select an answer from a set of choices offered

explicitly by the researcher.

Rating versus Ranking

Practical considerations enter into the choice between ranking and rating questions as well.

Respondents could be asked this question directly (a ranking question), or they could be asked to

rate their attitudes separately, and the researcher could infer which is preferred. With this

research goal, asking the single ranking question seems preferable and more direct than asking

the two rating questions. But rank-ordering a large set of objects takes much longer and is less

enjoyed by respondents than a rating task (Elig & Frieze, 1979; Taylor & Kinnear, 1971). But

rankings are more effective than ratings, because ratings suffer from a significant problem: non-

differentiation. When rating a large set of objects on a single scale, a significant number of

respondents rate multiple objects identically as a result of survey satisficing (Krosnick, 1991b).

The order of response alternatives

The answers people give to closed-ended questions are sometimes influenced by the order in

which the alternatives are offered. When categorical response choices are presented visually, as

in self-administered questionnaires, people are inclined toward primacy effects, whereby they

tend to select answer choices offered early in a list (e.g., Krosnick & Alwin, 1987; Sudman,

Bradburn, & Schwarz, 1996). But when categorical answer choices are read aloud to people,

recency effects tend to appear, whereby people are inclined to select the options offered last

(e.g., McClendon, 1991).
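One common remedy for primacy and recency effects is to randomize the order of categorical response alternatives across respondents, so that no single option systematically benefits from its position. A minimal sketch, with a hypothetical function name and example choices:

```python
import random

def randomized_options(options, seed=None):
    """Return a per-respondent shuffled copy of categorical response
    choices, so order-of-presentation effects average out across the sample."""
    rng = random.Random(seed)  # per-respondent random stream
    shuffled = list(options)
    rng.shuffle(shuffled)
    return shuffled

choices = ["Newspapers", "Television", "Radio", "The Internet"]
# Each respondent sees the same choices, in an independently drawn order.
print(randomized_options(choices, seed=42))
```

Note that this applies only to categorical choices; ordered scales such as Likert items keep their natural order.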

No-Opinion Filters and Attitude Strength

Concerned about the possibility that respondents may feel pressure to offer opinions on issues

when they truly have no attitudes (e.g., P. E. Converse, 1964), questionnaire designers have often

explicitly offered respondents the option to say they have no opinion.

Question Wording

The logic of questionnaire-based research requires that all respondents be confronted with the

same stimulus (i.e., question), so any differences between people in their responses are due to

real differences between the people. But if the meaning of a question is ambiguous, different

respondents may interpret it differently and respond to it differently. Therefore, experienced

survey researchers advise that questions always avoid ambiguity (Lavrakas, Krosnick, & Visser, 2013).


The cover letter

It is critical to the success of a mailed survey. This statement to respondents sets the tone for the

questionnaire. A carefully prepared cover letter should increase the response rate and result in

more honest and complete answers to the survey questions.


Mailed surveys, electronic questionnaires, and phone interviews are intended for completion by

only one respondent. The same is usually true of in-person interviews, although sometimes

researchers interview several family members at once. On the other hand, a variant of the

standard survey is a questionnaire distributed simultaneously to a group of respondents, who

complete the survey while the researcher (or assistant) waits.


As mentioned earlier, in-person interviews are the most expensive type of survey. Phone

interviews are much less expensive, but surveying by mail is cheaper yet. Electronic surveys are

now the least expensive method because there are no interviewer costs, no mailing costs, and, for

many designs, almost no costs for data entry. Of course, extra staff time and expertise are

required to prepare an electronic questionnaire.

The population

A number of characteristics of the population are relevant to selecting a mode of data collection.

For example, completion of a self- administered questionnaire requires a basic proficiency in

reading and, depending on the response format, perhaps writing.

Sampling strategy

The sampling strategy to be used may sometimes suggest a particular mode

of data collection. For example, some pre-election polling organizations draw their samples from

lists of currently registered voters. Such lists often provide only names and mailing addresses,

which limits the mode of data collection to face-to-face interviews or self-administered surveys.

Desired response rate

Self-administered mail surveys typically achieve very low response rates, often less than 50% of

the original sample when a single mailing is used. Techniques have been developed to yield

strikingly high response rates for these surveys, but they are complex and more costly (see Dillman, 1978). Face-to-face and telephone interviews often achieve much higher response rates,

which reduces the potential for nonresponse error.
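The response-rate arithmetic behind these figures can be shown with a short sketch; the mailing and return counts below are hypothetical, chosen only to illustrate a single-mailing rate under 50%:

```python
# Hypothetical single-mailing survey: the counts are invented for illustration.
mailed = 400        # questionnaires sent to the original sample
returned = 180      # completed questionnaires received back

response_rate = returned / mailed          # proportion of the sample responding
nonresponse_rate = 1 - response_rate       # the potential source of nonresponse error

print(f"response rate: {response_rate:.0%}")        # response rate: 45%
print(f"nonresponse rate: {nonresponse_rate:.0%}")  # nonresponse rate: 55%
```

A follow-up mailing would be evaluated the same way, with the additional returns added to the numerator.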

Question form

If a survey includes open-ended questions, face-to-face or telephone interviewing is often

preferable, because interviewers can, in a standardized way, probe incomplete or ambiguous

answers to ensure the usefulness and comparability of data across respondents.

Question content

If the issues under investigation are sensitive, self-administered questionnaires may provide

respondents with a greater sense of privacy and may therefore elicit more candid responses than

telephone interviews and face-to-face interviews (e.g., Bishop & Fisher, 1995; Cheng, 1988;

Wiseman, 1972).

Questionnaire Length

Face-to-face data collection permits the longest interviews, an hour or more. Telephone

interviews are typically quite a bit shorter, usually lasting no more than 30 min, because

respondents are often uncomfortable staying on the phone for longer. With self-administered

questionnaires, response rates typically decline as questionnaire length increases, so they are

generally kept even shorter.

Length of data collection period


Distributing questionnaires by mail requires significant amounts of time, and follow-up mailings

to increase response rates further increase the overall turnaround time. Similarly, face-to-face

interview surveys typically require a substantial length of time in the field. In contrast,

telephone interviews can be completed in very little time, within a matter of days.

Availability of staff and facilities

Self-administered mail surveys require the fewest facilities and can be completed by a small

staff. Face-to-face or telephone interview surveys are most easily conducted with a large staff of

interviewers and supervisors. And ideally, telephone surveys are conducted from a central

location with sufficient office space and telephone lines to accommodate a staff of interviewers,

which need not be large.

The present study focuses on understanding the perspectives on sex education of two different generations: one sample ranges in age from 19 to 25 years and the other from 35 to 45 years.


In the sex education survey, quota sampling, a non-probability method, is used. The aim of quota sampling is to draw a sample that reflects the proportions of the population across different categories, or quotas (e.g., gender, age, ethnicity). A non-probability sample does not involve random selection, and its methods are not based on the rationale of probability theory. Non-probability sampling is used when we want to study particular groups, or when a pioneering study aims to identify broad trends in a population; it is also chosen because of the practical contingencies of time, money and energy.
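The quota logic described above can be sketched as follows. This is a minimal illustration only: the population data, quota sizes, and function names are invented for the example, not taken from the present study.

```python
import random

# Hypothetical population tagged with the two age cohorts of the study.
population = (
    [{"id": i, "age_group": "19-25"} for i in range(60)]
    + [{"id": 60 + i, "age_group": "35-45"} for i in range(40)]
)

def quota_sample(people, quotas, seed=42):
    """Fill each category's quota from whichever cases come to hand first
    (convenience selection), rather than by random probability sampling."""
    rng = random.Random(seed)
    shuffled = list(people)
    rng.shuffle(shuffled)  # stands in for "whoever is encountered first"
    filled = {group: 0 for group in quotas}
    sample = []
    for person in shuffled:
        group = person["age_group"]
        if group in quotas and filled[group] < quotas[group]:
            sample.append(person)
            filled[group] += 1
    return sample

# Quotas reflecting the two cohorts, 15 respondents each.
sample = quota_sample(population, {"19-25": 15, "35-45": 15})
print(len(sample))  # 30
```

Note that because selection within each quota is not random, the resulting sample carries no probability-theory guarantee of representativeness; the quotas only ensure the desired category proportions.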


Like all social research, surveys can be conducted in ethical or unethical ways.

1. Invasion of privacy: A major ethical issue in survey research is the invasion of privacy.

Survey researchers can intrude into a respondent’s privacy by asking about intimate

actions and personal beliefs. Respondents have a right to privacy and can decide when

and to whom they want to reveal personal information. Respondents are likely to share

personal information in a comfortable context with mutual trust, when they believe

serious answers are required for legitimate research purposes and when they believe

answers will remain confidential. Researchers should treat all respondents with dignity

and reduce discomfort. They are also responsible for protecting the confidentiality of the data respondents provide.


2. Informed consent: The second issue involves voluntary participation by respondents.

Respondents agree to answer questions and can refuse to participate at any time. They

give “informed consent” to participate in the research. Because researchers depend on respondents’ voluntary cooperation, they need to ask well-developed questions in a sensitive way, treat respondents with respect, and be very sensitive to confidentiality.

3. Pseudo surveys: The third ethical issue is the exploitation of surveys and pseudo surveys.

Because of the popularity of surveys, some people use them to mislead others. A pseudo survey occurs when someone uses the survey format to persuade a person to engage in an activity or perform a particular task, with no interest in gathering any information from the respondent.

4. Poorly designed surveys: The fourth ethical issue occurs when people misuse survey results

or use poorly designed or purposely rigged surveys. Those who design surveys may lack

sufficient training to conduct a legitimate survey. Policy decisions made based on

careless or poorly designed surveys may result in waste and human hardship. Such

misuse makes it important that legitimate researchers conduct methodologically rigorous

survey research.

5. Media misuse: A final ethical issue arises when the mass media report survey results without revealing the quality of the surveys, which permits abuse. Researchers need to include details about how a survey was conducted, both to reduce the misuse of survey research and to raise questions about surveys that lack such information. Over 88 percent of reports on surveys in the mass media fail to reveal the researcher who conducted the survey, and only 18 percent provide details on how the survey was conducted (Neuman, 2006).





References

McGaughey, F. (2018). Irish Travellers and Teenage Pregnancy: A Feminist, Cultural, Relativist Analysis. In A. Kamp & M. McSharry (Eds.), Re/Assembling the Pregnant and Parenting Teenager: Narratives from the Field (pp. 173-194). Bern: Peter Lang AG.

Marsman, J., & Herold, E. (1986). Attitudes toward Sex Education and Values in Sex

Education. Family Relations, 35(3), 357-361. doi:10.2307/584361

Anandhi, S. (2007). Sex Education Conundrum. Economic and Political Weekly, 42(33), 3367-3369.

Oettinger, G. (1999). The Effects of Sex Education on Teen Sexual Activity and Teen

Pregnancy. Journal of Political Economy, 107(3), 606-644. doi:10.1086/250073

Pearson, C. (1979). Sex Education: A Survey of Parents with Deaf Adolescents.

American Annals of the Deaf, 124(6), 760-764.

Scales, P., & Kirby, D. (1983). Perceived Barriers to Sex Education: A Survey of

Professionals. The Journal of Sex Research, 19(4), 309-326.

Fitz-Gerald, D., & Fitz-Gerald, M. (1976). Sex Education Survey of Residential Facilities

for the Deaf. American Annals of the Deaf, 121(5), 480-483.

Malfetti, J., & Rubin, A. (1968). Sex Education: Who Is Teaching the Teachers? The

Family Coordinator, 17(2), 110-117. doi:10.2307/583248

Tjaden, P. (1988). Pornography and Sex Education. The Journal of Sex Research, 24,

208-212.

Wiechmann, G., & Ellis, A. (1969). A Study of the Effects of "Sex Education" on

Premarital Petting and Coital Behavior. The Family Coordinator, 18(3), 231-234.


Sabia, J. (2006). Does Sex Education Affect Adolescent Sexual Behaviors and Health?

Journal of Policy Analysis and Management, 25(4), 783-802.

Bass, M. (1974). Sex Education for the Handicapped. The Family Coordinator, 23(1), 27-

33. doi:10.2307/582520

Drazenovich, G. (2015). Queer Pedagogy in Sex Education. Canadian Journal of

Education / Revue Canadienne De L'éducation, 38(2), 1-22.


Buston, K., Wight, D., & Scott, S. (2001). Difficulty and Diversity: The Context and

Practice of Sex Education. British Journal of Sociology of Education, 22(3), 353-368.


Kirkendall, L., & Hamilton, A. (1954). Current Thinking and Practices in Sex Education.

The High School Journal, 37(5), 143-148.

Libby, R. (1970). Parental Attitudes toward High School Sex Education Programs. The

Family Coordinator, 19(3), 234-247. doi:10.2307/582026

Lavrakas, P. J., Krosnick, J. A., & Visser, P. S. (2013). Survey Research.

Leepson. (2002). Sex Education.

Lickona, T. (1968). Where Sex Education Went Wrong. Educational Leadership, 84-89.

Mare, J. D. (2011).

Neuman, W. L. (2011). Social Research Methods: Qualitative and Quantitative Approaches (7th ed., Chapter 11). Pearson Education/Allyn & Bacon.

Zimmerman, J. (2015). Too Hot to Handle: A Global History of Sex Education. Princeton: Princeton University Press.

Research Design and Data Collection Part II. (n.d.).

Slyer. (2000).