Objectives of the Session
At the end of the session, the participants should be able to:
- Discuss the importance of the curriculum guide
- Explain how to use the curriculum guide in planning for instruction
- Explore the Grade 12 Inquiries, Investigations and Immersion curriculum guide and instructional materials
- Give sample learning activities for the learning competencies of Inquiries, Investigations and Immersion
DEPARTMENT OF EDUCATION
What is research?
Research is:
- A study/investigation
- A scientific investigation
- A study or investigation done systematically, empirically, scientifically, and logically for the purpose of generating knowledge and helping solve situational problems
Characteristics of the Research Process
- Systematic – well-defined designs, an orderly procedure
- Empirical – measurable and observable things or phenomena that you can put in print on the basis of your senses
- Scientific – can be tested
- Logical – justifiable and acceptable by reason
System Framework of Research
Input: the skills and abilities necessary in conducting research/scientific investigation
Output:
- Theories/principles – pure/basic research (idealistic)
- Solutions to problems – social responsibility
Aims at developing a person to be:
- Sensitive to surroundings
- Systematic
- Critical
- Objective
- Logical
- Rational
- Analytical
ENVIRONMENT: social, political, economic, educational, technological, physical
Critical Researcher – has the "third eye"; seeks the truth in what he reads, does not take it hook, line, and sinker, and does not jump to conclusions. Treats opinions as opinions.
Begin with a
TOPIC in
mind
TOPIC
- Relevant
- Significant
- Feasible
Brainstorming for Research Topics
1. Scheduling
2. Team teaching
3. Evaluation of learning, reporting to parents
4. Student regulation
5. Learning styles
6. Peer tutoring
7. Field trips
8. School facilities
9. Extracurricular programs
10. Uses of ICT in instruction
11. Stress management
12. Guidance-counseling programs
I. Brainstorming for Research Topics
Key Questions:
a. What do I know about the topic?
b. What should I know about the topic?
c. What do previous studies say about my chosen topic?
II. Identifying the Problem
and Asking the Question
Example:
LITERATURE REVIEW
STEP 1a: Literature Review: The Research
Powerhouse
Analyze, Argue, Assess, Assert, Assume, Claim, Compare & contrast, Conclude, Criticize, Debate, Defend, Define, Demonstrate, Determine, Differentiate, Discuss, Distinguish, Emphasize, Evaluate, Examine, Exhibit, Expand, Explain, Identify, Illustrate, Imply, Indicate, Judge, Justify, Narrate, Outline, Persuade, Propose, Question, Refer to, Reflect, Relate to, Report, Review, Suggest, Summarize
Table 3. Forming critical sentences using signaling words
As a consequence of x then y
Consequently, …
Hence …
Therefore, …
Thus …
In short …
In effect …/ It follows that …
This indicates that …
This suggests that …
This points to the conclusion that …
The most obvious explanation is …
This means that …
Finally, …
Source: Brown and Keeley (2004)
Writing the Literature
Review
Writing the Theoretical Background
(The SEC Approach)
LITERATURE REVIEW
Rule 1: Synoptic Dimension
The Need for Dendrogramming
Literature 2 – Finding 1, Finding 2, Finding 3, Finding 4
Literature 3 – Finding 1, Finding 2, Finding 3
Literature 4 – Finding 1, Finding 2, Finding 3
Literature 5 – Finding 1, Finding 2, Finding 3, Finding 4, Finding 5
Example write-up (CF)
The conceptual framework underlying this study is
anchored on the concepts of research capability,
workload, and research productivity.
Research Capability
Research capability is simply the capability of the
faculty to undertake research. All the resources or
inputs which enable the faculty member to conduct
research are considered as components of research
capability (Deza, 1999; Banaag, 1994). Salazar-
Clemena and Almonte-Acosta (2007) enumerated
indicators of research capability which include budget
for research, the ability to obtain research grants, the
provision of research infrastructure, the ability to
collaborate with and access to research professionals,
and the presence of rules and procedures on the
granting of rewards for research.
Example write-up (CF)
In this study, research capability is described in terms
of technical skills in doing research, skills in
conceptualizing a research problem, knowledge and
skills in designing the research plan, knowledge and
skills on research data processing, and knowledge and
skills in writing the research paper. Technical skills
include written communication (expressing one’s
ideas and arguments using language rules, presenting
and packaging ideas effectively); oral communication
(expressing one’s ideas and arguments using
language rules, presenting and packaging ideas
effectively); critical/analytical thinking (evaluating ideas, analyzing the arguments of others); problem-solving; research organization (parts and format of a research paper); online search, use of electronic resources, databases, and search engines; use of computer commands/programs/software; and acknowledging or citing sources/cross-referencing.
Example write-up (CF)
Determinants of Research Productivity
Previous foreign and local studies have revealed
that the reasons for low research productivity
among faculty members are poor or lacking research skills (Anunobi & Emerole, 2008; Iqbal,
2011); lack of research funds (Anunobi &
Emerole, 2008; Iqbal, 2011; Mahilum, 2010);
and heavy workload or teaching overload (Iqbal,
2011; Mahilum, 2010; Mordeno, 2002). Iqbal
(2011) added performance of administrative
duties along with academic duties, nonexistence
of research leave, negative attitude of the faculty
towards research and absence of professional
journals while Anunobi & Emerole (2008)
included time constraints as impediments to
research publication.
Example write-up (CF)
Determinants of Research Productivity
Predictors of research productivity include teachers' training or research orientation (Finkelstein, 1984; Banaag, 1994; Mordeno, 2002); academic rank (Flanigan et al., 1988; Banaag, 1994); highest educational attainment (Finkelstein, 1984; Flanigan et al., 1988; Banaag, 1994); and sufficient time allocated to research (Finkelstein, 1984).
Example write-up (CF)
While several studies have investigated the correlates of research productivity, studies on research capability in terms of teachers' specific research skills have been lacking. To this end, the researchers were motivated to conduct this research, which explored teachers' levels of proficiency in the different skills that determine their capability in doing research and how this capability is associated with research productivity. Workload, in terms of hours of work and number of teaching preparations, was also investigated to verify its impact on faculty research productivity. In the end, it is hoped that this research will contribute to the existing literature on the determinants of research productivity.
Remember!
- Read enough background material to discuss the research and the theory, giving a reasonably complete account of our knowledge of the topic.
- Present conclusions that are based on data and theory, including the conflicting views of different researchers.
- Make it easy for the reader to understand how all of the studies interrelate.
Writing the Introduction
(The TIOC Approach)
Presenting Statistics
Plagiarism
Table 6.1. Basic Citation Styles
Type of citation | First citation in text | Subsequent citations in text | Parenthetical, first citation | Parenthetical, subsequent citations
One work by one author | Walker (2007) | Walker (2007) | (Walker, 2007) | (Walker, 2007)
One work by two authors | Walker and Allen (2004) | Walker and Allen (2004) | (Walker & Allen, 2004) | (Walker & Allen, 2004)
One work by five authors | Walker, Allen, Bradley, Ramirez, and Soo (2008) | Walker et al. (2008) | (Walker, Allen, Bradley, Ramirez, & Soo, 2008) | (Walker et al., 2008)
References
1. Research Design
A research design is a plan or strategy for answering the research problem and controlling variance for validity. It is the overall plan for the conduct of the investigation.
Hence, substantively a design is intended to answer the problem, and technically it provides control for validity.
Understanding Ways to Collect
Data
1. Research Design
Essentially, research designs may be classified into two (2) categories on the basis of maximum control for validity:
1. Non-design or non-experimental (descriptive)
2. True design or experimental design
Experimental Research
Use matching when necessary
Use subjects as their own controls (treat the same group first in the control condition and then in the treatment, OR use a pretest/posttest on the same group)
Use analysis of covariance to statistically equate nonequivalent groups
Experimental Research
Weak Designs (Pre-Experimental Designs)
Experimental Research
(Group Designs)
Pre-Experimental Designs
Do not adequately control for the problems
associated with loss of external or internal
validity
Cannot be classified as true experiments
Often used in exploratory research
Three Examples of Pre-Experimental Designs
◦ One-Shot Design
◦ One-Group Pretest-Posttest Design
◦ Static Group Design
One-Shot Design
A.K.A. – after-only design
A single measure is recorded after the treatment
is administered
Study lacks any comparison or control of
extraneous influences
No measure of test units not exposed to the
experimental treatment
May be the only viable choice in taste tests
Diagrammed as: X O1
One-Group Pretest-Posttest
Design
Subjects in the experimental group are
measured before and after the treatment
is administered.
No control group
Offers comparison of the same individuals
before and after the treatment (e.g.,
training)
If time between 1st & 2nd measurements is
extended, may suffer maturation
Can also suffer from history, mortality, and
testing effects
Diagrammed as O1 X O2
Static Group Design
A.K.A., after-only design with control group
Experimental group is measured after being exposed to the
experimental treatment
Control group is measured without having been exposed to
the experimental treatment
No pre-measure is taken
Major weakness is lack of assurance that the groups were
equal on variables of interest prior to the treatment
Diagrammed as: Experimental Group X O1
Control Group O2
Pretest-Posttest Control
Group Design
A.K.A., Before-After with Control
True experimental design
Experimental group tested before and after
treatment exposure
Control group tested at same two times without
exposure to experimental treatment
Includes random assignment to groups
Effect of all extraneous variables assumed to be
the same on both groups
Do run the risk of a testing effect
Diagrammed as
◦ Experimental Group: R O1 X O2
◦ Control Group: R O3 O4
Effect of the experimental treatment equals
(O2 – O1) -- (O4 – O3)
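The effect computation above can be checked with a quick sketch; the group means below are invented purely for illustration:

```python
# Hypothetical pretest/posttest means (all numbers invented for illustration).
O1, O2 = 52.0, 71.0  # experimental group: pretest, posttest
O3, O4 = 53.0, 58.0  # control group: pretest, posttest

# Effect of the experimental treatment = (O2 - O1) - (O4 - O3)
effect = (O2 - O1) - (O4 - O3)
print(effect)  # 19.0 gain minus 5.0 gain = 14.0
```

Subtracting the control group's gain removes change that would have happened anyway (history, maturation), leaving the treatment's contribution.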
Diagrammed as
◦ Experimental Group: R X O1
◦ Control Group: R O2
Effect of the experimental treatment equals
(O1 – O2)
Example
◦ Assume you manufacture an athlete’s foot remedy
◦ Want to demonstrate your product is better than
the competition
◦ Can’t really pretest the effectiveness of the remedy
Solomon Four-Group Design: a true experimental design
Combines pretest-posttest with control group
design and the posttest-only with control group
design
Provides means for controlling the interactive
testing effect and other sources of extraneous
variation
Does include random assignment
Correlation Research
(Predicting Outcomes
Through Association)
Explanatory studies examine relationship to
identify possible cause/effect
Relationship might or MIGHT NOT mean
causation
For causation: 1) A before B; 2) A and B
related; 3) Rule out other causes of B (need
experiment)
Prediction studies identify predictors of a criterion (e.g., HS GPA predicting college GPA)
The stronger the correlation the better the
prediction
Complex Correlation Techniques, such as multiple
regression allow use of several predictors for one criterion
Coefficient of multiple correlation (R) gives the strength of the correlation between the predictors and the criterion
Coefficient of determination (r²) is the proportion of variance that x and y share (how much they vary together)
Discriminant function analysis is for a non-quantitative criterion (predicting which group someone will be in)
Other techniques also used (factor analysis, path analysis,
structural modeling)
Correlation Research
(Predicting Outcomes Through
Association)
Problem selection – usually "Are x and y related?" or "How well does p predict c?"
Sample – random selection of at least 30
Measurement – need quantitative data
Design/Procedures – need two measures on each
subject
Data collection – usually both measures close in
time
Data analysis – correlation coefficient, r, and plot
(r is -1 to +1, and the closer to plus or minus 1, the
stronger the relationship)
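The correlation coefficient r described above can be computed without a statistics package; a minimal pure-Python sketch, with invented paired scores:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient for two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

# Invented paired measures (e.g., HS GPA and college GPA for 5 students).
hs = [2.0, 2.5, 3.0, 3.5, 4.0]
college = [2.2, 2.4, 3.1, 3.4, 3.9]
print(round(pearson_r(hs, college), 3))  # close to +1: strong relationship
```

Note the two-measures-per-subject requirement from the design step: each pair (hs[i], college[i]) comes from the same subject.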
Correlation Research
(Predicting Outcomes Through
Association)
General guidelines:
+.75 to +1.0  Very strong relationship
+.50 to +.75  Moderately strong relationship
+.25 to +.50  Weak relationship
 .00 to +.25  Little to no relationship
Need .50 or better for a prediction of any use, and .65 for accurate predictions
Reliability coefficients should be .70 or higher
Validity coefficients should be .50 or higher
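The rough strength guidelines above can be expressed as a small helper; the labels and boundaries follow the slide, and classifying by absolute value reflects that a strong negative correlation predicts just as well as a strong positive one:

```python
def strength(r):
    """Classify |r| using the rough guidelines above (boundaries per the slide)."""
    a = abs(r)
    if a >= 0.75:
        return "very strong"
    if a >= 0.50:
        return "moderately strong"
    if a >= 0.25:
        return "weak"
    return "low to none"

print(strength(0.77))   # very strong
print(strength(-0.40))  # weak: the sign shows direction, |r| shows strength
```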
Correlation Research
Causal Comparative Research
(Ex Post Facto)
Determines cause (or effect) that has occurred and
looks for effect (or cause) from it
Start w/ differences in groups and examine them
Example: differences in the math abilities of male and female students
No random assignment to treatment (it already
occurred)
Associational like correlation but primarily
interested in cause/effect
IV either cannot (ethnicity) or should not
(smoking) be manipulated
Causal Comparative versus
Correlational Research
Often an alternative to experimental (faster
and cheaper)
Serious limitation is lack of control over
threats to internal validity
Need to remember that the cause may actually be the effect, or the two may only be related, with some other variable (a lurking variable) as the true cause
Both are associational (looking for relationship)
Both are often prelude to experiments
Neither involves manipulation of variables
Causal Comparative works with different groups;
correlation examines one group on different
variables
Correlation is measured w/ coefficient while
Causal comparative compares
means/medians/percents of group members
Survey Research
(Steps to conduct survey research)
Select the sample (randomly, but check to see
respondents are qualified to answer)
Pilot test can indicate likely response rate and
problems with data collection or sample
Prepare instrument (questionnaire and
interview schedule)
Appearance important - look short and easy
Clarity in questions is essential
Survey Research
(Steps to conduct survey research)
Question types (same questions need to be asked
of all respondents)
Closed ended (multiple choice) - easier to
complete, score, analyze
Categories must be all inclusive, mutually
exclusive
Open ended - easy to write, hard to analyze and
hard on respondents
Survey Research
(Steps to conduct survey research)
Population
- Nonrandom/purposive – troubles with representativeness/generalizing
- Names in a hat or a table of random numbers
- Multistage sampling
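Random and multistage selection can be sketched with the standard library; the schools, pupils, and stage sizes below are all invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Hypothetical frame: 10 schools, each with 40 pupils (names invented).
schools = {f"School-{i}": [f"S{i}-pupil-{j}" for j in range(40)]
           for i in range(10)}

# Stage 1: randomly select 3 schools ("names in a hat").
chosen_schools = random.sample(list(schools), k=3)

# Stage 2: randomly select 5 pupils from each chosen school.
sample = [p for s in chosen_schools for p in random.sample(schools[s], k=5)]
print(len(sample))  # 3 schools x 5 pupils = 15
```

`random.sample` draws without replacement, which is what "names in a hat" amounts to.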
Convenience Sampling
Using personal judgment to select a sample that should be representative (i.e., this faculty seems to represent all teachers) OR selecting those who are known to have the needed info (interested in talking only to those in power)
Purposive Sampling
Sample size affects the accuracy of representation
A larger sample means less chance of error
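Why a larger sample means less chance of error can be seen from the standard error of the mean, which shrinks as the square root of n grows (the population SD is fixed at 10 here purely for illustration):

```python
import math

# Standard error of the mean: se = sd / sqrt(n).
sd = 10.0
errors = {n: sd / math.sqrt(n) for n in (25, 100, 400)}
for n, se in errors.items():
    print(n, se)  # quadrupling n halves the standard error
```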
Sampling
Representative sample is required (not
the same thing as variety in a sample)
High participation rate is needed
Sampling
Data Collection Procedure
This represents the logical procedure in
collecting and treating data to answer the research
question and the hypothesis:
The usual order of presentation of this
section is chronological, for instance:
Instrumentation
(Measurement)
• Validity – measures what it is supposed to measure (accurate)
• Reliability – a measure that consistently
gives same readings (repeatable)
Instrumentation
• Objectivity – absence of subjective
judgments (need to eliminate subjectivity
in measuring)
• Usability of instruments
◦ Consider ease of administration; time
to administer; clarity of directions;
ease of scoring; cost; reliability/validity
data availability
Instrumentation
(Classifying Data Collection
Instruments)
• By the group providing the data
◦ Researcher instruments (researchers
observes student performance and
records)
◦ Subject instruments (subjects record
data about themselves, such as taking
test)
◦ Others/Informants (3rd party reports
about subjects such as teacher rates
students)
Instrumentation
(Classifying Data Collection
Instruments)
• By where instrument came from
◦ Preference is for existing
◦ Can develop your own (requires time, effort, skill, testing)
• By response type
◦ Written response – preferred – objective
tests, rating checklist
◦ Performance instruments – measure
procedure, product
Instrumentation (Examples of Data Collection Instruments)
• Researcher Completed Instruments
◦ Rating scales (mark a place on a continuum
for example numeric rating 1=poor to 5=
excellent)
◦ Interview schedules (complete scales as
interview takes place; use precoding; beware
of dishonesty)
Instrumentation (Examples of Data Collection Instruments)
• Researcher Completed Instruments
◦ Tally sheets (for counting/recording
frequency of behavior, remarks, activities,
etc.)
◦ Flow charts (to record interactions in a room)
◦ Anecdotal records (need to be specific and
factual)
◦ Time/Motion logs (record what took place and
when)
• Item Formats
◦ Selection items or closed response (T/F;
Yes/No; Right/Wrong; Multiple choice)
◦ Supply items or open ended (short answer;
essay)
◦ Unobtrusive measures (no intrusion into
event… usually direct observation and
recording)
Instrumentation
• Types of Scores
◦ Raw scores (initial score or count
obtained…w/out context)
◦ Derived scores (raw scores translated to
meaningful usage with standardized process)
Age/Grade equivalence; Percentile ranks;
Standard scores (how far a score is from a given
reference point, i.e. z and T scores);
Which to use depends on the purpose; usually
standard scores used
Instrumentation
• Norm-Referenced vs. Criterion-Referenced Tests
◦ Norm-referenced scores give a score relative to a reference group (the norm group)
◦ Criterion-referenced scores determine if a criterion has been mastered
◦ Criterion-referenced tests are used to improve instruction since they indicate what students can or cannot do, or do or do not know
Instrumentation
(Measurement Scales)
• Nominal (in name only)
◦ Numbers are only name tags; they have no mathematical value (gender: 1 = male, 2 = female OR race: 1 = Black, 2 = White, 3 = other)
• Ordinal (in name, plus relative order)
◦ Numbers show relative position, but not
quantity (grade level, finishing place in a
race)
Instrumentation
(Measurement Scales)
• Interval (in name w/ order AND equal distance)
◦ Numbers show quantity in equal intervals, but an
arbitrary zero (can have negative numbers;
degrees C or F)
• Ratio (in name, w/ order, eq. distance AND absolute
zero)
◦ Numbers show quantity with base of zero where
zero means the construct is absent
• Higher levels more precise…collect data at highest
level possible; some statistics only work with higher
level data
Instrumentation
(Preparing for Data Analysis)
• Scoring data – use exact same format for
each test and describe scoring method in
text
• Tabulating and Coding – carefully transfer
data from source documents to computer
◦ Give each test an ID number
◦ Any words must be coded with numerical
values
◦ Report codes in text of research report
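The tabulating-and-coding step can be sketched as a simple lookup; the ID numbers, code tables, and responses below are invented, and the codes would be reported in the text of the research report:

```python
# Invented code tables: every word value gets a numerical code.
SEX_CODES = {"male": 1, "female": 2}
RESPONSE_CODES = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
                  "agree": 4, "strongly agree": 5}

# Raw answers transferred from source documents, each with an ID number.
raw = [
    {"id": 1, "sex": "female", "q1": "agree"},
    {"id": 2, "sex": "male", "q1": "strongly agree"},
]

# Coded records ready for computer analysis.
coded = [{"id": r["id"],
          "sex": SEX_CODES[r["sex"]],
          "q1": RESPONSE_CODES[r["q1"]]} for r in raw]
print(coded)
```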
Types of instruments
◦ Cognitive – measuring intellectual processes
such as thinking, memorizing, problem
solving, analyzing, or reasoning
Measurement Instruments
Types of instruments (continued)
◦ Affective – assessing individuals’ feelings,
values, attitudes, beliefs, etc.
Typical affective characteristics of interest
◦ Values – deeply held beliefs about ideas, persons, or
objects
◦ Attitudes – dispositions that are favorable or unfavorable
toward things
◦ Interests – inclinations to seek out or participate in
particular activities, objects, ideas, etc.
◦ Personality – characteristics that represent a person’s
typical behaviors
Measurement Instruments
Types of instruments (continued)
◦ Affective (continued)
Scales used for responding to items on affective tests
◦ Likert
Positive or negative statements to which subjects
respond on scales such as strongly disagree, disagree,
neutral, agree, or strongly agree
◦ Semantic differential
Bipolar adjectives (i.e., two opposite adjectives) with a
scale between each adjective
Dislike: ___ ___ ___ ___ ___ :Like
◦ Rating scales – rankings based on how a subject would
rate the trait of interest
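Scoring Likert items can be sketched as follows; the 1-5 coding is the usual convention, and reverse-scoring negative statements (so a high total always means a favorable attitude) is a common practice rather than something the slide specifies:

```python
# Usual 1-5 coding of the Likert response scale.
SCALE = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
         "agree": 4, "strongly agree": 5}

def score(response, negative=False):
    """Score one item; negative statements are reverse-scored on the 1-5 scale."""
    v = SCALE[response]
    return 6 - v if negative else v

# Invented responses: (response, is_negative_statement) per item.
answers = [("agree", False), ("strongly disagree", True), ("neutral", False)]
total = sum(score(r, neg) for r, neg in answers)
print(total)  # 4 + 5 + 3 = 12
```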
Measurement Instruments
Finding the Answers to the
Research Question
1. Interpretation of Data
Quantitative Analysis
For descriptive problems that require
finding out “what is,” as the term implies,
descriptive statistical analysis can be
used to describe the data. The mean,
median, mode and standard deviation are
the main descriptive statistical treatments
applicable. The mean or median is used
to indicate the average while the
standard deviation provides the
variability of the data/scores in the
sample.
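All of these descriptive treatments are available in Python's standard library; the scores below are invented for illustration:

```python
import statistics

# Hypothetical scores for a descriptive ("what is") problem.
scores = [78, 82, 85, 85, 90, 74, 88]

mean = statistics.mean(scores)      # the average
median = statistics.median(scores)  # middle value
mode = statistics.mode(scores)      # most frequent value
sd = statistics.stdev(scores)       # sample standard deviation (variability)
print(mean, median, mode, round(sd, 2))
```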
Descriptive Statistics
Sample of Computer Output
Sample Interpretation
Age      f      %
30-32    5     6.25
27-29   43    53.75
24-26   29    36.25
21-23    3     3.75
Total   80   100.00
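The percentage column of a frequency table like this one is just each count over the total; the counts below are taken from the table above:

```python
# Frequencies from the table above.
bins = {"30-32": 5, "27-29": 43, "24-26": 29, "21-23": 3}

total = sum(bins.values())
table = {rng: (f, round(100 * f / total, 2)) for rng, f in bins.items()}
for rng, (f, pct) in table.items():
    print(rng, f, pct)
```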
Illustration 2.
◦ Results in the table show that most of the respondents were within the age range 27-29 (43, or 53.75%). Moreover, the combined ranges 24-26 and 27-29 composed almost 90% of the respondents.
Interpretation
Descriptive Statistics Used in
Evaluation Studies
Illustration
EVALUATION OF THE CONTEXTUAL TEACHING MATERIALS BY EXPERTS
Contents                                               Mean   Verbal Des.
Concept definition                                      4.6   Excellent
Presentation of concepts                                4.6   Excellent
Sufficiency of problem scenarios and examples           5.0   Excellent
Sufficiency of questions to ignite critical thinking    4.8   Excellent
Writing of the topics within the level of the
  student's understanding                               4.8   Excellent
Interpret results in the context of the study
The concepts in the CTL were presented in
real situations that are familiar to the
students (X=4.6). This is the basic principle
strictly adhered to in a contextual teaching
approach, thus, if the materials fail in this
aspect, there is no contextual approach.
Since the experts judged the criterion as
excellent, it only means that the CTL
materials were successful in translating the
concepts to true-to-life experiences.
Inferential
Statistics
Correlation
Techniques
Bivariate Analysis
Interpreting correlation
coefficient
Illustration
Subjects being related        Pearson's r   Significance
Math vs. Math (NEAT)            0.77095     significant
Sci vs. Sci (NEAT)              0.79908     significant
Eng vs. Eng (NEAT)              0.69801     significant
HEKASI vs. HEKASI (NEAT)        0.23142     not sig.
It is necessary to establish statistical significance by using the critical value; however, it is much better to also determine whether the computed Pearson's r denotes a high correlation between the variables concerned, because a statistically significant correlation may still be negligible or too low to matter. Computer statistical outputs provide the probability of alpha (the p-value), which indicates the probability of committing the error of rejecting the null hypothesis when it is true.
As shown in the table, math
achievement is significantly related to
the result of the NEAT in mathematics
(r=.77). This means that the NEAT
results in mathematics relate to the
math achievement of the students in
school. If a pupil performs well in school mathematics, he is likely to score high in the NEAT.
Sample Interpretation
Test of Difference
Between Groups
The Pretest/Posttest control
group Design
Experimental grp. R O1 X O2
Control grp. R O3 O4
Sample of T-test Output
Pair 1 (PRE vs. POST): mean difference = 33.70, SD = 2.90, t = 61.05, df = 28, Sig. = .000
Independent samples: Group Statistics
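A paired-samples t statistic like the one in the output can be computed by hand from the per-pupil gains; the pretest/posttest scores below are invented, not the study's data:

```python
import math
import statistics

# Invented pretest/posttest scores for the same 8 pupils.
pre  = [30, 32, 35, 31, 33, 34, 36, 30]
post = [61, 60, 64, 62, 63, 66, 65, 59]

d = [b - a for a, b in zip(pre, post)]    # per-pupil gain scores
mean_d = statistics.mean(d)               # mean difference
sd_d = statistics.stdev(d)                # SD of the differences
t = mean_d / (sd_d / math.sqrt(len(d)))   # paired-samples t, df = n - 1
print(round(t, 2))
```

A large t with a Sig. below the targeted alpha leads to rejecting the null hypothesis of no pretest-to-posttest difference.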
Interpretation
Comparing 3 or
More Groups By
Analysis of
Variance
Illustrating an ANOVA Table
Illustration 1
Illustration 2
Is there an interaction between method
of teaching and the ability of the
students?
Solution
Use two-way ANOVA to compare
between groups and determine
interaction between variables.
Sample Problem
Is Constructivist Strategy In Teaching
Effective?
SV            SS      df    MS      F     F Prob.
Group        115.70    3   38.56   6.17   0.029
Math Bck      35.00    2   17.50   2.80   0.115
Interaction    7.10    6   12.85   2.05   0.045
Error        150.10   24    6.25
Total        377.90   35
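As a simpler companion to the two-way table above, a one-way ANOVA F can be computed by hand: partition the total variation into between-group and within-group sums of squares, divide each by its df, and take the ratio. The three groups of scores below are invented:

```python
import statistics

# Three invented groups of scores (e.g., teaching methods A, B, C).
groups = [[70, 72, 68, 71], [75, 78, 77, 74], [69, 70, 71, 68]]

scores = [x for g in groups for x in g]
grand = statistics.mean(scores)
k, n = len(groups), len(scores)

# Partition the variation into between-group and within-group sums of squares.
ss_between = sum(len(g) * (statistics.mean(g) - grand) ** 2 for g in groups)
ss_within = sum(sum((x - statistics.mean(g)) ** 2 for x in g) for g in groups)

ms_between = ss_between / (k - 1)   # df = k - 1
ms_within = ss_within / (n - k)     # df = n - k (the error term)
F = ms_between / ms_within
print(round(F, 2))
```

A large F (p-value below the targeted alpha) indicates that at least one group mean differs from the others.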
To interpret the results, observe the probability of alpha (the p-value). This indicates whether the result is significant or not. Since alpha is the probability of rejecting the Ho when it is true, the p-value must be less than the targeted alpha.
Thus, the table shows that the interaction is significant. This will be the basis for answering the problem. If the interaction were not significant, the researcher would instead examine the significance of the row or column differences between the means.
Since the interaction effect is significant, the researcher can pinpoint the observed differences in the conclusion. The higher means can be used as the basis for the conclusions.