
SAMPLING ERROR AND SELECTING INTERCODER RELIABILITY SAMPLES FOR NOMINAL CONTENT CATEGORIES

By Stephen Lacy and Daniel Riffe


This study views intercoder reliability as a sampling problem. It develops a formula for generating sample sizes needed to have valid reliability estimates. It also suggests steps for reporting reliability. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of items is representative of the pattern that would occur if all content items were coded by all coders.

Every researcher who conducts a content analysis faces the same question: How large a sample of content units should be used to assess the level of reliability? To an extent, sample size depends on the number of content units in the population and the homogeneity of the population with respect to variable coding complexity. Content can be categorized easily for some variables, but not for others. How does a researcher ensure that variations in degree of difficulty are included in the reliability assessment? As in most applications involving representativeness, the answer is probability sampling, assuring that each unit in the reliability check is selected randomly.¹ Calculating sampling error for reliability tests is possible with probability sampling, but few content analyses address this point.

This study views intercoder reliability as a sampling problem, which requires clarification of the term "population." Content analysis typically refers to a study's "population" as all potentially codable content from which a sample is drawn and analyzed. However, this sample itself becomes a "population" of content units from which a sample of test units is randomly drawn to check reliability. This article suggests that content samples need reliability estimates that represent this population. The resulting sample sizes will permit a known degree of confidence that the agreement in a sample of test units is representative of the pattern that would occur if all study units were coded by all coders.

Stephen Lacy is a professor in the Michigan State University School of Journalism, and Daniel Riffe is a professor in the E. W. Scripps School of Journalism at Ohio University. The authors thank Fred Fico for his comments and suggestions.


Background

Reproducibility reliability is the extent to which coding decisions can be replicated by different researchers.² In principle, the use of multiple independent coders applying the same rules in the same way assures that categorized content does not represent the bias of one coder. Research methods texts discuss reliability in terms of measurement error resulting from problems in coding instructions, failure of coders to achieve a common frame of reference, and coder mistakes.³ Few texts or studies address whether the content units tested represent the population of items studied.⁴ Often, reliability samples have been selected haphazardly or based on convenience (e.g., the first 50 items to be coded might be used).

Research texts vary in their approach to sampling for reliability tests. Weber's⁵ only pertinent recommendation is that "The best test of the clarity of category definitions is to code a small sample of the text." Stempel concludes that reliability estimates "should be based on several samples of content from the material in the study"⁶ and that a "minimum standard would be the selection of three passages to be coded by all coders."⁷ Wimmer and Dominick⁸ urge analysts to conduct a pilot study on a sample of the "content universe" and, assuming satisfactory results, then to code the main body of data. Then a subsample, "probably between 10% and 25%," should be reanalyzed by independent coders to calculate overall intercoder reliability. Kaid and Wadsworth⁹ suggest that "levels of reliability should be assessed initially on a subsample of the total sample to be analyzed before proceeding with the actual coding." How large a subsample? "When a very large sample is involved, a subsample of 5-7 percent of the total is probably sufficient for assessing reliability."

Most texts do not discuss reliability in the context of probability sampling and the resulting sampling error. Singletary has noted that reliability checks introduce sampling error when probability samples are used.¹⁰ Krippendorf argues that probability sampling to get a representative sample is not necessary.¹¹ Yet early inquiries into reliability testing did address probability sampling. Scott's¹² article introducing his pi included an equation accounting for sampling error, though that component was dropped from subsequent references to pi in statistics and content analysis texts. Cohen¹³ discussed sampling error while introducing kappa. An early article by Janis, Fadner, and Janowitz¹⁴ comparing the reliability of different coding schemes provided reliability coefficients with confidence intervals.

Schutz¹⁵ dealt with measurement error and sample size. He explored the impact of "chance agreement" on reliability measures; i.e., some coder agreements could occur by chance, though the existence of coding criteria reduces the influence chance could have.¹⁶ Schutz offered a formula that enabled a researcher to set a minimal acceptable level of reliability and then compute the level that must be achieved in a reliability test to account for chance agreements. The formula allows the researcher to be certain that the observed sample reliability level is high enough that, even if chance agreement could be eliminated,¹⁷ the "remainder" level of agreement would exceed the acceptable level. For example, if the minimal acceptable level of agreement is 80%, the researcher might need to achieve a level as high as 83% in the reliability test in order to control for chance agreement. Schutz incorporated sampling error into his formula, but the article concentrated on measurement error due to chance agreement.

Sampling Error and Estimating Sample Size: A Formula

The goal of the following analysis is to generate a formula for estimating simple random sample sizes for reliability tests. The formula can be used to generate samples with confidence intervals that tell researchers if the minimal acceptable reliability figure has been achieved. For example, if the reliability coefficient must equal or exceed .80 to be acceptable, the sample used must have a confidence interval that does not dip below .80. If, in a given test, the confidence interval does dip below .80, the researcher cannot conclude that the "true" reliability of the population equals or exceeds the minimal acceptable level.

FIGURE 1
Why Reliability Confidence Interval Uses a One-Tailed Test

[Figure: a continuum for the level of agreement in coding decisions, running from 0% to 100%, with the minimal acceptable level marked at 80% and a −5%/+5% confidence interval around the sample level of agreement; the area on the negative side of the interval is the relevant area for determining the acceptability of the reliability test.]

The reason for a one-tailed confidence interval is illustrated in Figure 1. The minimal acceptable agreement level is 80%, and the sample level of agreement is 90%. The resulting area of concern is the gray area between 90% and 80%, which involves the negative side of the confidence interval. A researcher's conclusion of acceptable reliability is not affected by whether the population agreement exceeds 5% on the positive side, because acceptance is based on a minimal standard, which would fall on the negative side of the interval.

For simplicity, this analysis uses "simple agreement" (total agreements divided by total decisions) with a dichotomous decision (the coders either agree or disagree). Survey researchers use the formula for the standard error of proportion to estimate a minimal sample size necessary to infer to the population at a given level of confidence. A similar procedure is used here. We start with the equation for the standard error of proportion and add the finite population correction (FPC). The FPC is used when the sample makes up 10% or more of the population. It reduces the standard error but is often ignored because it has little impact when a sample is a small proportion of the population. The resulting formula is:

SE = √[(P × Q) / (n − 1)] × √[(N − n) / (N − 1)]   (Equation 1)

But with the radical removed and the distributive property applied, the formula becomes:

n = [(N − 1)(SE)² + PQN] / [(N − 1)(SE)² + PQ]   (Equation 2)

where N = the population size (the number of content units in the study), P = the population level of agreement, Q = (1 − P), and n = the sample size for the reliability check.

Equation 2 allows the researcher to solve for n, which represents the number of test units. In order to solve for n, the researcher must follow five steps:

Step 1. The first step is to determine N (the number of content units being studied).¹⁸ It usually has been determined before reaching the point of checking the reliability of the instrument.

Step 2. The researcher must determine the acceptable level of probability for estimating the confidence interval. We assume most content analysts will use the same levels of probability for the sampling error in intercoder reliability checks as are used with most sampling error estimates, i.e., the 95% (p = .05) and 99% (p = .01) levels of probability.

Step 3. Once the acceptable probability level is determined, the formula for confidence intervals is used to calculate the standard error (SE). The formula is:

Confidence interval probability = Z (SE)   (Equation 3)

Z is the standardized point on the normal curve that corresponds with the acceptable level of probability.

Step 4. The researcher must set a minimal level of intercoder reliability for the test units. Content analysis texts warn that an acceptable level of intercoder reliability should reflect the nature and difficulty of categories and content.¹⁹ For example, a minimum level of 80% simple agreement is often used with new coding procedures, a level consistent with minimal requirement recommendations by Krippendorf and the analysis of Schutz.²⁰ But this level is lower than recommended by others.²¹

Step 5. The level of agreement in coding all study units (P) must be estimated. This is the level of agreement among all coders if they coded every content unit in the study. This is the most difficult step because it involves estimating the unknown population reliability figure. Two approaches are possible. The first is to estimate P based on a pretest of the coding instrument and on previous research. The second is to assume a P that exceeds the minimal acceptable reliability figure by a certain level. The second approach creates the question: How many percentage points above the minimal reliability level should P be? For this analysis, it will be assumed that the population level should be set at 5 percentage points above the minimal acceptable level of agreement. For example, if the minimal acceptable reliability figure is .8, then the assumed P would be .85. Five percentage points is useful because it is consistent with a confidence interval of 5%. If the reliability figure equals or exceeds .85, chances are 95 out of 100 that the population (content units in the study) figure equals or exceeds .80.

Once the five steps have been taken, the resulting figures are plugged into Equation 2 and the number of units needed for the reliability test is determined.

A Simulation

Assume an acceptable minimal level of agreement of 85% and a P of 90% in a study using 1,000 content units (e.g., newspaper stories). The desired level of certainty is the traditional p = .05 level. Using the normal curve, we find that the one-tailed Z-score²² associated with .05 is 1.64. Then we solve for the standard error (SE), using the formula:

Confidence interval = Z (SE)

(Equation 3)

Our example confidence interval is 5% and our desired level of probability is 95%. So

.05 = 1.64 (SE), or SE = .05/1.64 = .03.

Recall that our formula for sample size begins with the standard error of proportion,

SE = √[(P × Q) / (n − 1)] × √[(N − n) / (N − 1)]   (Equation 1)

and, solved for n, becomes

n = [(N − 1)(SE)² + PQN] / [(N − 1)(SE)² + PQ]   (Equation 2)
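The step from Equation 1 to Equation 2 is straightforward algebra. For readers who want the intermediate lines, a brief sketch in LaTeX notation (using the same symbols, with Q = 1 − P) follows:

SE^2 = \frac{PQ}{n-1} \cdot \frac{N-n}{N-1}

(n-1)(N-1)\,SE^2 = PQN - PQn

n(N-1)\,SE^2 + PQ\,n = PQN + (N-1)\,SE^2

n = \frac{(N-1)\,SE^2 + PQN}{(N-1)\,SE^2 + PQ}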

Now we can plug in our numbers and determine how large a random sample we will need to achieve, at minimum, the standard of 85% reliability agreement, with 1,000 study units and an assumed true agreement level of 90%. Thus, PQ = .90 (.10), or .09. Our confidence interval is .05, and the resulting SE at the p = .05 confidence level was .03, squared to .0009. So Equation 2 becomes:

n = [(999)(.0009) + .09(1,000)] / [(999)(.0009) + .09] = 90.899 / 0.989 = 91.9

In other words, if we achieve at least 90% agreement in a simple random sample of 92 test units (rounded from 91.9) taken from 1,000 study units, chances are 95 out of 100 that 85% or better agreement would exist if all study units were coded by all coders and reliability measured.

Table 1 solves Equation 2 for n with three hypothetical levels of P (85%, 90%, and 95%) and with numbers of study units equal to 100, 250, 500, 1,000, 5,000, and 10,000. The sample sizes are based on a confidence interval with 95% probability. The table demonstrates how higher P levels and smaller numbers of study units affect the number of test units needed. However, the number of test units needed decreases much faster with higher levels of P than with the decline in the number of study units.²³

Table 2 assumes the same agreement levels as Table 1. However, Table 2 presents numbers of test units for the 99% level of probability. The figures for a given number of study units and agreement level are higher in Table 2 because they represent the increased number of test units needed to reach the higher level of probability.

The main problem in determining an appropriate sample of test units is estimating the level of P. The higher the assumed percentage, the smaller will be the sample. This might produce an incentive to overestimate this level because it would reduce the amount of work in the reliability test. Assuming a study unit agreement level 5 percentage points above the minimal level will control for this incentive because the higher the assumed level, the higher will be the minimal acceptable level of reliability.
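For researchers who would rather compute sample sizes directly than read them from the tables, the short Python sketch below implements Equation 2. It is our illustration rather than code from the article: the function and variable names are invented, the one-tailed z-values are the conventional 1.64 (stated in the article for the 95% level) and 2.33 (assumed here for the 99% level), and the result is rounded up to a whole number of test units. Because the article rounds SE to .03 before squaring, its worked example reports 91.9 (about 92); the unrounded SE used here yields a slightly smaller figure.

import math

# One-tailed z-values: 1.64 is stated in the article for the 95% level;
# 2.33 is the conventional one-tailed value assumed here for the 99% level.
Z_ONE_TAILED = {0.95: 1.64, 0.99: 2.33}

def reliability_sample_size(study_units, assumed_agreement,
                            interval=0.05, probability=0.95):
    """Equation 2: simple random sample size for an intercoder reliability test.

    study_units       -- N, the number of content units in the study
    assumed_agreement -- P, the assumed agreement level among all study units
    interval          -- one-tailed confidence interval (P minus the minimal
                         acceptable level), .05 throughout the article
    probability       -- 0.95 or 0.99
    """
    se = interval / Z_ONE_TAILED[probability]          # Equation 3 solved for SE
    pq = assumed_agreement * (1.0 - assumed_agreement)
    n = ((study_units - 1) * se ** 2 + pq * study_units) / \
        ((study_units - 1) * se ** 2 + pq)
    return math.ceil(n)                                # round up to whole units

# Worked example: 1,000 study units, assumed agreement of 90%, minimum of 85%.
print(reliability_sample_size(1000, 0.90))                    # 90 here; the article, rounding SE to .03, reports 92
print(reliability_sample_size(1000, 0.90, probability=0.99))  # 165, as in the corresponding Table 2 entry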

TABLE 1
Number of Content Units Needed for Reliability Test, Based on Various Population Sizes, Three Assumed Levels of Population Intercoder Agreement, and a 95% Level of Probability

                         Assumed Level of Agreement in Population (Study Units)
Population Size
(Study Units)               85%       90%       95%
10,000                      141       100        54
5,000                       139        99        54
1,000                       125        92        52
500                         111        84        49
250                          91        72        45
100                          59        51        36

Note: The numbers are taken from the equation for the standard error of proportions and are adjusted with the finite population correction. The standard error was used to find a sample size that would have sampling error equal to or less than 5% for the assumed population level of agreement. The equation is SE = √[(P × Q) / (n − 1)] × √[(N − n) / (N − 1)], where P = percentage of agreement in population, Q = (1 − P), N = the population size, and n = the sample size.

A problem can occur if the level of agreement in the test units generates a confidence interval that does dip below the minimal acceptable level of reliability. For example, if the test units' reliability level equals .86 minus .05, the confidence interval dips below the minimal acceptable level of .85. This indicates that the reliability figure for the population of study units might not exceed the acceptable level of .85. Under this condition, the researcher could randomly select more content units for the reliability check or accept a lower minimal level of agreement, say .80. If the first approach is used, the larger sample size can be determined by plugging the test units' reliability level (.86) into Equation 2 as P. Additional units could be randomly selected and added to the original test units to calculate a new reliability figure and confidence interval based on a larger sample.
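To make that acceptance check concrete, the sketch below (our own illustration, not the authors' code) computes the one-tailed lower confidence bound for an observed level of simple agreement, using the standard error of proportion with the finite population correction from Equation 1. The figures of 92 test units and 1,000 study units are carried over from the simulation above.

import math

def lower_bound(observed, n_test, n_study, probability=0.95):
    """One-tailed lower confidence bound for observed simple agreement."""
    z = {0.95: 1.64, 0.99: 2.33}[probability]
    se = math.sqrt(observed * (1 - observed) / (n_test - 1)) * \
         math.sqrt((n_study - n_test) / (n_study - 1))
    return observed - z * se

# 92 test units drawn from 1,000 study units:
print(lower_bound(0.90, 92, 1000))   # about .85: the .85 minimum is met
print(lower_bound(0.86, 92, 1000))   # about .80: dips below .85, as in the text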

Limitations of the Analysis

This analysis may seem limited because it is (a) based on a dichotomous decision, (b) with two coders, and (c) uses a simple agreement measure of reliability. However, the first two are not limitations. Sampling error is not affected by the number of coders, who introduce measurement error after the reliability sample is selected. Neither is using a dichotomous decision a problem. Equation 2 would easily fit nominal content with more than two categories.²⁴ However, more complex coding schemes might affect the representativeness of a reliability sample if some of the categories occur infrequently. These infrequent categories have less likelihood of being in the sample, which means the full range of categories has not been tested. If this is the case, as discussed in note 11, the researcher should randomly stratify the test units, select a larger number of test units, or both.

TABLE 2
Number of Content Units Needed for Reliability Test, Based on Various Population Sizes, Three Assumed Levels of Population Intercoder Agreement, and a 99% Level of Probability

                         Assumed Level of Agreement in Population (Study Units)
Population Size
(Study Units)               85%       90%       95%
10,000                      271       193       104
5,000                       263       190       103
1,000                       218       165        95
500                         179       142        87
250                         132       111        75
100                          74        67        52

Note: The numbers are taken from the equation for the standard error of proportions and are adjusted with the finite population correction. The standard error was used to find a sample size that would have sampling error equal to or less than 5% for the assumed population level of agreement. The equation is SE = √[(P × Q) / (n − 1)] × √[(N − n) / (N − 1)], where P = percentage of agreement in population, Q = (1 − P), N = the population size, and n = the sample size.

Equation 2 is limited, however, to nominal data because it is based on the standard error of proportions. A parallel analysis for interval- and ratio-level categories could be developed using the standard error of means.

The use of simple agreement in reliability tests is not a problem either. At least three other measures of reliability, besides agreement among coding pairs, are available for nominal-level data. These are Scott's pi,²⁵ Krippendorf's alpha,²⁶ and Cohen's kappa.²⁷ Several discussions of the relative advantages and disadvantages of these measures are available.²⁸ These three measures were developed to deal with measurement error due to chance and not with error introduced through sampling. The representativeness of a sample of test units is not dependent on the test applied.
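As a small illustration that the chance-corrected coefficients operate on the same sampled coding decisions as simple agreement, the sketch below computes both simple agreement and Scott's pi for two coders on a handful of hypothetical test units. The data and function names are ours; the pi calculation follows Scott's published definition (observed agreement corrected by the agreement expected from category proportions pooled across both coders).

from collections import Counter

def simple_agreement(coder_a, coder_b):
    """Proportion of coding decisions on which the two coders agree."""
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def scotts_pi(coder_a, coder_b):
    """Scott's pi: simple agreement corrected for expected chance agreement."""
    observed = simple_agreement(coder_a, coder_b)
    pooled = Counter(coder_a) + Counter(coder_b)   # category counts, both coders
    total = sum(pooled.values())
    expected = sum((count / total) ** 2 for count in pooled.values())
    return (observed - expected) / (1 - expected)

# Ten hypothetical test units coded into two nominal categories by two coders.
a = [1, 1, 1, 1, 1, 1, 0, 0, 0, 1]
b = [1, 1, 1, 1, 1, 1, 0, 0, 1, 1]
print(simple_agreement(a, b))   # 0.9
print(scotts_pi(a, b))          # about 0.73 after removing expected chance agreement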

Using the Tables

Some beginning researchers might struggle with the task of making assumptions and solving the equations. If this is the case, the two tables can be useful for selecting a sample of test units to establish equivalence reliability. First, the researcher should select the level of probability appropriate for the study: if 95%, use Table 1; if 99%, use Table 2. Second, if the variables are straightforward counting measures, such as the source of newspaper stories, take the assumed agreement level among study units to be 90%. If the variables involve coding the meanings of content, such as the political leaning of news stories, take the assumed agreement level among study units to be 85%.²⁹


" O ?

Third, find the population size in the tables that is closest to but greater than the number of study units being analyzed, and take the number of test units from the table. For example, a researcher studying coverage of economic news in network newscasts has 425 stories from 40 newscasts selected from the previous year. Variables involve the numbers of stories devoted to various types of economic news. Accepting a confidence level of 95%, the researcher would look down the 90% level of agreement column in Table 1 until she or he came to a population size of 500 (the closest population size that is greater than 425). The number of units needed for the reliability check equals 84.
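Where the tables are used programmatically, the lookup described in the example can be written in a few lines. The dictionary below transcribes the 90% agreement column of Table 1 (95% probability), and the helper function, a hypothetical convenience of our own, picks the smallest tabled population size that is at least the number of study units.

# The 90% agreement column of Table 1 (95% level of probability).
TABLE1_90_PERCENT = {10000: 100, 5000: 99, 1000: 92, 500: 84, 250: 72, 100: 51}

def test_units_from_table(study_units, column=TABLE1_90_PERCENT):
    eligible = [size for size in column if size >= study_units]
    if not eligible:                 # the tables stop at 10,000 study units;
        return column[max(column)]   # falling back to that row is our assumption
    return column[min(eligible)]

print(test_units_from_table(425))    # 84, as in the network-news example above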


An inevitable question from graduate students conducting their first content analysis is how many items to use in the intercoder reliability test. This article has attempted to answer this question and to suggest a procedure for estimating sampling error in reliability samples. Of course, for sampling error to have meaning, the sample must be a probability sample. The formula used here is the unbiased estimator for simple random samples; samples based on proportion or stratification will require adjustments available in many statistics books.³⁰

When reporting the reliability level, confidence intervals should be reported with both measures of reliability. Simple agreement confidence intervals can be calculated using the standard error of proportions. The confidence intervals for Scott's pi and Cohen's kappa can be calculated by referring to the formulas presented in the original articles for these coefficients.

The role of selection bias in determining reliability coefficients seems to have gotten lost since earlier explorations of reliability. This bias can only be estimated through probability sampling. The study of content needs a more rigorous way of dealing with potential selection bias. Using probability samples and confidence intervals for reliability figures would help add rigor.

NOTES

1. The analysis in this article is based on simple random sampling for reliability tests. However, under some circumstances, other forms of probability sampling, such as stratified random sampling, might be preferable for selecting reliability test samples. For example, if certain categories of a variable make up a small proportion of the content units being studied, the researcher might oversample these categories.

2. Reproducibility reliability, also called equivalence reliability, differs from stability and accuracy reliability. Stability concerns the same coder testing the reliability of the same content at two points in time. Accuracy reliability involves comparing coding results with some known standard. The term reliability is used here to refer to reproducibility. See Klaus Krippendorf, Content Analysis: An Introduction to Its Methodology (Beverly Hills, CA: Sage, 1980), 130-32.

3. Guido H. Stempel III, "Content Analysis," in Research Methods in Mass Communication, ed. Guido H. Stempel III and Bruce H. Westley (Englewood Cliffs, NJ: Prentice-Hall, Inc., 1981), 127.

4. Stephen Lacy and Daniel Riffe, "Sins of Omission and Commission in Mass Communication Quantitative Research," Journalism Quarterly 70 (spring 1993): 126-32.

" ' "


5. Robert Philip Weber, Basic Content Analysis, 2d ed. (Newbury Park, CA: Sage University Paper Series on Quantitative Applications in the Social Sciences, 07-075), 23.

6. Guido H. Stempel III, "Statistical Designs for Content Analysis," in Research Methods in Mass Communication, ed. Stempel and Westley, 143.

7. Stempel, "Content Analysis," 128.

8. Roger D. Wimmer and Joseph R. Dominick, Mass Media Research: An Introduction, 3d ed. (Belmont, CA: Wadsworth, 1991), 173.

9. Lynda Lee Kaid and Anne Johnston Wadsworth, "Content Analysis," in Measurement of Communication Behavior, ed. Philip Emmert and Larry L. Barker (NY: Longman, 1989), 208.

10. Michael Singletary, Mass Communication Research (NY: Longman, 1994), 297.

11. Krippendorf argues that reliability samples "need not be representative of the population characteristics" but "must be representative of all distinctions made within the sample of data at hand" (emphasis in original). He suggests purposive or stratified sampling to ensure that "all categories of analysis, all decisions specified by various forms of instructions, are indeed represented in the reliability data regardless of how frequently they may occur in the actual data" (emphasis added). See Krippendorf, Content Analysis, 146. If a researcher suspects that some variable categories will occur infrequently in a simple random sample for a reliability check, disproportionate sampling of the less frequent categories would be useful. Frequency of categories could be estimated by a pretest, and different sampling rates could be used for categories that appear less frequently. When figuring overall agreement for reliability, the results for particular categories would have to be weighted to reflect the proportions in the study units. This procedure might create problems when content has infrequent categories that are difficult to identify. It could require quota sampling, or selecting and checking content units for these infrequent categories until a proportion of the test units equals the estimated proportion of the infrequent categories. Another way of handling infrequent categories would be to increase the reliability test sample size above the minimum recommended here. Larger samples will increase the probability of including infrequent categories among the test units. If the larger sample does not include sufficient numbers of the infrequent categories, additional units can be selected. This will, of course, lead to coding of additional units from categories that appear frequently, but the resulting reliability figure will be more representative of the content units being studied. No one would argue that all variables need to be tested in a reliability check, but a large number of categories within a variable (e.g., a twenty-six-category scheme for coding the variable "news topic") could create logistical problems. Just generating a stratified reliability sample that would include sufficient numbers of units for each of these categories would be time consuming and difficult. Some would question whether the logistical problems outweigh the potential impact of such a "micro" measure of reliability on the overall validity of the data.

12. William A. Scott, "Reliability of Content Analysis: The Case of Nominal Scale Coding," Public Opinion Quarterly 19 (fall 1955): 321-25.

13. J. A. Cohen, "Coefficient of Agreement for Nominal Scales," Educational and Psychological Measurement 20 (1960): 37-46.

14. Irving L. Janis, Raymond H. Fadner, and Morris Janowitz, "The Reliability of a Content Analysis Technique," Public Opinion Quarterly 7 (summer 1943): 293-96.

15. William C. Schutz, "Reliability, Ambiguity and Content Analysis," Psychological Review 59 (1952): 119-29.

16. Presumably, these chance agreements could lead content analysts to overestimate the extent of coder agreement due to the precision of the coding instrument. In effect, Schutz sought a way to control for the effect of those chance agreements. But just because chance could affect reliability does not mean it does, and the "odds" of agreement through randomness change once a coding criterion is introduced and used.

17. But of course it can't. Its effect can only be acknowledged and compensated for.

18. Strictly interpreted, N equals the number of coding decisions that will be made by each coder. If reliability is checked separately for each coding category in the content analysis, then N equals the total number of units selected for the content analysis. If reliability is checked for the total decisions made in using the coding procedure, N equals the number of units analyzed multiplied by the number of categories being used. This analysis assumes that each variable is checked and reported separately, which means N equals the number of content units in the population.

19. This advice, while sound, adds a bothersome vagueness to content analysis. This is a bit like a professor's response that the length of an essay should be "as long as it takes." How long is a piece of string?

20. Krippendorf, Content Analysis, recommends generally using the .8 level for intercoder reliability, although he says some data with reliability figures as low as .67 could be reported for highly speculative conclusions. It is not clear whether Krippendorf's agreement level figures are for simple agreement among coders or for some other reliability measure. Schutz's ("Reliability, Ambiguity and Content Analysis") analysis starts with the .8 level of simple agreement. This analysis will use .80 to remain consistent with Schutz.

21. See Singletary (Mass Communication Research, 296), who states that a Scott's pi of .7 is the consensus value for the statistic. Under some conditions this would be consistent with a simple agreement of .8, but not always. Wimmer and Dominick (Mass Media Research, 181) report a rule of thumb of at least .9 for simple agreement and a Scott's pi or Krippendorf's alpha of .75 for intercoder reliability.

22. Note that this is a one-tailed test. Content analysis researchers are concerned that the reliability figure exceeds a minimal level, which would be on the negative side of a confidence interval. The acceptance of a coding instrument as reliable is not affected by whether the population reliability figure exceeds the reliability test figure on the positive side of the confidence interval.

23. Three factors affect sampling error: the size of the sample, the homogeneity of the population, and the proportion of the population in the sample. The last factor has little impact unless the proportion is large. As Table 1 shows, population size reduces the number of test units noticeably only when the number of study units falls under 1,000.

24. Multiple-category variables differ from dichotomous variables because multiple categories are not independent of each other. However, this lack of independence is a bias in coding and not in the selection of units for a reliability test.


25. Scott, "Reliability of Content Analysis."

26. Krippendorf, Content Analysis.

27. Cohen, "Coefficient of Agreement for Nominal Scales."

28. For examples, see Maria Adele Hughes and Dennis F. Garrett, "Intercoder Reliability Estimation Approaches in Marketing: A Generalization Theory Framework for Quantitative Data," Journal of Marketing Research 27 (May 1990): 185-195; and Richard H. Kolbe and Melissa S. Burnett, "Content-Analysis Research: An Examination of Applications with Directives for Improving Research Reliability and Objectivity," Journal of Consumer Research 18 (September 1991): 243-250.

29. Coding simple content, such as numbers of stories, typically yields higher levels of reliability because cues for coding are more explicit. The population agreement will be higher than for coding schemes that deal with word meanings. A lower reliability figure is an acceptable trade-off for studying categories that concern meaning.

30. For example, see C. A. Moser and G. Kalton, Survey Methods in Social Investigations, 2d ed. (NY: Basic Books, 1972).

