
Section B

1. The media are deceiving unwary listeners and viewers because exit polls are only indications, not scientific proof. Prove or disprove.

First, an exit poll is just a survey. Like other polls, it is subject to random sampling error, so differences of a few percentage points between the candidates in any given state sample are not terribly meaningful.

Second, the networks almost never "call" truly competitive races on exit poll results alone. The decision desk analysts require very high statistical confidence (at least 99.5 percent) before they will consider calling a winner (the ordinary "margin of error" on pre-election polls typically uses a 95 percent confidence level). They usually only achieve that confidence for relatively close races after the exit pollsters obtain the actual vote results from the randomly selected precincts at which interviews were completed (and from other larger random samples of precincts) and combine all of the data into some very sophisticated statistical models. Even then, if the models project that the leading candidates are separated by just a few percentage points, as pre-election polls suggest they will be in all of the key battleground states, the networks will usually wait until nearly all votes are counted to project a winner.

Third, the initial results of the exit poll interviews have had frequent problems with non-response bias, a consistent discrepancy favoring the Democrats that has appeared to some degree in every presidential election since 1988. Usually the bias is small, but in 2004 it was just big enough to convince millions of Americans who saw the leaked results on the Internet that John Kerry would defeat George W. Bush. It didn't work out that way. The resulting uproar led the networks, beginning in 2006, to hold back the data from their news media clients in a sealed quarantine room on Election Day until 5 p.m. Eastern time. The quarantine means that any numbers purporting to be "exit polls" before 5 p.m. are almost certainly bogus. To try to minimize the potential errors, the networks often weight the first official tabulations they post on their websites to an estimate of the outcome (called the
"composite") that combines responses to the exit poll interviews with the averages of pre-election polls (like those reported by HuffPost Pollster). But that process is imperfect and does not remove either the random error or the initial statistical bias that often favors Democrats. Four years ago, on Pollster.com, we gathered all of the official tabulations posted as polls closed and extrapolated the underlying estimates of the outcome for each state. When later compared against the final vote counts in each state, we found that the initial estimates had overstated Barack Obama's margins by an average of 4.7 percentage points. Those of us seeing leaked data, however, see neither the running calculations of the precinct errors nor the levels of statistical confidence associated with the vote numbers. We see only precise-looking percentages and are oblivious to the potential for error. As one pundit put it four years ago, the exit polls "have become crack cocaine for political junkies looking to score on Election Day." We would be better off, he said, if we relied on the exit polls for "their original purpose, explaining who did what and why, rather than trying to forecast what will be widely known anyway in just a few hours."

2. Explain the public opinion research tradition in media research.


The public is what fuels the media's strength; without it, we would still be stuck in prehistoric times. Public opinion governs the workings of our society and gives way to change when those opinions are voiced, and we influence it every single day. Media is about connecting people and mirroring the society we live in, and about reflecting on daily happenings around the globe. People are not just the audience but also an essential element of the news itself. Be it journalism or entertainment, the growing importance of public opinion in media is evident everywhere.

Consider first the print media, such as newspapers and magazines. Newspaper content, which was once wholly and solely the product of reporters and journalists in the office, has undergone a radical change into an amalgamation of news and public opinion. Letters to the editor written by readers are one instance of the acknowledgment of the importance of public opinion. In addition to readership polls and questionnaires, there are also columns contributed by the audience themselves, a phenomenon now known as citizen journalism. On the World Wide Web, websites were already using public polls, and the rise of blogging and public forums has paved the way for greater exposure of individual opinions, strengthening the importance of public opinion online as well. On many news channels across the globe, public polls are a common affair and are considered an important tool for measuring public opinion about issues and news. On entertainment channels, public voting for reality shows such as American Idol illustrates the importance given to public opinion. In the case of films, what would the success of a great movie be if it had no audience? Although movies are an expression of views and ideas on a particular subject, moviemakers also regard them as their livelihood; the public's reviews and word-of-mouth publicity are instrumental in the success or failure of a particular movie.

The importance of public opinion is an effect of increased public participation in the media. This is an era in which the media have awakened to the fact that whatever they dish out - news, views or entertainment - they need to keep it interactive. In the news industry, public opinion is not just a way to validate and supplement the news but also a great way to increase people's awareness of a particular topic. Beyond that, it is an important step towards strengthening the democracy of a nation. TRPs for television channels and readership counts for the print media are crucial when it comes to measuring the overall success of the media. Since it is the public that shapes the readership rankings and the TRPs, the media are left with no choice but to acknowledge the importance of public opinion. It is public opinion that decides media popularity and eventual media success.

The media are very powerful because they can influence and shape the perception of the public. This is the main reason why so many responsibilities come with the press: it has a big impact on the minds of different individuals. But many people ask, does the press or the online media truly have a great effect on millions of people? How do they influence the opinions and perceptions of the public? If you watch the news, you will get a great deal of information and updates on various fields of interest. The media have the power to present reports on a specific event, and they are the main source of information for millions of people around the world. If you want to understand what happened during a particular event, you can simply watch the news and media updates regarding that event.

If you take a very close look at television shows and news reports, you will probably notice that some of these outlets carry a narrow message that can be etched into the minds of people. In previous years, news reports were largely limited to presenting the facts and information surrounding a specific event; it was considered irresponsible for reporters and news anchors to incorporate their own thoughts and ideas about a situation. Things are quite different these days. More and more anchors and news presenters offer their own opinions and interjections, which can also strongly influence how one perceives a specific news item. Some news reports are also framed according to how the outlet wants the public to perceive a specific person. For example, a show may present all the reasons a suspected murderer might be guilty but fail to present the other side of the story. This is one of the most important issues experts raise when discussing media bias, press releases and other media matters. On the other hand, there is another side to the story, because mass media can also have a very positive effect on people. They can evoke feelings of love, national pride and patriotism, especially in situations involving national difficulties or catastrophes. Mass media have a great impact on how people receive the message of a specific news item. Based on the above points, the media clearly carry a great responsibility in shaping the perceptions and opinions of different people.

3. Critically examine the status of research application in print media.

Five Types of Print Media Research


1. Readership. Used to shape the content of a publication to cater to audiences' interests. It is composed of five areas: (1) Reader profiles provide demographic data about the readers of a publication. This information can be used to focus the publication, prepare advertising promotions, and increase subscriptions. Management can acquire additional insights about editorial aims, target audiences and circulation goals through psychographic studies and lifestyle segmentation research.

Psychographic studies ask readers to indicate their level of agreement or disagreement with a large number of attitudinal statements. The responses are then analyzed to see which items cluster together. Lifestyle segmentation research asks about readers' activities, hobbies, interests and attitudes.

(2) Item-selection studies are used to see who reads what part of the publication, allowing newspapers to target certain groups of readers. Though usually accomplished through aided recall, where the interviewer and respondent go over a paper to determine which stories were read, item-selection studies can also be conducted through self-administered readership surveys and tracking studies.

(3) Reader-nonreader studies attempt to determine who reads and who does not read the publication in question (e.g. "Have you read a newspaper today or yesterday?" or "How often do you read a daily paper?"). This not only identifies groups who aren't reading, but can also determine why they aren't reading. (Think about it: what type of data would we get from the two questions about readership? Which question provides better data? This is useful in anticipating the percentage of nonreaders that the study will find.)

(4) Uses and gratifications studies determine the motives that lead people to read a publication (Why do you read it?). A Likert scale can be used with a list of responses indicating why people read. The responses are then summed, and an average score for each motivation item is calculated (a minimal sketch of this scoring follows below).

(5) Editor-reader comparisons are used to see if editors and readers have similar answers about a topic. Studies in this area have determined that readers and editors have different perceptions of the attributes of a high-quality newspaper.

While magazine readership surveys are similar to newspaper research, most consumer magazines use audience data compiled by the Simmons Market Research Bureau (SMRB) and Mediamark Research Inc. (MRI).
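As an illustration of the Likert-scale tallying mentioned in point (4), here is a minimal Python sketch; the motive labels and ratings are invented for the example.

```python
# Hypothetical 1-5 Likert ratings ("Why do you read this paper?"),
# one list of scores per motivation item, one score per respondent.
responses = {
    "to keep up with local news": [5, 4, 5, 3, 4],
    "to pass the time":           [2, 3, 1, 2, 3],
    "to form opinions on issues": [4, 4, 5, 4, 3],
}

for motive, scores in responses.items():
    average = sum(scores) / len(scores)
    print(f"{motive}: {average:.2f}")

# The motives with the highest average scores are the strongest
# reported reasons for reading the publication.
```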

Many magazines use reader panels which provide information about audience reactions at a modest cost. Item pretests show a random sample of magazine readers an article title, byline, and brief description of the story's content. The average ratings of the sample provide a guide for editorial decisions.

2. Circulation. Used to determine the characteristics of the overall market (What percentage of households subscribe to the News-Sentinel?), as well as what affects circulation.

3. Management. Used to determine how to keep employees satisfied and productive, thus decreasing the possibility of burnout. Results aid in goal setting by management and employee job satisfaction, and in noting the effects of competition and ownership on newspaper content and quality.

4. Design and makeup. Determines how design affects readership, reader preferences and comprehension (graphics, pictures, headline size, white space and any other design elements).

5. Readability. Measures the extent to which readers understand an article, are able to read it with ease and find it interesting. Several formulas have been developed to measure the readability of a text objectively, including the Flesch reading ease formula.
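To show what such a formula looks like in practice, here is a minimal Python sketch of the Flesch reading ease score; the syllable counter is a rough heuristic rather than a dictionary-based one, so treat the output as approximate.

```python
import re

def count_syllables(word):
    """Very rough heuristic: count vowel groups, dropping a trailing 'e'."""
    word = word.lower()
    if word.endswith("e") and len(word) > 2:
        word = word[:-1]
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    # Flesch formula: higher scores (up to ~100) mean easier reading.
    return 206.835 - 1.015 * (len(words) / sentences) - 84.6 * (syllables / len(words))

sample = "The council approved the budget. Residents can comment next week."
print(round(flesch_reading_ease(sample), 1))
```

Long sentences and words with many syllables push the score down, which is why the formula can flag copy that will be hard for a general readership.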

4. Describe the process of conducting the survey method in media research. Not done.

5. The design of a questionnaire must always reflect the basic purpose of the research. Elucidate.
The design of a questionnaire will depend on whether the researcher wishes to collect exploratory information (i.e. qualitative information for the purposes of better understanding or the generation of hypotheses on a subject) or
quantitative information (to test specific hypotheses that have previously been generated).

Exploratory questionnaires: If the data to be collected is qualitative or is not to be statistically evaluated, it may be that no formal questionnaire is needed. For example, in interviewing the female head of the household to find out how decisions are made within the family when purchasing breakfast foodstuffs, a formal questionnaire may restrict the discussion and prevent a full exploration of the woman's views and processes. Instead one might prepare a brief guide, listing perhaps ten major open-ended questions, with appropriate probes/prompts listed under each.

Formal standardised questionnaires: If the researcher is looking to test and quantify hypotheses and the data is to be analysed statistically, a formal standardised questionnaire is designed. Such questionnaires are generally characterised by:

prescribed wording and order of questions, to ensure that each respondent receives the same stimuli;

prescribed definitions or explanations for each question, to ensure interviewers handle questions consistently and can answer respondents' requests for clarification if they occur;

prescribed response format, to enable rapid completion of the questionnaire during the interviewing process.

Given the same task and the same hypotheses, six different people will probably come up with six different questionnaires that differ widely in their choice of questions, line of questioning, use of open-ended questions and length. There are no hard-and-fast rules about how to design a questionnaire, but there are a number of points that can be borne in mind:

1. A well-designed questionnaire should meet the research objectives. This may seem obvious, but many research surveys omit important aspects due to inadequate preparatory work, and do not adequately probe particular issues due to poor understanding. To a certain degree some of this is inevitable. Every survey is bound to leave some questions unanswered and provide a need for further research, but the objective of good questionnaire design is to minimise these problems.

2. It should obtain the most complete and accurate information possible. The questionnaire designer needs to ensure that respondents fully understand the questions and are not likely to refuse to answer, lie to the interviewer or try to conceal their attitudes. A good questionnaire is organised and worded to encourage respondents to provide accurate, unbiased and complete information.

3. A well-designed questionnaire should make it easy for respondents to give the necessary information and for the interviewer to record the answer, and it should be arranged so that sound analysis and interpretation are possible.

4. It should keep the interview brief and to the point and be so arranged that the respondent(s) remain interested throughout the interview.

Each of these points will be further discussed throughout the following sections. Figure 4.1 shows how questionnaire design fits into the overall process of research design that was described in chapter 1 of this textbook. It emphasises that writing of the questionnaire proper should not begin before an exploratory research phase has been completed.

Figure 4.1: The steps preceding questionnaire design.

Even after the exploratory phase, two key steps remain to be completed before the task of designing the questionnaire should commence. The first of these is to articulate the questions that the research is intended to address. The second step is to determine the hypotheses around which the questionnaire is to be designed.

It is possible for the piloting exercise to be used to make necessary adjustments to administrative aspects of the study. This would include, for example, an assessment of the length of time an interview actually takes, in comparison to the planned length of the interview; or, in the same way, the time needed to complete questionnaires. Moreover, checks can be made on the appropriateness of the timing of the study in relation to contemporary events, such as avoiding farm visits during busy harvesting periods.


Preliminary decisions in questionnaire design

There are nine steps involved in the development of a questionnaire:

1. Decide the information required.
2. Define the target respondents.
3. Choose the method(s) of reaching your target respondents.
4. Decide on question content.
5. Develop the question wording.
6. Put questions into a meaningful order and format.
7. Check the length of the questionnaire.
8. Pre-test the questionnaire.
9. Develop the final survey form.

Deciding on the information required

It should be noted that one does not start by writing questions. The first step is to decide 'what are the things one needs to know from the respondent in order to meet the survey's objectives?' These, as has been indicated in the opening chapter of this textbook, should appear in the research brief and the research proposal.

One may already have an idea about the kind of information to be collected, but additional help can be obtained from secondary data, previous rapid rural appraisals and exploratory research. In respect of secondary data, the researcher should be aware of what work has been done on the same or similar problems in the past, what factors have not yet been examined, and how the present survey questionnaire can build on what has already been discovered. Further, a small number of preliminary informal interviews with
target respondents will give a glimpse of reality that may help clarify ideas about what information is required.

Define the target respondents

At the outset, the researcher must define the population about which he/she wishes to generalise from the sample data to be collected. For example, in marketing research, researchers often have to decide whether they should cover only existing users of the generic product type or whether to also include non-users. Secondly, researchers have to draw up a sampling frame. Thirdly, in designing the questionnaire we must take into account factors such as the age, education, etc. of the target respondents.


Choose the method(s) of reaching target respondents

It may seem strange to be suggesting that the method of reaching the intended respondents should constitute part of the questionnaire design process. However, a moment's reflection is sufficient to conclude that the method of contact will influence not only the questions the researcher is able to ask but also the phrasing of those questions. The main methods available in survey research are:

personal interviews, group or focus interviews, mailed questionnaires, and telephone interviews.

In practice, the first two are used much more extensively than the second pair. However, each has its advantages and disadvantages. A general rule is that the more sensitive or personal the information, the more personal the form of data collection should be.


Decide on question content

Researchers must always be prepared to ask, "Is this question really needed?" The temptation to include questions without critically evaluating their contribution towards the achievement of the research objectives, as they are specified in the research proposal, is surprisingly strong. No question should be included unless the data it gives rise to is directly of use in testing one or more of the hypotheses established during the research design.

There are only two occasions when seemingly "redundant" questions might be included:

Opening questions that are easy to answer and which are not perceived as being "threatening", and/or are perceived as being interesting, can greatly assist in gaining the respondent's involvement in the survey and help to establish a rapport. This, however, is not an approach that should be overused: it is almost always the case that questions which are of use in testing hypotheses can also serve the same functions.

"Dummy" questions can disguise the purpose of the survey and/or the sponsorship of a study. For example, if a manufacturer wanted to
find out whether its distributors were giving the consumers or end-users of its products a reasonable level of service, the researcher would want to disguise the fact that the distributors' service level was being investigated. If he/she did not, rumours would abound that there was something wrong with the distributor.


Develop the question wording

Survey questions can be classified into three forms, i.e. closed, open-ended and open response-option questions. So far only the first of these, i.e. closed questions, has been discussed. This type of questioning has a number of important advantages:

It provides the respondent with an easy method of indicating his answer - he does not have to think about how to articulate his answer.

It 'prompts' the respondent, so that the respondent has to rely less on memory in answering a question.

Responses can be easily classified, making analysis very straightforward.

It permits the respondent to specify the answer categories most suitable for their purposes.

Putting questions into a meaningful order and format

Opening questions: Opening questions should be easy to answer and not in any way threatening to the respondents. The first question is crucial because it is the respondent's first exposure to the interview and sets the tone for the nature of the task to be performed. If they find the first question difficult to understand, or beyond their knowledge and experience, or embarrassing in some way, they are likely to break off immediately. If, on the other hand, they find the opening question easy and pleasant to answer, they are encouraged to continue.

Question flow: Questions should flow in some kind of psychological order, so that one leads easily and naturally to the next. Questions on one subject, or one particular aspect of a subject, should be grouped together. Respondents may find it disconcerting to keep shifting from one topic to another, or to be asked to return to some subject they thought they had given their opinions about earlier.

Question variety: Respondents become bored quickly and restless when asked similar questions for half an hour or so. It usually improves response, therefore, to vary the respondent's task from time to time. An open-ended question here and there (even if it is not analysed) may provide much-needed relief from a long series of questions in which respondents have been forced to limit their replies to pre-coded categories. Questions involving showing cards/pictures to respondents can help vary the pace and increase interest.


Closing questions

It is natural for a respondent to become increasingly indifferent to the questionnaire as it nears the end. Because of impatience or fatigue, he may
give careless answers to the later questions. Those questions, therefore, that are of special importance should, if possible, be included in the earlier part of the questionnaire. Potentially sensitive questions should be left to the end, to avoid respondents cutting off the interview before important information is collected.

In developing the questionnaire the researcher should pay particular attention to the presentation and layout of the interview form itself. The interviewer's task needs to be made as straightforward as possible.

Questions should be clearly worded and response options clearly identified.

Prescribed definitions and explanations should be provided. This ensures that the questions are handled consistently by all interviewers and that during the interview process the interviewer can answer/clarify respondents' queries.

Ample writing space should be allowed to record open-ended answers, and to cater for differences in handwriting between interviewers.


Physical appearance of the questionnaire

The physical appearance of a questionnaire can have a significant effect upon both the quantity and the quality of the data obtained. The quantity of data is a function of the response rate: ill-designed questionnaires can give an impression of complexity and of too big a time commitment. Data quality can also be affected by the physical appearance of the questionnaire, with unnecessarily confusing layouts making it more difficult for interviewers, or for respondents in the case of self-completion questionnaires, to complete the task accurately. Attention to just a few basic details can have a disproportionately advantageous impact on the data obtained through a questionnaire.

6. Illustrate how you would execute a survey for studying the influence of the internet among youth. Not done.

7. Explain the advantages and disadvantages of any two statistical tools used in communication research.

8. Elaborate the process of writing a content analysis research report.

Why do content analysis?

If you're also doing audience research, the main reason for also doing content analysis is to be able to make links between causes (e.g. program content) and effects (e.g. audience size). If you do an audience survey, but you don't systematically relate the survey findings to your program output, you won't know why your audience might have increased or decreased. You might guess, when the survey results first appear, but a thorough content analysis is much better than a guess. For a media organization, the main purpose of content analysis is to evaluate and improve its programming. All media organizations are trying to achieve some purpose. For commercial media, the purpose is simple: to make money, and survive. For public and community-owned media, there are usually several purposes, sometimes conflicting - but each individual program tends to have one main purpose. As a simple commercial example, the purpose of an advertisement is to promote the use of the product it is advertising: first by increasing awareness, then by increasing sales. The purpose of a documentary on AIDS in southern Africa might be to increase awareness of ways of preventing AIDS, and in the end to reduce the level of AIDS. Often, as this example has shown, there is not a single purpose, but a chain of them, with each step leading to the next. Using audience research to evaluate the effects (or outcome) of a media project is the second half of the process. The first half is to measure the causes (or inputs), and that is done by content analysis. For example, in the 1970s a lot of research was done on the effects of broadcasting violence on TV. If people saw crimes committed on TV, did that make them more likely to commit crimes? In this case, the effects were crime rates, often measured from police statistics. The problem was to link the effects to the possible causes. The question was not simply "does seeing crime on TV make people commit crimes?" but "What types of crime on TV (if any) make what types of people (if any) commit crimes, in what situations?" UNESCO in the 1970s produced a report summarizing about 3,000 separate studies of this issue - and most of those studies used some form of content analysis. When you study causes and effects, as in the above example, you can see how content analysis differs from audience research:

content analysis uncovers causes; audience research uncovers effects.

The entire process, linking causes to effects, is known as evaluation.

Selecting content for analysis


Content is huge: the world contains a near-infinite amount of content. It's rare that an area of interest has so little content that you can analyse it all. Even when you do analyse the whole of something (e.g. all the pictures in one issue of a magazine) you will usually want to generalize those findings to a broader context (such as all the issues of that magazine). In other words, you are hoping that the issue you selected is a representative sample. Like audience research, content analysis involves sampling, as explained in chapter 2. But with content analysis, you're sampling content, not people. The body of information you draw the sample from is often called a corpus, Latin for "body".
Deciding sample size

Unless you want to look at very fine distinctions, you don't need a huge sample. The same principles apply for content analysis as for surveys: most of the time, a sample between 100 and 2000 items is enough - as long as it is fully representative. For radio and TV, the easiest way to sample is by time. With print media, the same principles apply, but it doesn't make sense to base the sample on time of day. Instead, use page and column numbers. Actually, it's a lot easier with print media, because you don't need to organize somebody (or program a computer) to record the on-air program at regular intervals.
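As a minimal sketch of time-based sampling for a broadcaster (the station hours, segment length and sample size below are invented for the example), one might draw segments like this in Python:

```python
import random

# Hypothetical setup: a station on air 18 hours a day (06:00-24:00)
# for the 30 days of April, divided into 15-minute segments.
DAYS = 30
START_HOUR, HOURS_ON_AIR = 6, 18
SEGMENTS_PER_HOUR = 4

corpus = [
    (day, hour, quarter * 15)          # (day of month, hour, minute)
    for day in range(1, DAYS + 1)
    for hour in range(START_HOUR, START_HOUR + HOURS_ON_AIR)
    for quarter in range(SEGMENTS_PER_HOUR)
]

random.seed(42)                        # so the sample can be reproduced
sample = random.sample(corpus, 200)    # 200 segments, within the 100-2000 guideline
print(len(corpus), "segments in the corpus;", len(sample), "drawn for analysis")
```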

When you set out to do content analysis, the first thing to acknowledge is that it's impossible to be comprehensive. No matter how hard you try, you can't analyse content in all possible ways. I'll demonstrate, with an example. Let's say that you manage a radio station. It's on air for 18 hours a day, and no one person seems to know exactly what is broadcast on each program. So you decide that during April all programs will be taped. Then you will listen to the tapes and do a content analysis. First problem: 18 hours a day, for 30 days, is 540 hours. If you work a 40-hour week, it will take almost 14 weeks to play the tapes back. But that's only listening - without pausing for content analysis! So instead, you get the tapes transcribed.

Most people speak about 8,000 words per hour. Thus your transcript has up to 4 million words - about 40 books the size of this one. Now the content analysis can begin! You make a detailed analysis: hundreds of pages of tables and summaries. When you've finished (a year later?) somebody asks you a simple question, such as "What percentage of the time are women's voices heard on this station?" If you haven't anticipated that question, you'll have to go back to the transcript and laboriously calculate the answer. You find that the sex of the speaker hasn't always been recorded. You make an estimate (only a few days' work, if you're lucky), then you're asked a follow-up question, such as "How much of that time is speech, and how much is singing?" Oops! The transcriber didn't bother to include the lyrics of the songs broadcast. Now you'll have to go back and listen to all those tapes again! This example shows the importance of knowing what you're looking for when you do content analysis. Forget about trying to cover everything, because (a) there's too much content around, and (b) it can be analysed in an infinite number of ways. Without a clear focus, you can waste a lot of time analysing unimportant aspects of content. The focus needs to be clearly defined before you begin work. An example of a focus is: "We'll do a content analysis of a sample of programs (including networked programs, and songs) broadcast on Radio Lukole in April 2003, with a focus on describing conflict and the way it is managed."

Units of content
To be able to count content, your corpus needs to be divided into a number of units, roughly similar in size. There's no limit to the number of units in a corpus, but in general the larger the unit, the fewer units you need. If the units you are counting vary greatly in length, and if you are looking for the presence of some theme, a long unit will have a greater chance of including that theme than will a short unit. If the longest units are many times the size of the shortest, you may need to change the unit - perhaps "per thousand words" instead of "per web page." If the interviews vary greatly in length, a time-based unit may be more appropriate than "per interview."

Units of media content

Depending on the size of your basic unit, you'll need to take a different approach to coding. The main options are (from shortest to longest):

A word or phrase. If you are studying the use of language, words are an appropriate unit (you can perhaps also group synonyms together, and include phrases). Though a corpus may have thousands of words, software can count them automatically (a minimal counting sketch follows this list).

A paragraph, statement, or conversational turn: up to a few hundred words.

An article. This might be anything from a short newspaper item to a magazine article or web page: usually between a few hundred and a few thousand words.

A large document. This can be a book, an episode of a TV program, or a transcript of a long radio talk.
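Here is a minimal Python sketch of that kind of automatic word counting; the sample text is invented, and a real study would feed in a full transcript, add a stop-word list and group synonyms as noted above.

```python
import re
from collections import Counter

def word_frequencies(text, top=10):
    """Lowercase the text, split it into words, and count the most common."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

# In practice the text would be a whole transcript; a tiny sample here.
sample = "Peace talks resumed today. The talks focus on the border dispute."
for word, count in word_frequencies(sample, top=5):
    print(f"{word}: {count}")
```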

The longer the unit, the more difficult and subjective is the work of coding it as a whole. Consider breaking a document into smaller units, and coding each small unit separately. However, if it's necessary to be able to link different parts of the document together, this won't make sense.
Units of audience content

When you are analysing audience content (not media content) the unit will normally be based on the data collection format and/or the software used to store the responses. The types of audience content most commonly produced from research data are:

Open-ended responses to a question in a survey (usually all in one large file).

Statements produced by consensus groups (often in one small file).

Comments from in-depth interviews or group discussions (usually a large text file from each interview or group).

In any of these cases, the unit can be either a person or a comment. Survey analysis is always based on individuals, but content analysis is usually based on comments. Most of the time this difference doesn't affect the findings, but if some people make far more comments than others, and these two groups give different kinds of comments, it will be best to use individuals as the unit.

Large units are harder to analyse

Usually the corpus is a set of the basic units: for example, a set of 13 episodes in a TV series, an 85-message discussion on an email listserv over several months, 500 respondents' answers to a survey question - and so on. What varies is (a) the number of units in the corpus, and (b) the size of the units. Differences in these figures will require different approaches to content analysis. If you are studying the use of language, focusing on the usage of new words, you will need to use a large corpus - a million words or so - but the size of the unit you are studying is tiny: just a single word. The word frequencies can easily be compared using software such as Wordsmith. At the other extreme, a literary scholar might be studying the influence of one writer on another. The unit might be a whole play, but the number of units might be quite small - perhaps the 38 plays of Shakespeare compared with the 7 plays of Marlowe. If the unit is a whole play, and the focus is the literary style, a lot of human judgement will be needed. Though the total size of the corpus could be much the same as in the previous example, far more work is needed when the content unit is large - because detailed judgements will have to be made to summarize each play.
Dealing with several units at once

Often, some units overlap other units. For example, if you ask viewers of a TV program what they like most about it, some will give one response, and others may give a dozen. Is your unit the person or the response? (Our experience: it's best to keep track of both types of unit, because you won't know till later whether using one type of unit will produce a different pattern of responses.)
