Qno1). Solution:
What is good business research?
Good research is carefully planned and conducted, resulting in dependable data that managers can use to reduce decision-making risks. It follows the standards of the scientific method: it is systematic, clearly defined and planned, and based on empirical procedures. The following are the attributes of GOOD BUSINESS RESEARCH:
1. The PURPOSE is clearly defined. The problem involved should be clearly stated, indicating its scope and limitations, and the actual meaning of the terms involved should be explained.
2. The research PROCESS should be explained in detail. Significant procedural details should be described to permit another researcher to repeat the research. Each step, such as acquiring participants, sampling methods, and gathering procedures, should be revealed. If this information is omitted, it is difficult to estimate the reliability and validity of the data and of the research overall.
3. The research DESIGN should be thoroughly planned. The procedural design should be planned to yield results that are as objective as possible. Where possible, methods such as opinion surveys should be substituted with more reliable methods such as direct observation and data from documented sources. Every effort should be made to minimize the influence of personal bias during data collection and recording.
4. The LIMITATIONS of the research should be revealed. The researcher should report flaws in the procedures and methods used and how they might affect the data and findings. This is important because some imperfections in the research design and conduct may invalidate the results completely.
5. The data should be ADEQUATELY analyzed. The data should be classified and analyzed in a way that helps the researcher reach valuable conclusions.
6. FINDINGS should be presented unambiguously. The report should state the findings clearly and report them with maximum objectivity. Presentation of the data should be clear and precise, so that it can be reasonably interpreted and easily understood by the decision maker, and organized so that the manager is able to locate crucial data.
7. All CONCLUSIONS must be justified. Only those conclusions for which the data provide a solid basis should be included in the report. The researcher should avoid the mistake of broadening the basis of the conclusions with personal experience, and should not draw universal conclusions from a study that uses a limited population sample.
19. Security guards and gaming surveillance officers
20. Top executives
Qno2a). Solution:
The business research process entails learning everything possible about a company's customers, competitors and industry. The major objectives of the process are determining what products or services to offer, which customers are most likely to buy them, where to sell them, and how to price and promote them. Various steps in the business research process help a company achieve these objectives. Until the sixteenth century, human inquiry was primarily based on introspection: the way to know things was to turn inward and use logic to seek the truth. This paradigm had endured for a millennium and was a well-established conceptual framework for understanding the world. The seeker of knowledge was an integral part of the inquiry process.
Identifying Competitors
The first step is identifying key competitors in the industry. One way to garner information on the competition is through secondary research. Secondary research information is data that are already available about the industry, such as market share and total market sales. Secondary research may also provide detailed information about competitors, such as the number of employees, the products they sell and their key strengths. Secondary research can be obtained through various sources, depending on the industry. For example, the NPD Group uses its CREST analysis for restaurants, and Nielsen provides data about consumer packaged goods.
Studying Customers
The process continues with a study of the consumer or business customer. It is important to determine what the customer wants and needs before developing products to meet those needs. The consumer will usually dictate which products will sell. If consumers' needs are not met, they will usually buy competing products. The best way to determine customer needs is through primary research. Primary research includes phone surveys, personal interviews and even mail surveys. With these surveys, marketing research professionals will often test certain product concepts, measure customer satisfaction and determine the best features and prices for their products.
SWOT Analysis
Once detailed information on customers and the competition has been garnered, a SWOT analysis can be used to study the company's strengths, weaknesses, opportunities and threats. A strength may be the company's market share or a good reputation among customers, according to the SWOT Analysis entry at a popular business reference site. A weakness may be inexperienced management. Additionally, a company may have an opportunity to purchase another company. Threats may include new government regulation in the industry or a well-financed new competitor. A company uses the SWOT analysis to exploit its strengths via available opportunities. For example, a company with strong financial backing could purchase another company to increase its distribution and market share. A business can also minimize its weaknesses against potential threats, for example by hiring more experienced marketing people to deal with an increase in competition.
The research process usually begins by writing down the broad and general goals of the study. As the process continues, the goals become more clearly defined and the research issues are narrowed. Exploratory research (e.g., literature reviews, talking to people, and focus groups) goes hand-in-hand with the goal-clarification process. The literature review is especially important because it obviates the need to reinvent the wheel for every new research question. More importantly, it gives researchers the opportunity to build on each other's work. The research question itself can be stated as a hypothesis. A hypothesis is simply the investigator's belief about a problem. Typically, a researcher formulates an opinion during the literature review process. The process of reviewing other scholars' work often clarifies the theoretical issues associated with the research question. It can also help to elucidate the significance of the issues to the research community. The hypothesis is converted into a null hypothesis in order to make it testable: "The only way to test a hypothesis is to eliminate alternatives of the hypothesis" (Anderson, 1966, p. 9). Statistical techniques enable us to reject a null hypothesis, but they do not provide us with a way to accept a hypothesis. Therefore, all hypothesis testing is indirect.
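To make the indirectness of hypothesis testing concrete, here is a minimal Python sketch (not part of the original answer) that tests a null hypothesis of equal means using Welch's t statistic. The data and the critical value are hypothetical, chosen only for illustration:

```python
import statistics

# Hypothetical data: customer satisfaction scores under two store layouts.
layout_a = [7.1, 6.8, 7.4, 6.9, 7.3, 7.0, 6.7, 7.2, 7.5, 6.6]
layout_b = [6.2, 6.5, 6.0, 6.4, 6.1, 6.6, 5.9, 6.3, 6.2, 6.4]

def welch_t(x, y):
    """Welch's t statistic for the null hypothesis that both means are equal."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    vx, vy = statistics.variance(x), statistics.variance(y)
    return (mx - my) / ((vx / len(x) + vy / len(y)) ** 0.5)

t = welch_t(layout_a, layout_b)
# Compare |t| against a critical value (roughly 2.10 for ~18 df at alpha = .05).
# A large |t| lets us REJECT the null hypothesis; it never "proves" the
# alternative -- the test is indirect, as the text explains.
print(f"t = {t:.2f}, reject H0: {abs(t) > 2.10}")
```

Note that failing to reject the null hypothesis is not the same as accepting it; the test can only rule alternatives out.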
There are three basic methods of research: 1) survey, 2) observation, and 3) experiment (McDaniel and Gates, 1991). Each method has its advantages and disadvantages. The survey is the most common method of gathering information in the social sciences. It can be a face-to-face interview, telephone, or mail survey. A personal interview is one of the best methods of obtaining personal, detailed, or in-depth information. It usually involves a lengthy questionnaire that the interviewer fills out while asking questions. It allows for extensive probing by the interviewer and gives respondents the ability to elaborate on their answers. Telephone interviews are similar to face-to-face interviews. They are more efficient in terms of time and cost; however, they are limited in the amount of in-depth probing that can be accomplished and the amount of time that can be allocated to the interview. A mail survey is generally the most cost-effective interview method. The researcher can obtain opinions, but trying to meaningfully probe opinions is very difficult. Observation research monitors respondents' actions without directly interacting with them. It has been used for many years by A.C. Nielsen to monitor television viewing habits. Psychologists often use one-way mirrors to study behavior. Social scientists often study societal and group behaviors by simply observing them. The fastest-growing form of observation research has been made possible by the bar code scanners at cash registers, where the purchasing habits of consumers can now be automatically monitored and summarized. In an experiment, the investigator changes one or more variables over the course of the research. When all other variables are held constant (except the one being manipulated), changes in the dependent variable can be explained by the change in the independent variable. It is usually very difficult to control all the variables in the environment.
Therefore, experiments are generally restricted to laboratory models where the investigator has more control over all the variables.
Sampling
It is incumbent on the researcher to clearly define the target population. There are no strict rules to follow, and the researcher must rely on logic and judgment. The population is defined in keeping with the objectives of the study. Sometimes, the entire population will be sufficiently small, and the researcher can include the entire population in the study. This type of research is called a census study because data is gathered on every member of the population. Usually, the population is too large for the researcher to attempt to survey all of its members. A small, but carefully chosen sample can be used to represent the population. The sample reflects the characteristics of the population from which it is drawn.
Sampling methods are classified as either probability or nonprobability. In probability samples, each member of the population has a known probability of being selected. Probability methods include random sampling, systematic sampling, and stratified sampling. In nonprobability sampling, members are selected from the population in some nonrandom manner. These methods include convenience sampling, judgment sampling, quota sampling, and snowball sampling. Another common form of nonprobability sampling occurs by accident, when the researcher inadvertently introduces nonrandomness into the sample selection process. The advantage of probability sampling is that sampling error can be calculated. Sampling error is the degree to which a sample might differ from the population. When inferring to the population, results are reported plus or minus the sampling error. In nonprobability sampling, the degree to which the sample differs from the population remains unknown (McDaniel and Gates, 1991). Random sampling is the purest form of probability sampling: each member of the population has an equal chance of being selected. When there are very large populations, it is often difficult or impossible to identify every member of the population, so the pool of available subjects becomes biased. Random sampling is frequently used to select a specified number of records from a computer file. Systematic sampling is often used instead of random sampling. It is also called an Nth-name selection technique: after the required sample size has been calculated, every Nth record is selected from a list of population members. As long as the list does not contain any hidden order, this sampling method is as good as the random sampling method. Its only advantage over the random sampling technique is simplicity. Stratified sampling is a commonly used probability method that is superior to random sampling because it reduces sampling error.
A stratum is a subset of the population that shares at least one common characteristic. The researcher first identifies the relevant strata and their actual representation in the population. Random sampling is then used to select subjects from each stratum until the number of subjects in that stratum is proportional to its frequency in the population. Convenience sampling is used in exploratory research where the researcher is interested in getting an inexpensive approximation of the truth. As the name implies, the sample is selected because it is convenient. This nonprobability method is often used during preliminary research efforts to get a gross estimate of the results without incurring the cost or time required to select a random sample. Judgment sampling is a common nonprobability method. The researcher selects the sample based on judgment; this is usually an extension of convenience sampling. For example, a researcher may decide to draw the entire sample from one "representative" city, even though the population includes all cities. When using this method, the researcher must be confident that the chosen sample is truly representative of the entire population. Quota sampling is the nonprobability equivalent of stratified sampling. Like stratified sampling, the researcher first identifies the strata and their proportions as they are represented in the population.
Then convenience or judgment sampling is used to select the required number of subjects from each stratum. This differs from stratified sampling, where the strata are filled by random sampling. Snowball sampling is a special nonprobability method used when the desired sample characteristic is rare. It may be extremely difficult or cost-prohibitive to locate respondents in these situations. Snowball sampling relies on referrals from initial subjects to generate additional subjects. While this technique can dramatically lower search costs, it comes at the expense of introducing bias, because the technique itself reduces the likelihood that the sample will represent a good cross-section of the population.
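The probability methods described above can be sketched in a few lines of Python using only the standard library. The population, strata, and sample size here are invented for illustration:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible
population = [{"id": i, "region": random.choice(["north", "south"])}
              for i in range(1000)]
n = 50

# Simple random sample: every member has an equal chance of selection.
srs = random.sample(population, n)

# Systematic ("Nth name") sample: every k-th record after a random start.
k = len(population) // n
start = random.randrange(k)
systematic = population[start::k][:n]

# Stratified sample: fill each stratum in proportion to its frequency
# in the population, using random sampling within the stratum.
stratified = []
for region in ("north", "south"):
    stratum = [p for p in population if p["region"] == region]
    share = round(n * len(stratum) / len(population))
    stratified.extend(random.sample(stratum, share))

print(len(srs), len(systematic), len(stratified))
```

Because the per-stratum shares are rounded, a stratified sample can end up one subject above or below the target size; real designs handle that rounding explicitly.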
Data Collection
There are very few hard and fast rules to define the task of data collection. Each research project uses a data collection technique appropriate to the particular research methodology. The two primary goals for both quantitative and qualitative studies are to maximize response and maximize accuracy. When using an outside data collection service, researchers often validate the data collection process by contacting a percentage of the respondents to verify that they were actually interviewed. Data editing and cleaning involve checking for inadvertent errors in the data; this usually entails using a computer to check for out-of-bounds data. Quantitative studies employ deductive logic: the researcher starts with a hypothesis and then collects data to confirm or refute it. Qualitative studies use inductive logic: the researcher first designs a study and then develops a hypothesis or theory to explain the results of the analysis. Quantitative analysis is generally fast and inexpensive. A wide assortment of statistical techniques is available to the researcher, and computer software is readily available to provide both basic and advanced multivariate analysis. The researcher simply follows the preplanned analysis process, without making subjective decisions about the data. For this reason, quantitative studies are usually easier to execute than qualitative studies. Qualitative studies nearly always involve in-person interviews and are therefore very labor-intensive and costly. They rely heavily on a researcher's ability to exclude personal biases. The interpretation of qualitative data is often highly subjective, and different researchers can reach different conclusions from the same data. However, the goal of qualitative research is to develop a hypothesis, not to test one. Qualitative studies have merit in that they provide broad, general theories that can be examined in future research.
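The out-of-bounds check mentioned under data editing and cleaning can be sketched as follows; the field names, records, and bounds are hypothetical:

```python
# Hypothetical raw survey records: an age and a 1-5 satisfaction rating.
records = [
    {"id": 1, "age": 34, "rating": 4},
    {"id": 2, "age": 210, "rating": 5},   # age is out of bounds
    {"id": 3, "age": 28, "rating": 9},    # rating is out of bounds
    {"id": 4, "age": 45, "rating": 2},
]

# Allowed (inclusive) range for each field.
BOUNDS = {"age": (18, 99), "rating": (1, 5)}

def out_of_bounds(record):
    """Return the names of fields whose values fall outside the allowed range."""
    return [f for f, (lo, hi) in BOUNDS.items() if not lo <= record[f] <= hi]

# Flag records that need review before analysis.
flagged = {r["id"]: out_of_bounds(r) for r in records if out_of_bounds(r)}
print(flagged)  # records 2 and 3 are flagged
```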
Data Analysis
Modern computer software has made the analysis of quantitative data a very easy task. It is no longer incumbent on the researcher to know the formulas needed to calculate the desired statistics. However, this does not obviate the need for the researcher to understand the theoretical and conceptual foundations of the statistical techniques. Each statistical technique has its own assumptions and limitations. Considering the ease with which computers can calculate complex statistical problems, the danger is that the researcher might be unaware of the assumptions and limitations in the use and interpretation of a statistic.
Construct validity refers to the theoretical foundations underlying a particular scale or measurement. It looks at the underlying theories or constructs that explain a phenomenon. This is also quite subjective and depends heavily on the understanding, opinions, and biases of the researcher. Reliability is synonymous with repeatability. A measurement that yields consistent results over time is said to be reliable. When a measurement is prone to random error, it lacks reliability. The reliability of an instrument places an upper limit on its validity (Spector, 1981): a measurement that lacks reliability will necessarily be invalid. There are three basic methods to test reliability: test-retest, equivalent form, and internal consistency. A test-retest measure of reliability can be obtained by administering the same instrument to the same group of people at two different points in time. The degree to which both administrations are in agreement is a measure of the reliability of the instrument. This technique for assessing reliability suffers from two possible drawbacks. First, a person may have changed between the first and second measurement. Second, the initial administration of an instrument might in itself induce a person to answer differently on the second administration. The second method of determining reliability is called the equivalent-form technique. The researcher creates two different instruments designed to measure identical constructs; the degree of correlation between the instruments is a measure of equivalent-form reliability. The difficulty with this method is that it may be very difficult (and/or prohibitively expensive) to create a totally equivalent instrument. The most popular methods of estimating reliability use measures of internal consistency. When an instrument includes a series of questions designed to examine the same construct, the questions can be arbitrarily split into two groups.
The correlation between the two subsets of questions is called the split-half reliability. The problem is that this measure of reliability changes depending on how the questions are split. A better statistic, known as Cronbach's alpha (1951), is based on the mean (absolute value) inter-item correlation for all possible variable pairs. It provides a conservative estimate of reliability, and generally represents "the lower bound to the reliability of an unweighted scale of items" (Carmines and Zeller, 1979, p. 45). For dichotomous nominal data, the KR-20 (Kuder-Richardson, 1937) is used instead of Cronbach's alpha (McDaniel and Gates, 1991).
Systematic error can usually be traced to a fault in the sampling procedure or in the design of a questionnaire. Random error does not occur in any consistent pattern, and it is not controllable by the researcher.
Summary
Scientific research involves the formulation and testing of one or more hypotheses. A hypothesis cannot be proved directly, so a null hypothesis is established to give the researcher an indirect method of testing a theory. Sampling is necessary when the population is too large, or when the researcher is unable to investigate all members of the target group. Random and systematic sampling are the best methods because they guarantee that each member of the population has a known, non-zero chance of being selected. The mathematical reliability (repeatability) of a measurement, or group of measurements, can be calculated; however, validity can only be implied by the data, and it is not directly verifiable. Social science research is generally an attempt to explain or understand the variability in a group of people.
References:
Anderson, B. (1966). The Psychology Experiment: An Introduction to the Scientific Method. Belmont, CA: Wadsworth.
McDaniel, C., and R. Gates (1991). Contemporary Marketing Research. St. Paul, MN: West.
Carmines, E., and R. Zeller (1979). Reliability and Validity Assessment. Beverly Hills: Sage.
Spector, P. (1981). Research Design. Beverly Hills: Sage.
Willowick, D. (1993). StatPac Gold IV: Marketing Research and Survey Edition. Minneapolis, MN: StatPac, Inc.
Qno2b). Solution:
What is a research question?
A research question is a clear, focused, concise, complex and arguable question around which you center your research. You should ask a question about an issue that you are genuinely curious about.
Research Question:
A research question is the methodological point of departure of scholarly research in both the natural and social sciences. The research will attempt to answer the question posed. At an undergraduate level, the answer to the research question is the thesis statement.
IMPORTANCE
The research question is one of the first methodological steps the investigator has to take when undertaking research. The research question must be accurately and clearly defined. Choosing a research question is the central element of both quantitative and qualitative research, and in some cases it may precede construction of the conceptual framework of the study. In all cases, it makes the theoretical assumptions in the framework more explicit; above all, it indicates what the researcher wants to know first and most.
USES
The student or researcher then carries out the research necessary to answer the research question, whether this involves reading secondary sources over a few days for an undergraduate term paper or carrying out primary research over years for a major project. Once the research is complete and the researcher knows the (probable) answer to the research question, writing can begin. In term papers, the answer to the question is normally given in summary in the introduction in the form of a thesis statement.
Quantitative Study: A Quantitative study seeks to learn what, where, or when, so the writer's research must be directed at determining the what, where, or when of the research topic. Therefore, when crafting a Research Question for a Quantitative study, the writer will need to ask a what, where, or when question about the topic. For example: Where should the company market its new product? Unlike a Qualitative study, a Quantitative study is a mathematical analysis of the research topic, so the writer's research will consist of numbers and statistics. Here is Creswell's (2009) example of a script for a quantitative research question: Does _________ (name the theory) explain the relationship between _________ (independent variable) and _________ (dependent variable), controlling for the effects of _________ (control variable)? Alternatively, a script for a quantitative null hypothesis might be as follows: There is no significant difference between _________ (the control and experimental groups on the independent variable) on _________ (dependent variable). Quantitative studies also fall into two categories: (a) Correlational Studies and (b) Experimental Studies. A Quantitative-Correlational study is non-experimental, requiring the writer to research relationships without manipulating or randomly selecting the subjects of the research. The Research Question for a Quantitative-Correlational study may look like this: What is the relationship between long-distance commuting and eating disorders? A Quantitative-Experimental study is experimental in that it requires the writer to manipulate and randomly select the subjects of the research. The Research Question for a Quantitative-Experimental study may look like this: Does the consumption of fast food lead to eating disorders? Mixed Study: A Mixed study integrates both Qualitative and Quantitative studies, so the writer's research must be directed at determining the why or how and the what, where, or when of the research topic.
Therefore, the writer will need to craft a Research Question for each study required for the assignment. Note: a typical study may be expected to have between one and six Research Questions. Once the writer has determined the type of study to be used and the specific objectives the paper will address, the writer must also consider whether the Research Question passes the "so what" test, meaning that the writer must construct evidence to convince the audience that the research is expected to add new or useful knowledge to the literature.
Is your research question focused? Research questions must be specific enough to be well covered in the space available. (See flip side for examples of focused vs. unfocused research questions.) Is your research question complex? Research questions should not be answerable with a simple yes or no, or by easily found facts. They should, instead, require both research and analysis on the part of the writer. Hypothesize. After you've come up with a question, think about the path you expect the answer to take. Where do you think your research will take you? What kind of argument are you hoping to make or support? What will it mean if your research disputes your planned argument?
Sample Research Questions
Unclear: Why are social networking sites harmful?
Clear: How are online users experiencing or addressing privacy issues on such social networking sites as MySpace and Facebook? The unclear version of this question doesn't specify which social networking sites or suggest what kind of harm the sites are causing. It also assumes that this harm is proven and/or accepted. The clearer version specifies sites (MySpace and Facebook), the type of harm (privacy issues), and who the issue is harming (users). A strong research question should never leave room for ambiguity or interpretation. Unfocused: What is the effect on the environment from global warming? Focused: How is glacial melting affecting penguins in Antarctica? The unfocused research question is so broad that it couldn't be adequately answered in a book-length piece, let alone a standard college-level paper. The focused version narrows down to a specific cause (glacial melting), a specific place (Antarctica), and a specific group that is affected (penguins). When in doubt, make a research question as narrow and focused as possible. Too simple: How are doctors addressing diabetes in the U.S.? Appropriately complex: What are common traits of those suffering from diabetes in America, and how can these commonalities be used to aid the medical community in prevention of the disease? The simple version of this question can be looked up online and answered in a few factual sentences; it leaves no room for analysis. The more complex version is written in two parts; it is thought-provoking and requires both significant investigation and evaluation from the writer. As a general rule of thumb, if a quick Google search can answer a research question, it's likely not very effective.
Simulation
Data collection: should it be structured or unstructured?
Sample size: should it be large or small?
Quantitative or qualitative research?
Exploratory Study:
Loose structure
Expand understanding
Provide insight
Develop hypotheses
Formal Study:
Precise procedures
Begins with hypotheses
Answers research questions
Experience Surveys:
What is being done?
What has been tried in the past, with or without success?
How have things changed?
Who is involved in the decisions?
What problem areas can be seen?
Whom can we count on to assist or participate in the research?
Focus Groups:
Group discussion
6-10 participants
Moderator-led
90 minutes to 2 hours
Descriptive Studies
Experimental Effects
Ex Post Facto Study
After-the-fact report on what happened to the measured variable
Experiment
Study involving the manipulation or control of one or more variables to determine the effect on another variable
Causation:
The basic element in causal research is that A produces B, or A forces B to happen.
Sample
given quota.
Data collection: Qualitative research uses unstructured or semi-structured techniques, e.g. individual depth interviews or group discussions; quantitative research uses structured techniques such as online questionnaires or on-street or telephone interviews of respondents.
Data analysis: Qualitative analysis is non-statistical; quantitative analysis is statistical, with data usually in the form of tabulations (tabs).
Outcome: Qualitative findings are exploratory and/or investigative; they are not conclusive and cannot be used to make generalizations about the population of interest, but they develop an initial understanding and a sound base for further decision making. Quantitative findings are conclusive and usually descriptive in nature, and are used to recommend a final course of action.
For example, height may be reported to the nearest whole cm, and you do not report it on a continuous scale even though height (length) can be measured to great accuracy on a continuous scale.
Cluster sampling
The main difference between stratified and cluster sampling is that in stratified sampling all the strata need to be sampled. In cluster sampling, one proceeds by first selecting a number of clusters at random and then sampling within each cluster or conducting a census of each cluster. Usually, however, not all clusters are included.
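A two-stage cluster sample, as described above, might be sketched like this in Python; the cluster names and sizes are hypothetical:

```python
import random

random.seed(7)  # fixed seed so the illustration is reproducible
# Hypothetical population grouped into clusters (e.g., students by school).
clusters = {f"school_{i}": [f"s{i}_{j}" for j in range(30)] for i in range(20)}

# Stage 1: select a few clusters at random (unlike strata, NOT all of them).
chosen = random.sample(sorted(clusters), 4)

# Stage 2, option A: take a census of each chosen cluster...
census_sample = [s for c in chosen for s in clusters[c]]

# ...or option B: subsample within each chosen cluster.
subsample = [s for c in chosen for s in random.sample(clusters[c], 10)]

print(len(census_sample), len(subsample))
```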
P a g e | 28
Reliability can be thought of as the proportion of the variance in the measurement scores that is due to differences in the true scores rather than to random error. Please note that I have ignored systematic (nonrandom) error, optimistically assuming that it is zero or at least small. Systematic error arises when our instrument consistently measures something other than what it was designed to measure. For example, a test of political conservatism might mistakenly also measure personal stinginess. Also note that I can never know what the reliability of an instrument (a test) is, because I cannot know what the true scores are. I can, however, estimate reliability.
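The "proportion of variance" idea can be illustrated with toy numbers (assuming, as above, zero systematic error; the variances are invented):

```python
# Reliability as the proportion of observed-score variance attributable
# to true scores rather than to random measurement error.
var_true = 8.0    # variance of the (unknowable) true scores
var_error = 2.0   # variance of random measurement error
var_observed = var_true + var_error

reliability = var_true / var_observed
print(reliability)  # 0.8
```

In practice the true-score variance is unknown, which is exactly why reliability can only be estimated, never known.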
Validity
Simply put, the construct validity of an operationalization (a measurement or a manipulation) is the extent to which it really measures (or manipulates) what it claims to measure (or manipulate). When the dimension being measured is an abstract construct that is inferred from directly observable events, then we may speak of construct validity.
Face Validity. An operationalization has face validity when others agree that it looks like it does measure or manipulate the construct of interest. For example, if I tell you that I am manipulating my subjects' sexual arousal by having them drink a pint of isotonic saline solution, you would probably be skeptical. On the other hand, if I told you I was measuring my male subjects' sexual arousal by measuring erection of their penises, you would probably consider that measurement to have face validity. Content Validity. Assume that we can detail the entire population of behavior (or other things) that an operationalization is supposed to capture. Now consider our operationalization to be a sample taken from that population. Our operationalization will have content validity to the extent that the sample is representative of the population. To measure content validity, we can do our best to describe the population of interest and then ask experts (people who should know about the construct of interest) to judge how representative our sample is of that population. Criterion-Related Validity. Here we test the validity of our operationalization by seeing how it is related to other variables. Suppose that we have developed a test of statistics ability. We might employ the following types of criterion-related validity: Concurrent Validity. Are scores on our instrument strongly correlated with scores on concurrent variables (variables that are measured at the same time)? For our example, we should be able to show that students who have just finished a stats course score higher than those who have never taken one. Also, we should be able to show a strong correlation between scores on our test and students' current level of performance in a stats class. Predictive Validity. Can our instrument predict future performance on an activity that is related to the construct we are measuring?
For our example, is there a strong correlation between scores on our test and the subsequent performance of employees in an occupation that requires the use of statistics?

Convergent Validity. Is our instrument well correlated with measures of other constructs to which it should, theoretically, be related? For our example, we might expect scores on our test to be well correlated with tests of logical thinking, abstract reasoning, verbal ability, and, to a lesser extent, mathematical ability.

Discriminant Validity. Is our instrument uncorrelated with measures of other constructs to which it should not be related? For example, we might expect scores on our test not to be well correlated with tests of political conservatism, ethical ideology, love of Italian food, and so on.
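These correlational checks are straightforward to illustrate. The sketch below uses NumPy with made-up scores (all numbers are hypothetical, for illustration only): a high correlation with a theoretically related measure is evidence of convergent validity, while a near-zero correlation with an unrelated measure is evidence of discriminant validity.

```python
import numpy as np

# Hypothetical scores for eight examinees (illustrative data only)
stats_test = np.array([55, 60, 65, 70, 75, 80, 85, 90])   # our statistics test
logic_test = np.array([50, 58, 62, 71, 74, 79, 88, 92])   # related construct
italian_food = np.array([3, 9, 1, 7, 2, 8, 5, 4])         # unrelated construct

# Pearson correlations between our test and the other two measures
r_convergent = np.corrcoef(stats_test, logic_test)[0, 1]
r_discriminant = np.corrcoef(stats_test, italian_food)[0, 1]

print(f"convergent r = {r_convergent:.2f}")    # should be high
print(f"discriminant r = {r_discriminant:.2f}")  # should be near zero
```

With real data one would, of course, use actual test scores and report the correlations along with their sample sizes and confidence intervals.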
Item Analysis. If you believe your scale is one-dimensional, you will want to conduct an item analysis. Such an analysis will estimate the reliability of your instrument by measuring the internal consistency of the items, that is, the extent to which the items correlate well with one another. It will also help you identify troublesome items. To illustrate item analysis with SPSS, we shall conduct an item analysis on data from one of my past research projects. For each of 154 respondents we have scores on each of ten Likert items. The scale is intended to measure ethical idealism. People high on idealism believe that an action is unethical if it produces any bad consequences, regardless of how many good consequences it might also produce. People low on idealism believe that an action may be ethical if its good consequences outweigh its bad consequences. Bring the data (KJ-Idealism.sav) into SPSS.
Select all ten items and scoot them to the Items box on the right.
Check "Scale if item deleted" and then click Continue. Back on the initial window, click OK. Look at the output. The Cronbach alpha is .744, which is acceptable.

Reliability Statistics: Cronbach's Alpha = .744, N of Items = 10

[Item-Total Statistics table for items Q1 through Q10 omitted]
There are two items, numbers 7 and 10, which have rather low item-total correlations, and the alpha would go up if they were deleted, but not much, so I retained them. It is disturbing that item 7 did not perform better, since failure to do ethical cost/benefit analysis is an important part of the concept of ethical idealism. Perhaps the problem is that this item does not make it clear that we are talking about ethical cost/benefit analysis rather than other cost/benefit analysis. For example, a person might think it just fine to do a personal, financial cost/benefit analysis to decide whether to lease a car or buy a car, but immoral to weigh morally good consequences against morally bad consequences when deciding whether it is proper to keep horses for entertainment purposes (riding them). Somehow I need to find the time to do some more work on improving measurement of the ethical cost/benefit component of ethical idealism.
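When SPSS is not at hand, the statistics reported above can be computed directly. The sketch below is a generic NumPy implementation of Cronbach's alpha and the corrected item-total correlations, not SPSS's own code; the array shape (respondents in rows, items in columns) mirrors the KJ-Idealism data layout described in the text.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for a 2-D array: rows = respondents, columns = items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)        # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)    # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                     for j in range(items.shape[1])])
```

Items with low corrected item-total correlations (like items 7 and 10 above) are the ones whose removal would raise alpha.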
Insufficiency
An item is called insufficient if it is constructed in a way that overlooks necessary details. Insufficiency may arise from deficiency, from ambiguity (having several meanings), or from indefiniteness. A deficient item lacks information it ought to contain. An ambiguous item has more than one meaning and places no limitation on its subject. An indefinite item lacks the precision needed to yield a definite measurement.
Misunderstanding
Misunderstanding basically stems from the linguistic properties of the item. Several factors may cause an item to be misunderstood (Sencer & Sencer, 1978): i) Words outside the knowledge and experience of the respondents should not be used. ii) Words in the item should not carry multiple meanings.
Bias
Some items tend to elicit answers in one direction as a result of the way they are constructed. Such items, which do not give all possible answers an equal chance, are called biased items. The item types that cause bias can be listed as follows.
a) Directing Items
Items that influence the respondents and direct their answers toward one alternative make up this class.
b) Loaded Items
Loaded items carry a feeling or connotation about a given subject and by themselves invite approval or recall, or they contain popular sayings.
Objectivity in Scoring
The reliability of a scale is affected by whether the scoring is objective or the scorers are subjective. The consistency of scores obtained from the same or different scorers at different times is called the scale's scoring reliability. If the score obtained from a scale does not change with the person doing the scoring or the time of scoring, the scale's scoring reliability is high.
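A simple way to quantify scoring reliability is to correlate the scores that two independent scorers assign to the same set of responses. The sketch below uses NumPy with hypothetical scores (the numbers are made up for illustration); a correlation near 1 indicates high scoring reliability. More refined indices, such as Cohen's kappa or the intraclass correlation, are usually preferred in practice.

```python
import numpy as np

# Hypothetical scores assigned by two independent scorers
# to the same eight responses (illustrative data only)
scorer_a = np.array([78, 85, 62, 90, 71, 88, 66, 74])
scorer_b = np.array([80, 83, 60, 92, 70, 86, 68, 75])

# Pearson correlation between the two scorers' ratings
# serves as a rough index of scoring reliability
r = np.corrcoef(scorer_a, scorer_b)[0, 1]
print(f"inter-scorer r = {r:.2f}")
```

If the two scorers' rankings diverge, the scoring procedure should be made more objective, for example by tightening the scoring rubric.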