
IELTS (pronounced /ˈaɪ.ɛlts/), or 'International English Language Testing System', is an
international standardised test of English language proficiency. It is jointly managed by
University of Cambridge ESOL Examinations, the British Council and IDP Education
Australia, and was established in 1989.

There are two versions of the IELTS: the Academic Version and the General Training
Version:

• The Academic Version is intended for those who want to enrol in universities and
other institutions of higher education, and for professionals such as medical doctors
and nurses who want to study or practise in an English-speaking country.
• The General Training Version is intended for those planning to undertake non-
academic training or to gain work experience, or for immigration purposes.

It is generally acknowledged that the reading and writing tests for the Academic Version are
more difficult than those for the General Training Version, due to the differences in the level
of intellectual and academic rigour between the two versions.

IELTS is accepted by most Australian, British, Canadian, Irish, New Zealand and South
African academic institutions, over 2,000 academic institutions in the United States, and
various professional organisations. It is also a requirement for immigration to Australia and
Canada.

An IELTS result or Test Report Form (TRF - see below) is valid for two years.

In 2007, IELTS tested over a million candidates in a single 12-month period for the first time
ever, making it the world's most popular English language test for higher education and
immigration.[1]

Contents

• 1 IELTS characteristics
• 2 IELTS test structure
• 3 Band scale
o 3.1 9 Expert User
o 3.2 8 Very Good User
o 3.3 7 Good User
o 3.4 6 Competent User
o 3.5 5 Modest User
o 3.6 4 Limited User
o 3.7 3 Extremely Limited User
o 3.8 2 Intermittent User
o 3.9 1 Non User
o 3.10 0 Did not attempt the test
• 4 Conversion table
• 5 Locations and test dates
• 6 Global test scores
o 6.1 Countries with highest averages
o 6.2 Results by first language of candidate
• 7 IELTS level required by academic institutions for admission
o 7.1 United States
o 7.2 United Kingdom
o 7.3 Germany
o 7.4 Hong Kong
• 8 See also
• 9 References

• 10 External links

IELTS characteristics


The IELTS incorporates the following features:

• A variety of accents and writing styles presented in text materials, in order to minimise
linguistic bias.
• Testing of the ability to listen, read, write and speak in English.
• Band scores for each language sub-skill (Listening, Reading, Writing and
Speaking). The Band Scale ranges from 0 ("Did not attempt the test") to 9 ("Expert
User").
• A speaking module, a key component of IELTS, conducted as a one-to-one interview
with an examiner. The examiner assesses the candidate as he or she speaks, but the
session is also recorded for monitoring and for re-marking in the event of an appeal
against the band given.
• IELTS is developed with input from item writers from around the world. Teams are
located in the USA, Great Britain, Australia, New Zealand, Canada and other English
speaking nations.

IELTS test structure


All candidates must complete four Modules - Listening, Reading, Writing and Speaking - to
obtain a Band, which is shown on an IELTS Test Report Form (TRF). All candidates take the
same Listening and Speaking Modules, while the Reading and Writing Modules differ
depending on whether the candidate is taking the Academic or General Training Versions
of the Test.

The Listening, Reading and Writing modules together take around 2 hours and 45 minutes.

• Listening: 40 minutes, comprising 30 minutes during which a recording is played centrally
and an additional 10 minutes for transferring answers onto the OMR answer sheet.
• Reading: 60 minutes.
• Writing: 60 minutes.

(N.B.: No additional time is given for transfer of answers in Reading and Writing modules)
The first three modules - Listening, Reading and Writing (always in that order) - are
completed in one day, and in fact are taken with no break in between. The Speaking Module
may be taken, at the discretion of the test centre, in the period seven days before or after the
other Modules.

The tests are designed to cover the full range of ability from non-user to expert user.

Band scale


IELTS is scored on a nine-band scale, with each band corresponding to a specified
competence in English. Overall Band Scores are reported to the nearest whole or half band.

For the avoidance of doubt, the following rounding convention applies: if the average across
the four skills ends in .25, it is rounded up to the next half band, and if it ends in .75, it is
rounded up to the next whole band.
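This rounding convention can be sketched as a small function. The name and signature below are illustrative only, not an official IELTS tool:

```python
import math

def overall_band(listening, reading, writing, speaking):
    # Average the four skill bands, then round to the nearest half band;
    # ties at .25 and .75 round upward, per the convention above.
    avg = (listening + reading + writing + speaking) / 4
    return math.floor(avg * 2 + 0.5) / 2

# An average of 6.25 (e.g. 6.5, 6.5, 5.0, 7.0) rounds up to 6.5
print(overall_band(6.5, 6.5, 5.0, 7.0))  # 6.5
```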

The nine bands are described as follows:

9 Expert User

Has full command of the language: appropriate, accurate and fluent with complete
understanding. It is very hard to attain this score.

8 Very Good User

Has fully operational command of the language with only occasional unsystematic
inaccuracies and inappropriacies. Handles complex detailed argumentation well.

7 Good User

Has operational command of the language, though with occasional inaccuracies,
inappropriacies and misunderstandings in some situations. Generally handles complex
language well and understands detailed reasoning.

6 Competent User

Has generally effective command of the language despite some inaccuracies, inappropriacies
and misunderstandings. Can use and understand fairly complex language, particularly in
familiar situations.

5 Modest User

Has a partial command of the language, coping with overall meaning in most situations,
though is likely to make many mistakes. The candidate should be able to handle
communication in his or her own field.

4 Limited User


Basic competence is limited to familiar situations. Has frequent problems in using complex
language.

3 Extremely Limited User

Conveys and understands only general meaning in very familiar situations. Frequent
breakdowns in communication occur.

2 Intermittent User

No real communication is possible except for the most basic information using isolated words
or short formulae in familiar situations and to meet immediate needs.

1 Non User

Essentially has no ability to use the language beyond possibly a few isolated words.

0 Did not attempt the test

No assessable information provided.

Conversion table


This table can be used for the Listening & Reading tests to convert raw scores to band scores.
This chart is a guide only, because sometimes the scores adjust slightly depending on how
difficult the exam is.

Band score   Raw score
9.0          39–40
8.5          37–38
8.0          35–36
7.5          32–34
7.0          29–31
6.5          26–28
6.0          22–25
5.5          18–21
5.0          15–17
4.5          12–14
4.0          10–11
3.5          8–9
3.0          6–7
2.5          4–5
2.0          3
1.5          2
1.0          1
0.0          0
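As a rough illustration, the guide table above can be expressed as a lookup. The cut-offs below mirror the table but, as noted, real thresholds shift slightly between sittings; the names are invented for the sketch:

```python
# Guide cut-offs for a raw Listening/Reading score out of 40:
# (minimum raw score, band), from highest band down.
BAND_CUTOFFS = [
    (39, 9.0), (37, 8.5), (35, 8.0), (32, 7.5), (29, 7.0), (26, 6.5),
    (22, 6.0), (18, 5.5), (15, 5.0), (12, 4.5), (10, 4.0), (8, 3.5),
    (6, 3.0), (4, 2.5), (3, 2.0), (2, 1.5), (1, 1.0), (0, 0.0),
]

def raw_to_band(raw):
    # Return the first band whose minimum the raw score meets.
    for minimum, band in BAND_CUTOFFS:
        if raw >= minimum:
            return band

print(raw_to_band(30))  # 7.0
```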

Locations and test dates


The test is taken every year in 500 locations across 121 countries, and is one of the fastest
growing English language tests in the world. The number of candidates has grown from about
80,000 in 1999 to over 1,000,000 in 2007.[1]

The top three locations in which candidates took the test in 2007 were:

• Academic Category
1. China
2. India
3. Pakistan

There are up to 48 test dates available per year. Each test centre offers tests up to four times a
month depending on local demand. There used to be a minimum time limit of 90 days before
which a person could not sit for the test again, but this restriction was recently lifted.

Global test scores

Countries with highest averages

In 2007, the countries with the highest average scores for the Academic Strand of the IELTS
test were:[1]

1. United Kingdom
2. United States
3. Germany
4. Russia
5. Spain

Results by first language of candidate

The top 5 language-speaking (or nationality) groups that achieved the best results in 2007 for
the Academic Strand of the IELTS test were: [1]

1. Tagalog
2. Spanish
3. Hindi
4. Malay
5. Tamil

IELTS level required by academic institutions for admission

Just over half (51%) of candidates take the test to enter higher education in a foreign
country.[1]
The IELTS minimum scores required by academic institutions vary. As a general rule,
institutions from English-speaking countries require a higher IELTS band.

United States

The highest IELTS Band required by a university is 8.5[2], by the Graduate School of
Journalism at Columbia University, the only US institution to require this band.

While Ohio State University's Moritz College of Law is listed as requiring an 8.5 on the
IELTS website [1], the school itself lists an 8.0 [2].

For MIT, the minimum score required is 7.

United Kingdom

The highest IELTS Band required is 8[2], by the Master of Science degree in Marketing at the
University of Warwick.

Most IELTS requirements by universities fall between 5.5 and 7.0. For example:

University                  Minimum IELTS score
Oxford University           7.0[3]
Cambridge University        7.0[4]
Glasgow University          6.5 (General) / 7.0 (Faculty of Arts & Humanities)[5]
University College London   6.5/7.0/7.5 (depends on UCL's individual faculty/department requirement)
Imperial College London     6.5 (7.0 for the Life Sciences Department and the Imperial Business School)
Exeter University           6.5
Liverpool University        6.0[6]
Birmingham University       6.0
Essex University            5.5

Germany

Stuttgart University requires an IELTS minimum of 6.0.

Hong Kong

The Law Society of Hong Kong requires applicants to achieve a minimum score of 7.0 for
entry into the Postgraduate Certificate in Laws course, taught at University of Hong Kong,
Chinese University of Hong Kong and City University of Hong Kong.

Graduate Record Examination



The Graduate Record Examination, or GRE, is a commercially run standardized test that is an
admission requirement for many graduate schools principally in the United States, but also in
other English speaking countries. Created and administered by Educational Testing Service
(or ETS) in 1949,[1] the exam is primarily focused on testing abstract thinking skills in the
areas of mathematics, vocabulary, and analytical writing. The GRE is typically a computer-
based exam that is administered by select qualified testing centers; however, paper-based
exams are offered in areas of the world that lack the technological requirements.

In the graduate school admissions process, the level of emphasis that is placed upon GRE
scores varies widely between schools and even departments within schools. The importance
of a GRE score can range from being an important selection factor to being a mere admission
formality.

Critics of the GRE have argued that the exam format is so rigid that it effectively tests only
how well a student can conform to a standardized test taking procedure.[2] ETS responded by
announcing plans in 2006 to radically redesign the test structure starting in the fall of 2007;
however, the company has since announced, "Plans for launching an entirely new test all at
once were dropped, and ETS decided to introduce new question types and improvements
gradually over time." The new questions have been gradually introduced since November
2007.[3]

In the United States and Canada, the cost of the general test is $190 as of September 2009,
although ETS will reduce the fee under certain circumstances; it promotes financial aid to
those GRE applicants who demonstrate economic hardship [4]. ETS erases all test records that
are older than 5 years, although graduate program policies on the acceptance of scores older
than 5 years vary.

Contents

• 1 Structure
o 1.1 Analytical writing section
 1.1.1 Issue task
 1.1.2 Argument task
o 1.2 Verbal section
o 1.3 Quantitative section
o 1.4 Experimental section
• 2 Scoring
o 2.1 Computerized adaptive testing
o 2.2 Scaled score percentiles
• 3 Use in admissions
• 4 GRE Subject Tests
• 5 GRE and GMAT
• 6 Preparation
• 7 Testing locations
• 8 Validity
• 9 Criticism
o 9.1 Bias
o 9.2 Weak predictor of graduate school performance
o 9.3 Historical susceptibility to cheating
• 10 Plans for the revised GRE
• 11 GRE prior to October 2002
• 12 References
• 13 See also

• 14 External links

Structure

The exam consists of four sections. The first is a writing section, while the other three are
multiple-choice. One multiple-choice section tests verbal skills, another tests quantitative
skills, and a third is an experimental section that is not included in the reported score.
Test takers do not know which of the three multiple-choice sections is the experimental
section. The entire test procedure takes about 4 hours.[5]

Analytical writing section

The analytical writing section consists of two different essays, an "issue task" and an
"argument task". The writing section is graded on a scale of 0-6, in half-point increments.
The essays are written on a computer using a word processing program specifically designed
by ETS. The program allows only basic computer functions and does not contain a spell-
checker or other advanced features. Each essay is scored by at least two readers on a six-point
holistic scale. If the two scores are within one point, the average of the scores is taken. If the
two scores differ by more than a point, a third reader examines the response.
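The two-reader rule can be sketched as follows. This is an illustrative convention, not ETS's actual interface; returning None here simply signals that a third reading is needed:

```python
def score_essay(first, second):
    # Each essay gets two independent holistic scores on the 0-6 scale
    # (half-point steps). If they are within one point of each other, the
    # essay score is their average; otherwise a third reader adjudicates.
    if abs(first - second) <= 1.0:
        return (first + second) / 2
    return None  # more than a point apart: third reader required

print(score_essay(4.5, 5.0))  # 4.75 -- within a point, so averaged
print(score_essay(3.0, 5.0))  # None -- a third reader examines the response
```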

Issue task

The test taker chooses between two topics upon which to write an essay. The time allowed
for this essay is 45 minutes.[6]

Argument task

The test taker is given an "argument" and asked to write an essay critiquing it. Test takers
are asked to consider the argument's logic and to make suggestions about how to improve it.
The time allotted for this essay is 30 minutes.[7]

Verbal section

One graded multiple-choice section is always a verbal section, consisting of analogies,
antonyms, sentence completions, and reading comprehension passages. Multiple-choice
response sections are graded on a scale of 200-800, in 10-point increments. This section
primarily tests vocabulary, and average scores in this section are substantially lower than
those in the quantitative section.[8] In a typical examination, this section may consist of 30
questions, and 30 minutes may be allotted to complete the section.[9]

Quantitative section

The quantitative section, the other multiple-choice section, consists of problem solving and
quantitative comparison questions that test high-school level mathematics. Multiple-choice
response sections are graded on a scale of 200-800, in 10-point increments. In a typical
examination, this section may consist of 28 questions, and test takers may be given 45
minutes to complete the section.[10]

Experimental section

The experimental section, which can be either a verbal, quantitative, or analytical writing
task, contains new questions that ETS is considering for future use. Although the
experimental section does not count toward the test-taker's score, it is unidentified and
appears identical to the real (scored) part of the test. As test takers have no clear way of
knowing which section is experimental, they are forced to complete this section, or risk
seriously damaging their final scores.[11]

If the experimental section appears as an analytical writing task, an "issue"-type question
will not offer a choice between two topics. This, coupled with the fact that the true
analytical writing section is always given first, can help the test taker deduce which
section is experimental and thus give that section less importance.[citation needed]

Scoring

Computerized adaptive testing

The common (Verbal and Quantitative) multiple-choice portions of the exam currently use
computer-adaptive testing (CAT) methods that automatically change the difficulty of
questions as the test taker proceeds with the exam, depending on the number of correct or
incorrect answers that are given. The test taker is not allowed to go back and change the
answers to previous questions, and some type of answer must be given before the next
question is presented.

The first question that is given in a multiple-choice section is considered to be an "average
level" question that half of the GRE test takers will answer correctly. If the question is
answered correctly, then subsequent questions become more difficult. If the question is
answered incorrectly, then subsequent questions become easier, until a question is answered
correctly.[12] This approach to administration yields scores that are of similar accuracy while
using approximately half as many items.[13] However, this effect is moderated with the GRE
because it has a fixed length; true CATs are variable length, where the test will stop itself
once it has zeroed in on a candidate's ability level.
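The adaptive rule described above can be sketched as a toy difficulty update. Real CAT systems select items via item response theory rather than a fixed step; the scale and step size here are invented for illustration:

```python
def next_difficulty(current, answered_correctly, step=1):
    # Difficulty level of the next question (1 = easiest, 10 = hardest),
    # moving up after a correct answer and down after an incorrect one,
    # clamped to the available range.
    if answered_correctly:
        return min(current + step, 10)
    return max(current - step, 1)

level = 5  # the first question is of "average" difficulty
for correct in [True, True, False, True]:
    level = next_difficulty(level, correct)
print(level)  # 7 (the path was 5 -> 6 -> 7 -> 6 -> 7)
```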

The actual scoring of the test is done with item response theory (IRT). While CAT is
associated with IRT, IRT is actually used to score non-CAT exams. The GRE subject tests,
which are administered in the traditional paper-and-pencil format, use the same IRT scoring
algorithm. The difference that CAT provides is that items are dynamically selected so that the
test taker only sees items of appropriate difficulty. Besides the psychometric benefits, this
has the added benefit of not wasting the examinee's time by administering items that are far
too hard or easy, as occurs in fixed-form testing.

An examinee can miss one or more questions on a multiple-choice section and still receive a
perfect score of 800. Likewise, even if no question is answered correctly, 200 is the lowest
score possible.[14]

Scaled score percentiles

The percentiles of the current test are as follows:[15]

Scaled score   Verbal Reasoning %   Quantitative Reasoning %
800            99                   94
780            99                   90
760            99                   86
740            99                   82
720            98                   77
700            97                   72
680            96                   68
660            94                   63
640            91                   58
620            89                   53
600            85                   49
580            81                   44
560            76                   40
540            71                   35
520            65                   31
500            60                   28
480            55                   24
460            49                   21
440            43                   18
420            37                   15
400            31                   13
380            26                   11
360            21                   9
340            15                   7
320            10                   5
300            6                    4
280            3                    3
260            1                    2
240            1                    1
220            0                    1
200            0                    0
mean           465                  584

Analytical Writing score   %
6                          96
5.5                        88
5                          77
4.5                        54
4                          33
3.5                        18
3                          7
2.5                        2
2                          1
1.5                        0
1                          0
0.5                        0
mean                       4.1
Comparisons for "Intended Graduate Major" are "limited to those who earned their college
degrees up to two years prior to the test date." ETS provides no score data for "non-
traditional" students who have been out of school more than two years, although its own
report "RR-99-16" indicated that 22% of all test takers in 1996 were over the age of 30.

Use in admissions


Many graduate schools in English-speaking countries (especially in the United States) require
GRE results as part of the admissions process. The GRE is a standardized test intended to
measure the abilities of all graduates in tasks of general academic nature, regardless of their
fields of specialization. The GRE is supposed to measure the extent to which undergraduate
education has developed an individual's verbal and quantitative skills in abstract thinking.

Unlike other standardized admissions tests (such as the SAT, LSAT, and MCAT), the use and
weight of GRE scores vary considerably not only from school to school, but from department
to department, and from program to program. Programs in liberal arts topics may only
consider the applicant's verbal score to be of interest, while mathematics and science
programs may only consider quantitative ability; however, since most applicants to
mathematics, science, or engineering graduate programs all have high quantitative scores, the
verbal score can become a deciding factor even in these programs. Admission to graduate
schools depends on a complex mix of several different factors, including letters of
recommendation, statements of purpose, GPA and GRE scores.[16] Some schools use the GRE
in admissions decisions, but not in funding decisions; others use the GRE for the selection of
scholarship and fellowship candidates, but not for admissions. In some cases, the GRE may
be a general requirement for graduate admissions imposed by the university, while particular
departments may not consider the scores at all. Graduate schools will typically provide
information about how the GRE is considered in admissions and funding decisions, and the
average scores of previously admitted students. The best way to find out how a particular
school or program evaluates a GRE score in the admissions process is to contact the person in
charge of graduate admissions for the specific program in question (and not the graduate
school in general).

Programs that involve significant expository writing require the submission of a prepared
writing sample that is considered more useful in determining writing ability than the
analytical writing section; however, the writing scores of foreign students are sometimes
given more scrutiny and are used as an indicator of overall comfort with and mastery of
conversational English.

GRE Subject Tests


In addition to the General Test, there are also eight GRE Subject Tests testing knowledge in
the specific areas of Biochemistry, Cell and Molecular Biology, Biology, Chemistry,
Computer Science, Literature in English, Mathematics, Physics, and Psychology. In the past,
subject tests were also offered in the areas of Economics, Education, Engineering,
Geology, History, Music, Political Science, and Sociology. In April 1998, the
Education and Political Science exams were discontinued. In April 2000, the History and
Sociology exams were discontinued, and the other four were discontinued in April 2001.[4]

GRE and GMAT

GMAT (The Graduate Management Admission Test) is a computer adaptive standardized test
in mathematics and the English language for measuring aptitude to succeed academically in
graduate business studies. Business schools commonly use the test as one of many selection
criteria for admission into an MBA program. However, there are many business schools that
also accept GRE scores.

The following are criteria of certain business schools:

• U Penn-Wharton School: Official test scores for the GMAT or GRE tests.
• Stanford: Finance - The GRE is preferred, although the GMAT will be
accepted.
• NYU-Stern School: The GMAT is strongly preferred, but scores from the
Graduate Record Examination (GRE) will also be accepted.
• U Chicago: For Economics - the GRE is required. For Finance - the GRE is
preferred; GMAT is acceptable. For all other areas - the GRE or the GMAT
are accepted.
• Berkeley-Haas: Without exception, all applicants to the Haas Ph.D.
Program must submit official scores of either the Graduate Management
Admission Test (GMAT) or the Graduate Record Examination (GRE).

In comparison with the GMAT's emphasis on logic, the GRE measures ability more through
vocabulary. This difference is reflected in the structure of each test. Beyond the Analytical
Writing section they share, the GRE has analogies, antonyms, sentence completions, and
reading comprehension passages in its Verbal section, while the GMAT has sentence correction,
critical reasoning and reading comprehension.

Also, higher mathematical ability is required in GMAT to get a good score. The GRE is more
appealing to international MBA students and applicants from a non-traditional background.[17]

Preparation

A variety of resources are available for those wishing to prepare for the GRE. Upon
registration, ETS provides preparation software called PowerPrep, which contains two
practice tests of retired questions, as well as further practice questions and review material.
Since the software replicates both the test format and the questions used, it can be useful
for predicting actual GRE scores. ETS does not license its past questions to any other
company, making them the only source for official retired material. ETS used to publish the
"BIG BOOK" which contained a number of actual GRE questions; however, this publishing
was abandoned. Several companies provide courses, books and other unofficial preparation
materials.

ETS has claimed that the content of the GRE is "un-coachable"; however, many test preparation
companies, such as Kaplan, Princeton Review, IMS Learning Resources and VISU, claim that the
test format is so rigid that familiarizing oneself with the test's organization, timing, specific
foci, and the use of process of elimination is the best way to increase a GRE score.[18]

Testing locations


While the general and subject tests are held at many undergraduate institutions, the computer-
based general test is only held at test centers with appropriate technological accommodations.
Students in major cities in the United States, or those attending large U.S. universities, will
usually find a nearby test center, while those in more isolated areas may have to travel a few
hours to an urban or university location. Many industrialized countries also have test centers,
but at times test-takers must cross country borders.

Validity

A meta-analysis of the GRE's validity in predicting graduate school success found a
correlation of .30 to .45 between the GRE and both first year and overall graduate GPA. The
correlation between GRE score and graduate school completion rates ranged from .11 (for the
now defunct analytical section) to .39 (for the GRE subject test). Correlations with faculty
ratings ranged from .35 to .50.[19]

Criticism

Test takers complain about the strict test center rules. For instance, test takers may not use
pens or bring their own scrap paper. Paper and pencils are provided at the testing center. Food
and drink are prohibited in the test centers, as well as chewing gum. Personal items such as
jackets and hats are subject to inspection. However, such rules apply to all high-stakes
tests, not just the GRE.

Bias

Critics have claimed that the computer-adaptive methodology may discourage some test
takers, because the question difficulty changes with performance.[citation needed] For example, if
the test-taker is presented with remarkably easy questions half way into the exam, they may
infer that they are not performing well, which will influence their abilities as the exam
continues, even though perceived question difficulty is subjective. By contrast, standard
testing methods may discourage students by giving them more difficult items earlier on.

Critics have also stated that the computer-adaptive method of placing more weight on the first
several questions is biased against test takers who typically perform poorly at the beginning
of a test due to stress or confusion before becoming more comfortable as the exam
continues.[20] Of course, standard fixed-form tests could equally be said to be "biased"
against students with less testing stamina, since they would need to be approximately twice
the length of an equivalent computer-adaptive test to obtain a similar level of precision.[21]

The GRE has also been subjected to the same racial bias criticisms that have been lodged
against other admissions tests. In 1998, the Journal of Blacks in Higher Education noted that
the mean score for black test-takers in 1996 was 389 on the verbal section, 409 on the
quantitative section, and 423 on the analytic, while white test-takers averaged 496, 538, and
564, respectively.[22] Note that simple mean score differences do not constitute evidence of
bias unless the populations are known to be equal in ability, and insisting that group score
difference are direct evidence of a bad test is an extreme position.[23] A more effective,
accepted, and empirical approach is the analysis of differential test functioning, which
examines the differences in item response theory curves for subgroups; the best approach for
this is the DFIT framework.[24]

There is also a bias towards those students who have the financial resources to take privately
owned test-taking classes. These classes do typically result in better scores;[citation needed]
however, many such companies and tutors focus solely on how to use the test's format to
one's advantage, and not how to actually learn the material on the exam.

Weak predictor of graduate school performance

The GRE is criticized for not being a true measure of whether a student will be successful
in graduate school. Robert Sternberg of Tufts University claimed that the GRE general test
was weakly predictive of success in graduate studies in psychology.[citation needed] The claim of
weak predictability might be related to the mathematics portion of the GRE general test
because a good foundation of mathematics is important in understanding advanced statistics.
However, in some branches of psychology, the application of statistics is minimal.

Empirical research demonstrates validity, however. The ETS published a report ("What is the
Value of the GRE?") that points out the predictive value of the GRE on a student's index of
success at the graduate level.[25] As mentioned earlier, the validity coefficients range from .30
to .45 between the GRE and both first year and overall graduate GPA.[26]

Historical susceptibility to cheating

In May 1994, Kaplan, Inc. warned ETS, in hearings before a New York legislative
committee, that the small question pool available to the computer-adaptive test made it
vulnerable to cheating. ETS assured investigators that it was using multiple sets of questions
and that the test was secure. This was later discovered to be incorrect.[27]

In December 1994, prompted by student reports of recycled questions, Jose Ferreira, then
Director of GRE Programs for Kaplan, Inc. and now CEO of Knewton, led a team of 22 staff
members deployed to 9 U.S. cities to take the exam. Kaplan, Inc. then presented ETS
with 150 questions, representing 70-80% of the GRE.[28] According to early news releases,
ETS appeared grateful to Stanley H. Kaplan, Inc. for identifying the security problem.
However, on December 31, ETS sued Kaplan, Inc. for violating a federal electronic
communications privacy act and copyright law, for breach of contract and fraud, and for
breaching a confidentiality agreement signed by test-takers on test day.[29] On January 2,
1995, an agreement was reached out of court.

Additionally, in 1994, the scoring algorithm for the computer-adaptive form of the GRE was
discovered to be insecure. ETS acknowledged that Kaplan, Inc employees, led by Jose
Ferreira, reverse-engineered key features of the GRE scoring algorithms. The researchers
found that a test taker’s performance on the first few questions of the exam had a
disproportionate effect on the test taker’s final score. To preserve the integrity of scores, ETS
revised its scoring and uses a more sophisticated scoring algorithm.

Plans for the revised GRE


In 2006, ETS announced plans to enact significant changes in the format of the GRE. Planned
changes for the revised GRE included a longer testing time, a departure from computer-
adaptive testing, a new grading scale, and an enhanced focus on reasoning skills and critical
thinking for both the quantitative and qualitative sections.[30]

On April 2, 2007, ETS announced the decision to cancel plans for revising the GRE.[31] The
announcement cited concerns over the ability to provide clear and equal access to the new
test after the planned change as an explanation for the cancellation. They did state, however,
that they do plan "to implement many of the planned test content improvements in the
future", although exact details regarding those changes have not yet been announced.

Changes to the GRE took effect on November 1, 2007, as ETS started to include new types of
questions in the exam. The changes mostly center on "fill in the blank" type answers for both
the mathematics and vocabulary sections that require the test-taker to fill in the blank
directly, without being able to choose from a multiple choice list of answers. ETS currently
plans to introduce two of these new types of questions in each quantitative or vocabulary
section, while the majority of questions will be presented in the regular format.[32]

In January 2008, the Reading Comprehension portion of the verbal sections was
reformatted: passages' "line numbers will be replaced with highlighting when necessary in
order to focus the test taker on specific information in the passage" to "help students more
easily find the pertinent information in reading passages."[33]

[edit] GRE prior to October 2002


Prior to October 2002, the GRE had a separate Analytical Ability section which tested
candidates on logical and analytical reasoning abilities. This section has now been replaced
by the Analytical Writing portion.

TOEFL

From Wikipedia, the free encyclopedia

Jump to: navigation, search

The Test of English as a Foreign Language (or TOEFL, pronounced "toe-full") evaluates
the ability of an individual to use and understand English in an academic setting. It is an
admission requirement for non-native English speakers at many English-speaking colleges
and universities.

Additionally, institutions such as government agencies, licensing bodies, businesses, or
scholarship programs may require this test. A TOEFL score is valid for two years and then
will no longer be officially reported since a candidate's language proficiency could have
significantly changed since the date of the test. Colleges and universities usually consider
only the most recent TOEFL score.
The TOEFL test is a registered trademark of Educational Testing Service (ETS) and is
administered worldwide. The test was first administered in 1964 and has since been taken by
more than 23 million students. The test was originally developed at the Center for Applied
Linguistics, led by the linguist Dr. Charles A. Ferguson.[1]

Policies governing the TOEFL program are formulated with advice from a 16-member board.
Board members are affiliated with undergraduate and graduate schools, 2-year institutions
and public or private agencies with an interest in international education. Other members are
specialists in the field of English as a foreign or second language.

The TOEFL Committee of Examiners is composed of 12 specialists in linguistics, language
testing, teaching or research. Its main responsibility is to advise on TOEFL test content. The
committee helps ensure the test is a valid measure of English language proficiency reflecting
current trends and methodologies.

Contents
[hide]

• 1 Formats and contents
o 1.1 Internet-Based Test
o 1.2 Paper-Based Test
• 2 Test Scores
o 2.1 Internet-Based Test
o 2.2 Paper-Based Test
• 3 Registration
• 4 References
• 5 Further reading
• 6 See also

• 7 External links

[edit] Formats and contents


[edit] Internet-Based Test

Since its introduction in late 2005, the Internet-Based test (iBT) has progressively replaced
both the computer-based (CBT) and paper-based (PBT) tests, although paper-based testing is
still used in select areas. The iBT has been introduced in phases, with the United States,
Canada, France, Germany, and Italy in 2005 and the rest of the world in 2006, with test
centers added regularly. The CBT was discontinued in September 2006 and these scores are
no longer valid.

Although demand for test seats was initially higher than availability and candidates had
to wait for months, it is now possible to take the test within one to four weeks in most
countries.[2] The four-hour test consists of four sections, each measuring one of the basic
language skills (while some tasks require integrating multiple skills) and all tasks focus on
language used in an academic, higher-education environment. Note-taking is allowed during
the iBT. The test cannot be taken more than once a week.
1. Reading

The Reading section consists of 3–5 passages, each approximately 700
words in length, and questions about the passages. The passages are on
academic topics; they are the kind of material that might be found in an
undergraduate university textbook. Passages require understanding of
rhetorical functions such as cause-effect, compare-contrast and
argumentation. Students answer questions about main ideas, details,
inferences, essential information, sentence insertion, vocabulary,
rhetorical purpose and overall ideas. New types of questions in the iBT
require filling out tables or completing summaries. Prior knowledge of the
subject under discussion is not necessary to come to the correct answer.

2. Listening

The Listening section consists of 6 passages, 3–5 minutes in length, and
questions about the passages. These passages include 2 student
conversations and 4 academic lectures or discussions. A conversation
involves 2 speakers, a student and either a professor or a campus service
provider. A lecture is a self-contained portion of an academic lecture,
which may involve student participation and does not assume specialized
background knowledge in the subject area. Each conversation and lecture
stimulus is heard only once. Test takers may take notes while they listen
and they may refer to their notes when they answer the questions. Each
conversation is associated with 5 questions and each lecture with 6. The
questions are meant to measure the ability to understand main ideas,
important details, implications, relationships between ideas, organization
of information, speaker purpose and speaker attitude.

3. Speaking

The Speaking section consists of 6 tasks: 2 independent tasks and 4
integrated tasks. In the 2 independent tasks, test takers answer opinion
questions on familiar topics. They are evaluated on their ability to speak
spontaneously and convey their ideas clearly and coherently. In 2 of the
integrated tasks, test takers read a short passage, listen to an academic
course lecture or a conversation about campus life and answer a question
by combining appropriate information from the text and the talk. In the 2
remaining integrated tasks, test takers listen to an academic course
lecture or a conversation about campus life and then respond to a
question about what they heard. In the integrated tasks, test takers are
evaluated on their ability to appropriately synthesize and effectively
convey information from the reading and listening material. Test takers
may take notes as they read and listen and may use their notes to help
prepare their responses. Test takers are given a short preparation time
before they have to begin speaking.

4. Writing
The Writing section measures a test taker's ability to write in an academic
setting and consists of 2 tasks, 1 integrated task and 1 independent task.
In the integrated task, test takers read a passage on an academic topic
and then listen to a speaker discuss the same topic. The test taker will
then write a summary about the important points in the listening passage
and explain how these relate to the key points of the reading passage. In
the independent task, test takers must write an essay that states, explains
and supports their opinion on an issue, supporting their opinions or
choices, rather than simply listing personal preferences or choices.
Task        iBT                                            Approx. time

READING     3–5 passages, each containing 12–14 questions  60–100 minutes

LISTENING   6–9 passages, each containing 5–6 questions    60–90 minutes

BREAK       —                                              10 minutes

SPEAKING    6 tasks and 6 questions                        20 minutes

WRITING     2 tasks and 2 questions                        55 minutes

One of the sections of the test will include extra, uncounted material.
Educational Testing Service includes this material in order to pilot test questions for future
test forms. When test-takers are given a longer section, they should give equal effort to all of
the questions, because they do not know which questions will count and which will be
considered extra. For example, if there are four reading passages instead of three, then three
of those passages will count and one of the passages will not be counted. Any of the four
passages could be the uncounted one.

[edit] Paper-Based Test

In areas where the iBT is not available, a paper-based test (PBT) is given. Test takers must
register in advance either online or by using the registration form provided in the
Supplemental Paper TOEFL Bulletin. They should register in advance of the given deadlines
to ensure a place because the test centers have limited seating and may fill up early. Tests are
administered on fixed dates 6 times each year.

The test is 3 hours long and all test sections can be taken on the same day. Students can take
the test as many times as they wish. However, colleges and universities usually consider only
the most recent score.

1. Listening (30–40 minutes)

The Listening section consists of 3 parts. The first contains 30
questions about short conversations. The second part has 8 questions
about longer conversations. The last part asks 12 questions about lectures
or talks.

2. Structure and Written Expression (25 minutes)

The Structure and Written Expression section has 15 exercises on
completing sentences correctly and 25 exercises on identifying errors.

3. Reading Comprehension (55 minutes)

The Reading Comprehension section has 50 questions about reading
passages.

4. Writing (30 minutes)

The Writing section consists of one essay of 250–300 words on average.

[edit] Test Scores


[edit] Internet-Based Test

• The iBT version of the TOEFL test is scored on a scale of 0 to 120 points.
• Each of the four sections (Reading, Listening, Speaking, and Writing)
receives a scaled score from 0 to 30. The scaled scores from the four
sections are added together to determine the total score.
• Speaking is initially given a score of 0 to 4, and writing is initially given a
score of 0 to 5. These scores are converted to scaled scores of 0 to 30.
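As a worked example of the arithmetic in these bullets, the iBT total is simply the sum of the four scaled section scores (the function name and range check below are illustrative, not part of any ETS tooling):

```python
def ibt_total(reading, listening, speaking, writing):
    # Each section score is already scaled to 0-30; the iBT
    # total is their sum, giving the 0-120 range described above.
    sections = (reading, listening, speaking, writing)
    if any(not 0 <= s <= 30 for s in sections):
        raise ValueError("each scaled section score must be between 0 and 30")
    return sum(sections)

# e.g. ibt_total(24, 26, 22, 25) returns 97
```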

[edit] Paper-Based Test

• The final PBT score ranges between 310 and 677 and is based on three
subscores: Listening (31–68), Structure (31–68), and Reading (31–67).
Unlike the CBT, the score of the Writing section (referred to as the Test of
Written English, TWE) is not part of the final score; instead, it is reported
separately on a scale of 0–6.
• The score test takers receive on the Listening, Structure and Reading
parts of the TOEFL test is not the percentage of correct answers. The score
is converted to take into account the fact that some tests are more
difficult than others. The converted scores correct these differences.
Therefore, the converted score is a more accurate reflection of the ability
than the correct answer score is.

Most colleges use TOEFL scores as only one factor in their admission process. A sampling of
required TOEFL admissions scores suggests that, on average, a total score of 74.2 is required
for undergraduate admissions and 82.6 for graduate admissions. It is recommended that
students check with their prospective institutions directly to understand TOEFL admissions
requirements.

ETS has released tables to convert between iBT, CBT and PBT scores.
[edit] Registration
• The first step in the registration process is to obtain a copy of the TOEFL
Information Bulletin. This bulletin can be obtained by downloading it or
ordering it from the TOEFL website.
• From the bulletin, it is possible to determine when and where the iBT
version of the TOEFL test will be given.
• Procedures for completing the registration form and submitting it are
listed in the TOEFL Information Bulletin. These procedures must be
followed exactly.

Graduate Management Admission Test

From Wikipedia, the free encyclopedia

Jump to: navigation, search

"GMAT" redirects here. For other uses, see GMAT (disambiguation).


The Graduate Management Admission Test (GMAT, pronounced G-mat, [dʒiː.mæt]) is a
computer adaptive standardized test in mathematics and the English language for measuring
aptitude to succeed academically in graduate business studies. Business schools commonly
use the test as one of many selection criteria for admission into an MBA program. It is
delivered via computer at various locations around the world. In those international locations
where an extensive network of computers has not yet been established, the GMAT is offered
either at temporary computer-based testing centers on a limited schedule or as a paper-based
test (given once or twice a year) at local testing centers. As of August 2009, the fee to take
the test is U.S. $250 worldwide.[1]

Contents
[hide]

• 1 The Test
o 1.1 Analytical Writing Assessment
o 1.2 Quantitative Section
 1.2.1 Problem Solving
 1.2.2 Data Sufficiency
o 1.3 Verbal Section
 1.3.1 Sentence Correction
 1.3.2 Critical Reasoning
 1.3.3 Reading Comprehension
• 2 Total Score
• 3 Required Scores
• 4 History of the Graduate Management Admission
Test
• 5 Registration and preparation
• 6 Other notes
• 7 See also
• 8 References

• 9 External links

[edit] The Test


The exam measures verbal, mathematical, and analytical writing skills that the examinee has
developed over a long period of time in his/her education and work. Test takers answer
questions in each of the three tested areas, and there are also two optional breaks;[2] in
general, the test takes about four hours to complete.

Scores are valid for five years (at most institutions) from the date the test taker sits for the
exam until the date of matriculation (i.e. acceptance, not until the date of application).

The maximum score that can be achieved on the exam is 800. Over the past 3 years, the mean
score has been 535.2.[3]

The Analytical Writing Assessment (AWA) section is answered first, followed by the
Quantitative section and then the Verbal section.
[edit] Analytical Writing Assessment

The Analytical Writing Assessment (AWA) section of the test consists of two essays. In the
first, the student must analyze an argument; in the second, the student must analyze an
issue. Each essay must be written within 30 minutes and is scored on a scale of 0–6. Each
essay is read by two readers, who mark it with a grade from 0–6 in 0.5-point increments; the
mean score is 4.1. If the two scores are within one point of each other, they are averaged. If
they differ by more than one point, the essay is read by a third reader.[4]
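The two-reader rule just described can be sketched as follows. This is a hypothetical illustration of the published rule, not ETS's or GMAC's actual software, and it deliberately leaves the third-reader resolution unspecified because the text does not describe it:

```python
def awa_combine(score1, score2):
    # Each reader grades 0-6 in 0.5-point increments. If the two
    # grades are within one point of each other they are averaged;
    # otherwise the essay is escalated to a third reader (returned
    # here as None, since the resolution rule is not specified).
    if abs(score1 - score2) <= 1.0:
        return (score1 + score2) / 2
    return None  # needs a third reading

# e.g. awa_combine(4.0, 4.5) returns 4.25;
# awa_combine(3.0, 5.0) returns None (third reader required)
```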

The first reader is Intellimetric, a proprietary computer program developed by Vantage
Learning, which analyzes more than 50 linguistic and structural features of the writing.[5]
The second and third readers are humans, who evaluate the quality of the examinee's ideas
and his or her ability to organize, develop, and express ideas with relevant support. While
mastery of the conventions of written English factors into scoring, minor errors are expected,
and evaluators are trained to be sensitive to examinees whose first language is not
English.[4]

Each of the two essays in the Analytical Writing part of the test is graded on a scale of 0 (the
minimum) to 6 (the maximum):

• 0 An essay that is totally illegible or obviously not written on the assigned
topic.
• 1 An essay that is fundamentally deficient.
• 2 An essay that is seriously flawed.
• 3 An essay that is seriously limited.
• 4 An essay that is merely adequate.
• 5 An essay that is strong.
• 6 An essay that is outstanding.

[edit] Quantitative Section

The quantitative section consists of 37 multiple choice questions, which must be answered
within 75 minutes. There are two types of questions: problem solving and data sufficiency.
The quantitative section is scored from 0 to 60 points. Over the past 3 years, the mean score
has been 35.6/60; scores above 50 and below 7 are rare.[3][6]

[edit] Problem Solving

This question type tests quantitative reasoning ability. Problem-solving questions present
multiple-choice problems in arithmetic, basic algebra, and elementary geometry. The task is to solve
the problems and choose the correct answer from among five answer choices. Some problems
will be plain mathematical calculations; the rest will be presented as real life word problems
that will require mathematical solutions.

Numbers: All numbers used are real numbers.

Figures: The diagrams and figures that accompany these questions are for
the purpose of providing useful information in answering the questions.
Unless it is stated that a specific figure is not drawn to scale, the diagrams
and figures are drawn as accurately as possible. All figures are in a plane
unless otherwise indicated.

[edit] Data Sufficiency

This question type tests quantitative reasoning ability using an unusual set of directions. The examinee
is given a question with two associated statements that provide information that might be
useful in answering the question. The examinee must then determine whether either statement
alone is sufficient to answer the question; whether both are needed to answer the question; or
whether there is not enough information given to answer the question.

Data sufficiency is a unique type of math question created especially for the GMAT. Each
item consists of the question itself followed by two numbered statements.

(A) Statement 1 alone is sufficient to answer the question, but statement
2 alone is not sufficient.

(B) Statement 2 alone is sufficient to answer the question, but statement
1 alone is not sufficient.

(C) Both statements together are needed to answer the question, but
neither statement alone is sufficient.

(D) Either statement by itself is sufficient to answer the question.

(E) Not enough facts are given to answer the question.

Perhaps the easiest way to fully internalize the scope of these questions is to replace the word
“is” with the words “must be” – the questions are not asking whether an answer is possible,
but rather, whether it "must" be the case.
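The five answer choices reduce to a small decision procedure. A minimal sketch (the function and argument names are illustrative; the inputs encode whether each statement, or the two combined, suffices to produce a definite answer):

```python
def data_sufficiency_choice(s1_alone, s2_alone, together):
    # Map sufficiency facts to the GMAT answer letters (A)-(E).
    if s1_alone and s2_alone:
        return "D"   # either statement by itself suffices
    if s1_alone:
        return "A"   # only statement 1 suffices alone
    if s2_alone:
        return "B"   # only statement 2 suffices alone
    if together:
        return "C"   # both are needed; neither suffices alone
    return "E"       # even together, not enough information
```

Note that if either statement alone is sufficient, the combined statements are necessarily sufficient too, so the `together` flag only matters when both `s1_alone` and `s2_alone` are false.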

[edit] Verbal Section

The verbal section consists of 41 multiple choice questions, which must be answered within
75 minutes. There are three types of questions: sentence correction, critical reasoning, and
reading comprehension. The verbal section is scored from 0 to 60 points. Over the past 3
years, the mean has been 27.8/60; scores above 44 and below 9 are rare.[3][6]

[edit] Sentence Correction

The Sentence Correction section tests a test taker's knowledge of American English grammar,
usage, and style.

Sentence correction items consist of a sentence, all or part of which has been underlined, with
five associated answer choices listed below the sentence. The first answer choice is exactly
the same as the underlined portion of the sentence. The remaining four answer choices
contain different phrasings of the underlined portion of the sentence. The test taker is
instructed to choose the first answer choice if there is no flaw with that phrasing of the
sentence. If there is a flaw with the original phrasing of the sentence, the test taker is
instructed to choose the best of the four remaining answer choices.[7]
Sentence Correction questions are designed to measure a test taker's proficiency in three
areas: correct expression, effective expression, and proper diction.[7] Correct expression
refers to the grammar and structure of the sentence. Effective expression refers to the clarity
and concision used to express the idea. Proper diction refers to the suitability and accuracy
of the chosen words in reference to the dictionary meaning of the words and the context in
which the words are presented.[7]

[edit] Critical Reasoning

This question type tests logical thinking. Critical reasoning items present brief statements or
arguments that the test taker is asked to analyze and evaluate. Questions may ask test takers
to draw a conclusion, to identify assumptions, or to recognize strengths or weaknesses in the
argument. For some questions, all of the answer choices may conceivably be answers to the
question asked. The examinee should select the best answer to the question, that is, an answer
that does not require making assumptions that violate common-sense standards by being
implausible, redundant, irrelevant, or inconsistent.

[edit] Reading Comprehension

This question type tests the ability to read critically. Reading comprehension questions
relate to a passage that is provided for the examinee to read. The passage can be about
almost anything, and the questions test how well the examinee understands its substance and
logical structure. The GMAT uses reading passages of approximately 200 to 350 words,
covering topics from the social sciences, biological sciences, physical sciences, and business.
Each passage has three or more questions based on its content. The questions ask about the
main point of the passage, what the author specifically states, what can be logically inferred
from the passage, and the author's attitude or tone.

[edit] Total Score


The "Total Score", composed of the quantitative and verbal sections and exclusive of the
analytical writing assessment (AWA), ranges from 200 to 800. The score distribution
resembles a bell curve with a standard deviation of approximately 100 points and a median
originally designed to be near 500, meaning that the test is designed for about two-thirds
(68%) of examinees to score between 400 and 600. The 2005/2006 mean score was 533.[8]
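The 68% figure follows directly from the bell-curve model (mean 500, standard deviation 100). A quick check using the Gaussian cumulative distribution function:

```python
from math import erf, sqrt

def fraction_between(lo, hi, mean=500.0, sd=100.0):
    # Area under a normal curve between two scores, computed with
    # the closed-form CDF based on the error function.
    cdf = lambda x: 0.5 * (1.0 + erf((x - mean) / (sd * sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# fraction_between(400, 600) is about 0.683, i.e. roughly
# two-thirds of scores fall within one standard deviation of 500.
```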

The quantitative and verbal sections comprise a computer-adaptive test. The first question
may be difficult. The next few questions in each section may be around the 500 level. If the
examinee answers correctly, the next questions are harder. If the examinee answers
incorrectly, the next questions are easier. The questions are pulled from a large pool of
questions and delivered depending on the student's running score. These questions are
regularly updated to prevent them from being compromised by students recording questions.

The final score is not based solely on the last question the examinee answers (i.e., the level
of difficulty of questions reached through the computer-adaptive presentation of questions).
The algorithm used to build a score is more complicated than that. The examinee can make a
careless mistake and answer incorrectly, and the computer will recognize that item as an
anomaly. If the examinee misses the first question, his or her score will not necessarily fall in
the bottom half of the range.
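A toy illustration of the adaptive idea described above (this is emphatically not ETS's or GMAC's scoring algorithm, whose details are proprietary; it only shows why early questions tend to move the estimate most):

```python
def adaptive_estimate(answers, start=500.0, step=150.0):
    # Raise the running estimate after each correct answer and
    # lower it after each incorrect one, halving the step as the
    # test proceeds -- so early questions move the estimate more
    # than later ones.
    estimate = start
    for correct in answers:
        estimate += step if correct else -step
        step /= 2.0
    return max(200, min(800, round(estimate)))
```

In this sketch an early miss can still be partially recovered by later correct answers, echoing the point that missing the first question does not doom the score to the bottom half of the range.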

Also, questions left blank (that is, those not reached) hurt the examinee more than questions
answered incorrectly. This is a major contrast to the SAT, which has a wrong-answer penalty.
Each test section also includes several experimental questions, which do not count toward the
examinee's score, but are included to judge the appropriateness of the item for future
administrations.

Verbal and Quantitative Section scores range from 0 to 60. Analytical Writing Assessment
scores range from 0 to 6 and represent the average of the ratings from the two GMAT essays.
The essays are scored differently from the Verbal and Quantitative sections and are not
included in the total score.

All scores and cancellations from the past five years appear on a student's score report, a
change from the previous policy of keeping only the last three scores and cancellations on
the report.[citation needed]

[edit] Required Scores


Most schools do not publish a minimum acceptable score or detailed statistics about the
scores achieved by applicants. However, schools do generally publish the average and
median score of their latest intake, which can be used as a guide.

The average score for nearly all of the top business schools, as commonly listed in popular
magazines and ranking services, is in the upper 600s or low 700s.

It may be possible to overcome a low test score with impressive real world accomplishments,
good undergraduate performance, outstanding references and/or connections, particularly
strong application essays, or coming from an underrepresented group.

[edit] History of the Graduate Management Admission Test

In 1953, the organization now called the Graduate Management Admission Council (GMAC)
began as an association of nine business schools, whose goal was to develop a standardized
test to help business schools select qualified applicants. In the first year it was offered, the
assessment (now known as the Graduate Management Admission Test), was taken just over
2,000 times; in recent years, it has been taken more than 200,000 times annually. Initially
used in admissions by 54 schools, the test is now used by more than 1,500 schools and 1,800
programs worldwide.

GMAC has administered the exam since 2006. On January 1, 2006, GMAC transitioned
vendors to a combination of ACT Inc, which develops the test questions and CAT software,
and Pearson Vue, which delivers the exam at testing centers worldwide.
On June 23, 2008, GMAC acknowledged a cheating scandal involving some 6,000
prospective MBA students who subscribed to the website ScoreTop.com and may have
viewed "live" questions in use on the GMAT. GMAC announced severe measures that
include invalidating the scores of subscribers, notifying schools that received their scores,
and banning them from future tests. On June 27, GMAC reassured applicants that only those
who knowingly cheated using Scoretop's website would be affected.[9] The Wall Street
Journal later reported that the scores of 84 test takers were canceled in the wake of the
scandal.[10]

Also, in response to cases of "proxy" or "ringer" test-taking, where students pay somebody
else to take the test on their behalf, GMAC is introducing Fujitsu PalmSecure palm vein
scanning technology at testing centers. Centers in Korea and India will get the palm scanning
devices first, followed by the United States in the fall of 2008. GMAC plans to have them
integrated at all testing centers by May 2009.[11]

GMAC has announced plans for a Next Generation GMAT set to launch in 2013.
International differences will be taken into consideration more strongly. [12]

[edit] Registration and preparation


The test taker can register in either of the following two ways:

• Online at mba.com by credit card
• By calling one of the test centers listed on mba.com

To schedule a test, an appointment must be made at one of the designated test centers.

Third-party companies have different test preparation options available, which may include
self-study using GMAT books, classroom GMAT preparation courses (live or online), or
private tutoring.