Textual difficulty


Textual difficulty is a measure of how easy or hard a text is to read. Research has shown that two main factors affect the ease with which texts are read.[1]

  1. How difficult the words are: this is lexical difficulty. Rare words are less well known than common words. Rare, difficult words are often longer than common, easy words.
  2. How difficult the sentences are: this is syntactical difficulty. Long, complicated sentences cause more difficulty than short, simple sentences.

Readability predictions

A readability test is a way to measure how easy a text is to read. Readability tests give a prediction of how difficult readers will find a particular text. They do this by measuring one or both of the two main causes, as follows:

Word difficulty

Word difficulty is usually measured by vocabulary lists or word length.

Vocabulary lists

Several vocabulary lists have been published by researchers. These lists are based on samples of published texts in English, and (less often) samples of recorded spoken language. The lists differ slightly according to the sources chosen, but they are very reliable.[2][3][4] The items listed may represent more than one actual word; they are lemmas. For instance, the entry "be" contains within it the occurrences of "is", "was", "be" and "are".[5] The top 100 lemmas account for 50% of all the words in the Oxford English Corpus.[6]

The Reading Teacher's Book of Lists claims that the first 25 words make up about one-third of all printed material in English, and that the first 100 make up about one-half of all written material.[7]
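The idea of a lemma can be shown with a short sketch. The lemma map below is a tiny, made-up example (real frequency lists cover thousands of entries): inflected forms are counted under their dictionary headword.

```python
from collections import Counter

# Toy lemma map (hypothetical, tiny): each inflected form points to
# its dictionary headword, as published frequency lists do.
LEMMA = {"is": "be", "was": "be", "are": "be", "be": "be",
         "dogs": "dog", "dog": "dog"}

def lemma_counts(text):
    """Count words by lemma: 'is' and 'are' both count towards 'be'."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    return Counter(LEMMA.get(w, w) for w in words)

counts = lemma_counts("The dog is here. The dogs are here.")
print(counts["be"])   # "is" + "are" counted together -> 2
print(counts["dog"])  # "dog" + "dogs" counted together -> 2
```

This is why a single lemma near the top of a list can account for so many of the words actually seen in a text.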

One of the first readability tests, the Dale–Chall formula, used a vocabulary list. It counted the number of listed words in a passage, and applied a formula which gave a grade level. It was used to rate textbooks for grade levels in US school districts.

It is easy, in principle, to use a vocabulary list as part of a computer-based readability measure. The list is organised as a look-up table. The percentage of listed words in a passage gives the data for the formula, and the user is presented with a grade level.
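A minimal sketch of such a look-up measure is shown below. The word list and the passage are made up for illustration; a real list such as Dale–Chall's contains about 3,000 words, and the real formula turns the percentage into a grade level.

```python
# Hypothetical miniature "easy word" list; a real list is far longer.
EASY_WORDS = {"the", "a", "is", "was", "dog", "ran", "to", "school",
              "and", "big"}

def percent_unlisted(text):
    """Return the percentage of words NOT found on the easy-word list.
    This percentage is the raw input to a grade-level formula."""
    words = [w.strip(".,;:!?").lower() for w in text.split()]
    words = [w for w in words if w]
    unlisted = sum(1 for w in words if w not in EASY_WORDS)
    return 100.0 * unlisted / len(words)

print(percent_unlisted("The big dog ran to school."))  # 0.0: all listed
```

The list works exactly as a look-up table: membership is checked word by word, and only the resulting percentage is fed into the formula.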

Word length

Word length is used as an index, or proxy, for word difficulty.[8] This works because word length is correlated with word frequency, and word frequency is correlated with word difficulty: longer words are, on average, harder than short words.

Word length is measured by counting the letters in each word, or by counting syllables. Since most syllables have one vowel, some computer programs count vowels per average word. A few tests measure the percentage of words on a list; the list is based on the known frequency of words in a language.
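Both measures can be sketched in a few lines. Counting vowel groups is the rough syllable-counting method the text describes; it is an approximation, not an exact syllable count.

```python
import re

def count_syllables(word):
    """Rough syllable count: the number of vowel groups in the word.
    This is the vowel-counting shortcut many programs use."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))  # every word counts as at least 1 syllable

def avg_word_length(text):
    """Average letters per word and average syllables per word."""
    words = [w.strip(".,;:!?") for w in text.split() if w.strip(".,;:!?")]
    letters = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    return letters / len(words), syllables / len(words)

print(count_syllables("readability"))  # vowel groups e, a, a, i, i-y -> 5
```

Words like "readability" score high on both letters and syllables, which is exactly what makes them useful proxies for difficulty.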

Sentence difficulty

Sentence difficulty is usually measured by sentence length. This again is an index, because longer sentences are, on average, harder than short sentences. Computers count the number of words between full stops, but this is a second-best method. Humans can judge whether a semi-colon or colon should count as the end of a sentence for testing purposes.
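The machine method described above can be sketched as follows. Splitting only on full stops, question marks and exclamation marks is the "second-best" automatic approach; the optional flag stands in for a human rater's judgement about semi-colons and colons.

```python
import re

def avg_sentence_length(text, count_semicolons=False):
    """Average words per sentence. By default, split only at . ! ?
    (the machine method); optionally also treat ; and : as sentence
    ends, as a human rater might decide to."""
    enders = r"[.!?;:]" if count_semicolons else r"[.!?]"
    sentences = [s for s in re.split(enders, text) if s.strip()]
    total_words = sum(len(s.split()) for s in sentences)
    return total_words / len(sentences)

print(avg_sentence_length("Dogs bark. Cats sleep a lot."))  # (2 + 4) / 2 = 3.0
```

Note how the choice of sentence enders changes the score, which is why human judgement can improve on the raw machine count.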

Since the two factors may vary independently of each other, the best prediction is gained by devising a formula which makes use of both indices. A single score is produced for a text, and that score is looked up on a table or graph. That shows how difficult the text is in terms of either a) an American school grade level, or b) an artificial scale of 0% to 100%. Either scale works; what really makes a difference is that methods using both indices are more reliable than methods using only one index.[1]
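A well-known example of a formula using both indices is the Flesch Reading Ease score, which combines average sentence length with average syllables per word into one number (higher means easier). The sketch below uses the rough vowel-group syllable count from above, so its scores are only approximate.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*ASL - 84.6*ASW,
    where ASL = average sentence length (words per sentence)
    and ASW = average syllables per word. Higher = easier."""
    sentences = [s for s in re.split(r"[.!?]", text) if s.strip()]
    words = [w.strip(".,;:!?") for w in text.split() if w.strip(".,;:!?")]
    # Rough syllable count: vowel groups, minimum 1 per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    asl = len(words) / len(sentences)
    asw = syllables / len(words)
    return 206.835 - 1.015 * asl - 84.6 * asw

score = flesch_reading_ease("The cat sat on the mat. The dog ran to the park.")
print(round(score, 1))  # very short, one-syllable sentences score very high
```

Because the formula penalises both long sentences and long words, it captures the two causes of difficulty in a single score.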

Direct measurement

It is possible to get a good prediction by getting a group of subjects to read through a passage, followed by multiple-choice questions. Even better is a method called cloze, where subjects fill in blanks on a text they have not seen before. The percentage of correctly completed blanks is an outstandingly good predictor of text difficulty.[9]
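The scoring side of a cloze test is simple to sketch. The passage below is made up; the classic procedure blanks every nth word and scores the percentage of blanks filled in exactly.

```python
def make_cloze(words, n=5):
    """Blank out every nth word; return the gapped passage and the
    hidden answers (one common way of building a cloze test)."""
    blanks = [w for i, w in enumerate(words) if i % n == n - 1]
    passage = ["_____" if i % n == n - 1 else w
               for i, w in enumerate(words)]
    return passage, blanks

def cloze_score(answers, blanks):
    """Percentage of blanks the subject filled in exactly."""
    correct = sum(1 for a, b in zip(answers, blanks)
                  if a.lower() == b.lower())
    return 100.0 * correct / len(blanks)

words = "the quick brown fox jumps over the lazy dog today".split()
passage, blanks = make_cloze(words, n=5)   # blanks: "jumps", "today"
print(cloze_score(["jumps", "tomorrow"], blanks))  # 1 of 2 correct -> 50.0
```

The scoring is mechanical; what makes the method expensive is everything around it: recruiting subjects, preparing passages, and running the sessions.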

Naturally, this kind of direct measure requires subjects, and a skilled experimenter. It also requires the prior preparation of texts suitable for the chosen sample of subjects. The method is therefore too expensive for widespread use.

Types of tests

A person can perform readability tests themselves, by counting and doing some math, or by using word-processing software.

Tests on subjects

  • Multiple-choice questions
  • Cloze test

Tests on texts

Two-variable formulae

One-variable formulae

References

  1. 1.0 1.1 Klare G.R. 1963. The measurement of readability. Iowa State University Press, Ames IA.
  2. In this context, 'reliable' means something like: if the research was repeated, you would get a very similar result.
  3. 500 most common words: [1]
  4. Top 1000 words: [2]
  5. Benjamin Zimmer: Time after time after time... Language Log. Retrieved 22 June 2006.
  6. AskOxford.com: Language Facts. Retrieved 22 June 2006.
  7. First 100 words: [3]
  8. A proxy is one person or thing standing for another.
  9. Taylor W.L. 1953. Cloze procedure: a new tool for measuring readability. Journalism Quarterly, 30, 415-433.

Other websites

  • Is Wikipedia too difficult? comparative analysis of Wikipedia, Simple Wikipedia and Britannica. [4]
  • Developer Gnome
  • Writing Sample Analyzer, reports on the Flesch Reading Ease, Fog Scale Level, and Flesch–Kincaid Grade Level for a given piece of text.
  • Online Textual Difficulty Calculator - reports ARI, SMOG, Flesch–Kincaid Readability Test, Coleman–Liau Index, Gunning–Fog Index, etc.