Readability by Zettlr

This page demonstrates four different readability algorithms, implemented using the Open Source Markdown editor Zettlr. Not all established algorithms have been implemented, as many rely on syllable counts, whereas these implementations aim to be language-agnostic. Therefore, only four algorithms are left: Dale-Chall, Gunning-Fog, Coleman-Liau, and the Automated Readability Index (ARI).
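For reference, the four formulas in their standard published form can be sketched as follows. This is a reconstruction from the textbook coefficients of the original publications, not Zettlr's source code; the language-agnostic variants described on this page replace the difficult-word measure that two of these formulas expect:

```typescript
// Textbook coefficients of the four readability formulas. Zettlr's
// language-agnostic variants redefine the difficult-word count (see below).

interface TextStats {
  chars: number;      // letter count
  words: number;      // word count
  sentences: number;  // sentence count
  difficult: number;  // count of "difficult"/"complex" words
}

// Automated Readability Index: characters per word, words per sentence
const ari = (s: TextStats): number =>
  4.71 * (s.chars / s.words) + 0.5 * (s.words / s.sentences) - 21.43;

// Coleman-Liau index: letters and sentences per 100 words
const colemanLiau = (s: TextStats): number =>
  0.0588 * (100 * s.chars / s.words) - 0.296 * (100 * s.sentences / s.words) - 15.8;

// Gunning-Fog index: sentence length plus share of complex words
const gunningFog = (s: TextStats): number =>
  0.4 * (s.words / s.sentences + 100 * s.difficult / s.words);

// Dale-Chall: percentage of difficult words plus average sentence length
const daleChall = (s: TextStats): number => {
  const pctDifficult = 100 * s.difficult / s.words;
  let score = 0.1579 * pctDifficult + 0.0496 * (s.words / s.sentences);
  if (pctDifficult > 5) score += 3.6365; // adjustment for harder texts
  return score;
};
```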


Two of these algorithms, Dale-Chall and Gunning-Fog, additionally depend upon a count of so-called "difficult" or "complex" words. Difficult words are usually defined by a dictionary shipped with the algorithm. The main problem with difficult words is therefore that they are not language-agnostic but, on the contrary, heavily biased towards English. Complex words, on the other hand, are defined as words of three or more syllables. This, too, is not language-agnostic: it would require the algorithms both to count the syllables of each word and to detect the language of the text. Both requirements are at odds with what Zettlr wants to offer: a fast preview that gives you a first measure of how readable your text is.

To account for these problems, this implementation of the aforementioned algorithms redefines both "complex words" and "difficult words". Zettlr relies on a finding by Coleman and Liau from their seminal paper "A Computer Readability Formula Designed for Machine Scoring":

"There is no need to estimate syllables since word length in letters is a better predictor of readability than word length in syllables."
Therefore, Zettlr defines complex and difficult words as being longer than two times the standard deviation from the average word length. Two measurements are important here: The average word length and the standard deviation.
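Expressed in code, this rule can be sketched as follows. It is a minimal illustration of the mean-plus-two-standard-deviations definition, assuming a pre-tokenised word list; it is not Zettlr's actual source:

```typescript
// A word counts as difficult/complex if its length exceeds the
// average word length plus two standard deviations.
function countDifficultWords(words: string[]): number {
  const lengths = words.map(w => w.length);
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  const threshold = mean + 2 * Math.sqrt(variance);
  return lengths.filter(len => len > threshold).length;
}
```

Because only letter counts are used, the same function works for any language, without syllable rules or shipped dictionaries.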

Simply using the average word length as a threshold would mark every word longer than the average as difficult/complex. This bears several problems. First, the bigger the difference in length between the longest word and the average word, the fewer difficult words are counted, because a single extreme outlier pulls the average up so far that hardly any other word exceeds it. A sentence containing 15 normal words and one extraordinarily long word would therefore receive a better score than a sentence containing only words of roughly the same length. The opposite case is just as problematic: if all words in a sentence are of roughly the same length, about half of them lie above the average, so even a simple sentence would receive an unreasonably bad score.
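Both problems can be made concrete with two hypothetical sentences: one with fifteen five-letter words plus a single thirty-letter word, and one whose words alternate between four and six letters. A sketch of the naive rule:

```typescript
// Naive rule: a word is "difficult" if it is longer than the average.
function naiveDifficultShare(lengths: number[]): number {
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  return lengths.filter(len => len > mean).length / lengths.length;
}

// 15 normal words plus one extreme outlier: the outlier drags the
// average up to about 6.6 letters, so only 1 of 16 words counts as
// difficult and the sentence looks deceptively easy.
const outlierSentence = [...Array(15).fill(5), 30];

// Words of roughly equal length: half of them lie above the average,
// so a perfectly simple sentence scores 50 % difficult words.
const uniformSentence = [4, 6, 4, 6, 4, 6, 4, 6, 4, 6];

console.log(naiveDifficultShare(outlierSentence)); // 0.0625
console.log(naiveDifficultShare(uniformSentence)); // 0.5
```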

The statistical concept behind this phenomenon is variance. Languages differ in how much their word lengths vary. The German language, for instance, has both very long words and a huge variance of word lengths, whereas English words are rather short and, additionally, vary little in length. To realise the idea of a language-agnostic algorithm, this variance must therefore be taken into account. It is a reasonable statistical assumption that readers with higher education (at whom this editor is aimed) perceive only about 5 percent of the words in a given text as difficult. Therefore, the algorithm only counts words as difficult that are longer than the average word length plus two times the standard deviation of the word lengths. This flags only about 5 percent of words and has, in tests, proven to be a reasonable choice.
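Applied to the same two hypothetical sentences from above, the variance-aware threshold behaves sensibly: the genuine outlier is still flagged, while the uniform sentence is no longer penalised. Again a self-contained sketch, not Zettlr's actual code:

```typescript
// Variance-aware rule: difficult means longer than mean + 2 standard deviations.
function sdDifficultShare(lengths: number[]): number {
  const mean = lengths.reduce((a, b) => a + b, 0) / lengths.length;
  const variance = lengths.reduce((a, b) => a + (b - mean) ** 2, 0) / lengths.length;
  const threshold = mean + 2 * Math.sqrt(variance);
  return lengths.filter(len => len > threshold).length / lengths.length;
}

// The thirty-letter outlier (mean ≈ 6.6, sd ≈ 6.1, threshold ≈ 18.7)
// is still counted as difficult ...
console.log(sdDifficultShare([...Array(15).fill(5), 30])); // 0.0625

// ... while the uniform sentence (mean 5, sd 1, threshold 7) now
// contains no difficult words at all.
console.log(sdDifficultShare([4, 6, 4, 6, 4, 6, 4, 6, 4, 6])); // 0
```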

In the editor below, you will find the full text of Herman Melville's monumental story "Bartleby, the Scrivener". It has been chosen as a reliable test case for readability scoring, as it contains both very easy sentences and some that are difficult to comprehend. While switching between the different algorithms, you will notice differences in how useful their results are.
