Does the quirky spelling in English actually make it easier to read?

I just finished reading the question asked by Bobnix, in which RegDwight referred to another question with an interesting answer by Kosmonaut. Kosmonaut refers to the great number of characters (kanji or hanzi) available in Japanese and Chinese, and mentions that the task of memorizing our weirdo spellings pales in comparison to learning vocabulary in one of those languages.

That got me to thinking. When I first started studying Japanese, I learned the two syllabaries, hiragana and katakana. And when faced with the formidable task of memorizing thousands of characters and their various readings, I wondered why, with a perfectly serviceable phonetic script in hand, the Japanese still stuck with all those originally Chinese characters. Were they just masochists?

But I dug in, and as I learned more and more kanji, a strange thing happened: I realized it was actually easier to read the language with the kanji than without them, because so many Japanese words sound alike (or at least their parts do), and rendering them in hiragana would force me to slow down and figure out which ほう (hou) was meant: 保, 俸, 倣, 剖, 報, 方, 法, or any of the others. Learning the more complicated writing system actually let me read faster, and understand words almost pre-apprehensively. By that I mean something a little like looking at the hands of an analog clock and understanding the time without relating it to a numerical equivalent.

Now for English. We have sound-alike words like to, two, and too (or even tu, if you count Shakespeare's imagining of Julius Caesar's dying line). If we went to a strict phonetic spelling system, all those would be spelled the same. I think there are cases where such a thing would actually slow us down. And it may be that the more difficult and idiosyncratic a spelling is, the more likely we are (as Kosmonaut said) to remember it. Having remembered it, we may then recognize it more easily. Or something like that.

This is just a supposition on my part. It has plausibility and feels right to me, but that doesn't mean it is right. I'd be interested if anyone knows of any information or research done on either side of this argument.


Solution 1:

Your assumption is correct. Natural languages are extremely redundant and compressible in sound as well as in orthography, and this has significant and obvious benefits: you can understand obscured speech, read obscured text, and, yes, get the sense of a word based on a quick visual hook rather than relying on a purely phonetic transcription.
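To make the redundancy point concrete, here is a minimal sketch (in Python, using the standard zlib module; the sample sentence and the exact ratios are my own illustration): ordinary English text compresses to a fraction of its size, while incompressible random bytes barely shrink at all.

```python
import os
import zlib

# Rough illustration, not a rigorous measurement: English text is highly
# redundant, so a general-purpose compressor squeezes it well; random bytes
# have no redundancy to exploit and may even grow slightly when "compressed".
english = (
    b"Natural languages are extremely redundant: even with letters missing "
    b"or obscured, a reader can usually reconstruct the intended words from "
    b"context, because each character carries well under eight bits of "
    b"information on average."
)
noise = os.urandom(len(english))

for label, data in [("English text", english), ("random bytes", noise)]:
    ratio = len(zlib.compress(data, level=9)) / len(data)
    print(f"{label}: compressed to {ratio:.0%} of original size")
```

On longer and more varied text the English ratio drops further still; the gap between language and noise is the point.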

English orthography reflects its countless generations of development. The spelling of a word may not correspond perfectly to its pronunciation, but choosing a spelling that matches one specific pronunciation necessarily excludes others. I've heard native speakers, for instance, whose pronunciations of "to", "too", and "two" are all distinct from one another.

Further, since the orthography often reflects the etymology, you can make an educated guess about the meaning of an unfamiliar word from the combination of its visual and phonological components. If these were collapsed into one, you'd lose that extra information. It's much the same as how hanzi often have a phonetic component alongside a semantic one, and this carries over somewhat to kanji even though their pronunciations were adapted to Japanese phonology.

These are all reasons why English spelling reform has never caught on, and likely never will. The language is too widespread, and there are simply too many factors to take into account. Every language has its idiosyncrasies, and to see them as flaws or to try to fight them is sheer folly.

Solution 2:

We have sound-alike words like to, two, and too (or even tu, if you count Shakespeare's imagining of Julius Caesar's dying line). If we went to a strict phonetic spelling system, all those would be spelled the same. I think there are cases where such a thing would actually slow us down.

This seems like a red herring to me, for several reasons.

First, most of the bizarre spelling in English is not actually useful for disambiguating homophones. The silent w in answer and the gh in though don't distinguish anything from a sound-alike word; they're just historical residue.

Second, even if your point stands, does it really argue that spelling irregularity is good? Or does it argue that homophones/homographs are bad?

Third, I very seriously doubt that English homophones and homographs are really all that inconvenient in the first place. Note that to already has two apparently unrelated meanings: 1) the infinitive-marking to in to err is human; and 2) the preposition to in to the store. You used both senses in your question. How much do you think that slowed readers down? I don’t think it slowed me down at all. I seem to have disambiguated each one subconsciously and instantaneously. It is hard to imagine what could have gone smoother.

By contrast, irregular spelling is a clear and present pain in the butt. It happens that my kids are learning to spell right now, so I am biased, but I think English spelling carries a lot of historical baggage that really serves no purpose whatsoever.

A comparison. In Spanish, pluralizing a noun is almost perfectly regular: you add an s (or es after a consonant) at the end, and that's that. In English, the default is the same, just add an s... unless the singular ends with a consonant followed by y, in which case you drop the y and add ies; or it ends with s, x, z, sh, or ch, in which case you add es (unless the ch does not actually make a sibilant sound, as in stomach or loch, in which case just s will do); or it ends with a consonant followed by o, like potato, in which case you also add es (unless it is Italian or Spanish in origin, like piano or flamingo, or it just happens to be one of those words like bozo or banjo, in which case just s will do); or it's irregular, like child, in which case you just have to know it. Your position, as far as I can tell, is that people might therefore have an easier time reading English plurals than Spanish ones. That makes no sense to me.
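For what it's worth, here is a toy sketch in Python of the rule pile just described (the IRREGULAR table and the word list are mine, chosen to match the examples above). It is deliberately incomplete, and that is the point: even a crude English pluralizer needs this many branches, while a Spanish equivalent would be about two lines.

```python
# A toy pluralizer implementing (roughly) the rules in the paragraph above.
# Deliberately incomplete: the point is how many special cases even a crude
# approximation needs.

# Words you "just have to know" (exceptions to the rules below).
IRREGULAR = {
    "child": "children",
    "stomach": "stomachs",    # non-sibilant ch
    "loch": "lochs",          # non-sibilant ch
    "piano": "pianos",        # Italian origin
    "flamingo": "flamingos",  # Spanish origin
    "bozo": "bozos",
    "banjo": "banjos",
}
VOWELS = set("aeiou")

def pluralize(noun: str) -> str:
    if noun in IRREGULAR:                                # memorized exception
        return IRREGULAR[noun]
    if noun.endswith("y") and noun[-2:-1] not in VOWELS:
        return noun[:-1] + "ies"                         # city -> cities
    if noun.endswith(("s", "x", "z", "sh", "ch")):
        return noun + "es"                               # box -> boxes
    if noun.endswith("o") and noun[-2:-1] not in VOWELS:
        return noun + "es"                               # potato -> potatoes
    return noun + "s"                                    # the "regular" case

for word in ["dog", "city", "day", "box", "church", "potato",
             "piano", "child", "stomach"]:
    print(word, "->", pluralize(word))
```

Run it and every example from the paragraph comes out right, but only because the exceptions were hand-coded, which is exactly the complaint.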