Synesthesia is defined as:

"A condition in which normally separate senses are not separate. Sight may mingle with sound, taste with touch, etc. The senses are cross-wired." (

Or, in other words, MSN Encarta defines it as:

1. (physiology) sensation felt elsewhere in the body: the feeling of sensation in one part of the body when another part is stimulated

2. (psychology) stimulation of one sense alongside another: the evocation of one kind of sense impression when another sense is stimulated, e.g. the sensation of color when a sound is heard

3. (literature) rhetorical device: the description of one kind of sense perception using words that describe another kind, as in the phrase "shining metallic words"

I find synesthesia especially interesting in relation to language. Let's explore a few research studies that bear on language-related synesthesia.

Bouba and Kiki


In Wolfgang Köhler's classic shape-naming experiment, later replicated by Ramachandran and Hubbard as the "booba/kiki effect," 95% to 98% of people choose "kiki" for the sharp, angular shape and "booba" (or "bouba") for the soft, rounded shape. The subjects in these experiments were apparently hearing. As a visual-manual speaker without hearing, I can still clearly identify the booba and the kiki. To verify my hypothesis, I tested a few deaf subjects, and they produced the same result despite their hearing loss.

Interestingly, as I fingerspelled these visual-phonetic words, I noticed that in the manual alphabet the handshapes K and I are also angular and sharp, whereas the handshapes B and O are round and gentle. If, as research shows, sense perception is synesthetic, then this kiki-booba effect in fingerspelling suggests that language is intermodal as well.

Brain and Language

These vocal-auditory and visual-manual modalities are quite opposite. Yet the brain activity associated with these languages, English and Ameslan (ASL) for example, is similar despite the different communication modes.

Research studies have shown that prelingual (native) signers, deaf or hearing, who learned ASL from birth show the same cerebral activity when processing ASL sentences as hearing people do when processing spoken language. In addition, viewing or processing ASL sentences also activates the right cerebral hemisphere (as is also true for reading Asian characters).

This suggests that the left-hemisphere regions responsible for language are not based on sound or speech, as was previously believed. Regardless of modality (speaking vocally, speaking manually, or writing), language appears to be synesthetic as well as intermodal.

Suggested references/readings

Newman, A.J., Bavelier, D., Corina, D., Jezzard, P., and Neville, H.J. A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience, 5:76-80, 2002.

You may also be interested in phonaesthesia (sound symbolism) in sign language.