Newborn, week four
Not much had changed, except that baby Juli stayed awake and alert for increasingly longer stretches, though her vision was likely still blurry.
On the surface, it looked as if there wasn't much going on with language development in the first few weeks. On a closer look, though, something new was emerging every week, one small step after another.
Juli gazed at my face longer and with more focus, which allowed for face-to-face interaction and, in turn, marked the beginning of language acquisition.
That is, making eye contact, or mutual gaze, is the very first step in linguistic interaction, one that builds toward a budding period of manual babbling and, later, first words in a signed language.
Children acquire language through social interaction in early childhood, and children generally articulate well when they are around three years old.
Eye contact is often said to play a critical role in interaction between adults and babies. It plays an even more critical role in language development in a sign language.
At this time Juli could gaze at me longer and with more focus, possibly because her vision had matured a bit more. With her longer gazes, I was able to interact more and talk in ASL with her whenever I could.
"It's bath time." "You're feeling cold, brr." "Grandmother and grandfather are coming over to see you." "Burp done?" These were typical daily talks, signed in ASL, of course (the English here is only a translation).
There's a Critical Time for Learning All Languages Including Sign Language
Source: University of Washington, January 3, 2002. Reprinted with permission.

Neuroscientists examining the brain activity of people who learned to speak American Sign Language (ASL) at different times in their lives have found the first evidence that there is a critical period for acquiring a non-verbal language, just as there is for spoken languages.
Using functional magnetic resonance imaging (fMRI), the researchers discovered that patterns of brain activity in bilingual people who learned ASL before puberty differed from those who learned it after puberty.
The findings are reported in this month's issue of the journal Nature Neuroscience. They indicate there are regions in the brain's right hemisphere that are activated when children who learned ASL before puberty are reading sign language. The brains of children who learned ASL after puberty show significantly less right hemisphere activity when they are doing the same activity.
There is widespread acceptance among neuroscientists that there is a critical period for first language acquisition, and that children who are not exposed to any language before puberty, or perhaps sooner, are unable to fully acquire and use the principles of language. There also is evidence of similar critical periods for acquiring a second language.
"We know that late learners of ASL, while they are very fluent, never will be fully fluent like native, or early, learners of ASL," said David Corina, a University of Washington associate professor of psychology and a co-author of the study. Corina is fluent in ASL.
"One aspect of ASL that is difficult for late learners is verb signs of motion. You see some subtle errors in their use of these verbs, just as you might detect subtle grammatical differences when listening to bilingual users of a spoken language when they are not using their native tongue."
Other members of the research team are Aaron Newman, a University of Oregon doctoral student; Helen Neville, University of Oregon psychology professor; Daphne Bavelier, assistant professor of brain and cognitive sciences at the University of Rochester, and Peter Jezzard, a physicist at John Radcliffe Hospital in England.
The new study builds on earlier work by this research team showing that right hemisphere activity, along with activation in the left hemisphere, is necessary for processing ASL. The left hemisphere activity has long been associated with the processing of spoken languages.
"One area of the brain that is the signature, or specific, to signers if they learned ASL as a native signer, is the right angular gyrus," Corina said.
It is located at the juncture of the temporal and parietal lobes. Activation of the left angular gyrus has been associated with reading English and other spoken languages for many years. The new study shows consistent activation of the right angular gyrus among native signers and some, but not consistent, activation of that brain region among late signers.
The study involved 27 bilingual subjects. Sixteen were hearing persons born to deaf parents. They learned ASL and English from birth as native languages. The remaining 11 were the late learners who had English as their native language and learned ASL after puberty, in early adulthood. All of the subjects watched a screen while their brains were imaged using fMRI and were asked to read written English sentences and meaningless strings of consonants. They also were shown and asked to read ASL sentences and meaningless gestures that were similar to real ASL signs.
"This work is important because we want to understand the neural systems underlying language," said Corina. "We want to know if they are malleable or fixed and the degree to which they may vary in different languages. We now know there is activation in the right hemisphere when native signers view ASL, and to see that this is dependent on early exposure suggests there are specific times when neural systems for language may be particularly sensitive to change."
He added that the research has implications for the early education of all children because it stresses the need for early language exposure at critical times in development. It is equally important in deaf education to ensure linguistic competency in ASL.
The National Institute on Deafness and Other Communication Disorders, the Natural Sciences and Engineering Research Council of Canada, the Charles A. Dana Foundation and a University of Oregon post-graduate scholarship funded the research.
Language acquisition for deaf children
Reducing the harms of zero tolerance to the use of alternative approaches
Children acquire language without instruction as long as they are regularly and meaningfully engaged with an accessible human language. Today, 80% of children born deaf in the developed world are implanted with cochlear devices that allow some of them access to sound in their early years, which helps them to develop speech.
However, through early childhood, brain plasticity changes and children who have not acquired a first language in the early years might never be completely fluent in any language. If they miss this critical period for exposure to a natural language, their subsequent development of the cognitive activities that rely on a solid first language might be underdeveloped, such as literacy, memory organization, and number manipulation.
An alternative to speech-exclusive approaches to language acquisition exists in the use of sign languages such as American Sign Language (ASL), where acquiring a sign language is subject to the same time constraints as spoken language development. Unfortunately, so far, these alternatives are caught up in an "either-or" dilemma, leading to a highly polarized conflict about which system families should choose for their children, with little tolerance for alternatives by either side of the debate and widespread misinformation about the evidence and implications for or against either approach.
The success rate with cochlear implants is highly variable. This issue is still debated, and as far as we know, there are no reliable predictors for success with implants.
Yet families are often advised not to expose their child to sign language. Here absolute positions based on ideology create pressures for parents that might jeopardize the real developmental needs of deaf children.
What we do know is that cochlear implants do not offer accessible language to many deaf children. By the time it is clear that the deaf child is not acquiring spoken language with cochlear devices, it might already be past the critical period, and the child runs the risk of becoming linguistically deprived.
Linguistic deprivation constitutes multiple personal harms as well as harms to society (in terms of costs to our medical systems and in loss of potential productive societal participation).
Copyright by the authors: Tom Humphries, Poorna Kushalnagar, Gaurav Mathur, Donna Jo Napoli, Carol Padden, Christian Rathmann, Scott R. Smith. Credits/Source: Harm Reduction Journal 2012, 9:16. Published: 2 April 2012. http://www.harmreductionjournal.com/content/9/1/16/abstract