Cover Story 6/15/98
BY SHANNON BROWNLEE

Inside a small, dark booth, 18-month-old Karly Horn sits on her mother Terry's lap. Karly's brown curls bounce each time she turns her head to listen to a woman's recorded voice coming from one side of the booth or the other. "At the bakery, workers will be baking bread," says the voice. Karly turns to her left and listens, her face intent. "On Tuesday morning, the people have going to work," says the voice. Karly turns her head away even before the statement is finished. The lights come on as graduate student Ruth Tincoff opens the door to the booth. She gives the child's curls a pat and says, "Nice work."

Karly and her mother are taking part in an experiment at Johns Hopkins University in Baltimore, run by psycholinguist Peter Jusczyk, who has spent 25 years probing the linguistic skills of children who have not yet begun to talk. Like most toddlers her age, Karly can utter a few dozen words at most and can string together the occasional two-word sentence, like "More juice" and "Up, Mommy." Yet as Jusczyk and his colleagues have found, she can already recognize that a sentence like "the people have going to work" is ungrammatical. By 18 months of age, most toddlers have somehow learned the rule requiring that any verb ending in -ing be preceded by a form of the verb to be. "If you had asked me 10 years ago if kids this young could do this," says Jusczyk, "I would have said that's crazy."

Linguists these days are reconsidering a lot of ideas they once considered crazy. Recent findings like Jusczyk's are reshaping the prevailing model of how children acquire language. The dominant theory, put forth by Noam Chomsky, has been that children cannot possibly learn the full rules and structure of languages strictly by imitating what they hear. Instead, nature gives children a head start, wiring them from birth with the ability to acquire their parents' native tongue by fitting what they hear into a pre-existing template for the basic structure shared by all languages. (Similarly, kittens are thought to be hard-wired to learn how to hunt.) Language, writes Massachusetts Institute of Technology linguist Steven Pinker, "is a distinct piece of the biological makeup of our brains."

Chomsky, a prominent linguist at MIT, hypothesized in the 1950s that children are endowed from birth with "universal grammar," the fundamental rules that are common to all languages, and the ability to apply these rules to the raw material of the speech they hear--without awareness of their underlying logic. The average preschooler can't tell time, but he has already accumulated a vocabulary of thousands of words--plus, as Pinker writes in his book The Language Instinct, "a tacit knowledge of grammar more sophisticated than the thickest style manual."

Within a few months of birth, children have already begun memorizing words without knowing their meaning. The question that has absorbed--and sometimes divided--linguists is whether children need a special language faculty to do this or instead can infer the abstract rules of grammar from the sentences they hear, using the same mental skills that allow them to recognize faces or master arithmetic. The debate over how much of language is already vested in a child at birth is far from settled, but new linguistic research is already transforming traditional views of how the human brain works and how language evolved. "This debate has completely changed the way we view the brain," says Elissa Newport, a psycholinguist at the University of Rochester in New York.
Far from being an orderly, computerlike machine that methodically calculates step by step, the brain is now seen as working more like a beehive, its swarm of interconnected neurons sending signals back and forth at lightning speed. An infant's brain, it turns out, is capable of taking in enormous amounts of information and finding the regular patterns contained within it.

Geneticists and linguists recently have begun to challenge the common-sense assumption that intelligence and language are inextricably linked, through research on a rare genetic disorder called Williams syndrome, which can seriously impair cognition while leaving language nearly intact (box). Increasingly sophisticated technologies such as magnetic resonance imaging are allowing researchers to watch the brain in action, revealing that language literally sculpts and reorganizes the connections within it as a child grows.

The path leading to language begins even before birth, when a developing fetus is bathed in the muffled sound of its mother's voice in the womb. Newborn babies prefer their mothers' voices over those of their fathers or other women, and researchers recently have found that when very young babies hear a recording of their mothers' native language, they will suck more vigorously on a pacifier than when they hear a recording of another tongue.

At first, infants respond only to the prosody--the cadence, rhythm, and pitch--of their mothers' speech, not the words. But soon enough they home in on the actual sounds that are typical of their parents' language. Every language uses a different assortment of sounds, called phonemes, which combine to make syllables. (In English, for example, the consonant sound "b" and the vowel sound "a" are both phonemes, which combine for the syllable ba, as in banana.) To an adult, simply perceiving, much less pronouncing, the phonemes of a foreign language can seem impossible. In English, the p of pat is "aspirated," or produced with a puff of air; the p of spot or tap is unaspirated. Because English treats the two p's as the same sound, it is hard for English speakers to recognize that in many other languages the two p's are distinct phonemes. Japanese speakers have trouble distinguishing between the "l" and "r" sounds of English, since in Japanese the two don't count as separate sounds.

Polyglot tots. Infants can perceive the entire range of phonemes, according to Janet Werker and Richard Tees, psychologists at the University of British Columbia in Canada. Werker and Tees found that the brains of 4-month-old babies respond to every phoneme uttered in languages as diverse as Hindi and Nthlakampx, a Northwest American Indian language containing numerous consonant combinations that can sound to a nonnative speaker like a drop of water hitting an empty bucket.

By the time babies are 10 months to a year old, however, they have begun to focus on the distinctions among phonemes of their native language and to ignore the differences among foreign sounds. Children don't lose the ability to distinguish the sounds of a foreign language; they simply stop paying attention to them. This allows them to learn the syllables and words of their native tongue more quickly.

An infant's next step is learning to fish out individual words from the nonstop stream of sound that makes up ordinary speech. Finding the boundaries between words is a daunting task, because people don't pause . . . between . . . words . . . when . . . they speak.
Yet children begin to note word boundaries by the time they are 8 months old, even though they have no concept of what most words mean. Last year, Jusczyk and his colleagues reported results of an experiment in which they let 8-month-old babies listen at home to recorded stories filled with unusual words, like hornbill and python. Two weeks later, the researchers tested the babies with two lists of words, one composed of words they had already heard in the stories, the other of new unusual words that weren't in the stories. The infants listened, on average, to the familiar list for a second longer than to the list of novel words.

The cadence of language is a baby's first clue to word boundaries. In most English words, the first syllable is accented. This is especially noticeable in words known in poetry as trochees--two-syllable words stressed on the first syllable--which parents repeat to young children (BA-by, DOG-gie, MOM-my). At 6 months, American babies pay equal amounts of attention to words with different stress patterns, like gi-RAFFE or TI-ger. By 9 months, however, they have heard enough of the typical first-syllable-stress pattern of English to prefer listening to trochees, a predilection that will show up later, when they start uttering their first words and mispronouncing giraffe as raff and banana as nana. At 30 months, children can easily repeat the phrase "TOM-my KISS-ed the MON-key," because it preserves the typical English pattern, but they will leave out the "the" when asked to repeat "Tommy patted the monkey." Researchers are now testing whether French babies prefer words with a second-syllable stress--words like be-RET or ma-MAN.

Decoding patterns. Most adults could not imagine making speedy progress toward memorizing words in a foreign language just by listening to somebody talk on the telephone. That is basically what 8-month-old babies can do, according to a provocative study published in 1996 by the University of Rochester's Newport and her colleagues, Jenny Saffran and Richard Aslin. They reported that babies can remember words by listening for patterns of syllables that occur together with statistical regularity. The researchers created a miniature artificial language, which consisted of a handful of three-syllable nonsense words constructed from 11 different syllables. The babies heard a computer-generated voice repeating these words in random order in a monotone for two minutes. What they heard went something like "bidakupadotigolabubidaku." Bidaku, in this case, is a word.

With no cadence or pauses, the only way the babies could learn individual words was by remembering how often certain syllables were uttered together. When the researchers tested the babies a few minutes later, they found that the infants recognized pairs of syllables that had occurred together consistently on the recording, such as bida. They did not recognize a pair like kupa, which was a rarer combination that crossed the boundaries of two words. In the past, psychologists never imagined that young infants had the mental capacity to make these sorts of inferences. "We were pretty surprised we could get this result with babies, and with only brief exposure," says Newport. "Real language, of course, is much more complicated, but the exposure is vast."
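The statistical bookkeeping the Rochester team credits to infants can be sketched in a few lines of code. The toy below is an illustration of the general idea, not the study's actual stimuli or analysis: it tallies how often each syllable follows each other syllable, so that pairs occurring together reliably, like bi-da, look word-internal, while unreliable pairs, like ku-pa, look like word boundaries. The word list and stream here are abbreviated and invented.

```python
from collections import Counter

# A syllable stream in the spirit of the experiment: made-up
# three-syllable "words" (bidaku, padoti, golabu) repeated in
# random order with no pauses between them.
stream = ["bi", "da", "ku", "pa", "do", "ti", "go", "la", "bu",
          "bi", "da", "ku", "pa", "do", "ti", "bi", "da", "ku",
          "go", "la", "bu", "pa", "do", "ti", "go", "la", "bu"]

pair_counts = Counter(zip(stream, stream[1:]))  # adjacent syllable pairs
syl_counts = Counter(stream[:-1])               # how often each syllable starts a pair

def transition_probability(a, b):
    """How often syllable b follows syllable a: high inside a word,
    lower across a word boundary."""
    return pair_counts[(a, b)] / syl_counts[a]

print(transition_probability("bi", "da"))  # word-internal pair: 1.0
print(transition_probability("ku", "pa"))  # straddles a word boundary: about 0.67
```

On this tiny stream, "da" follows "bi" every single time, while "pa" follows "ku" only when bidaku happens to precede padoti--the kind of regularity the infants appeared to exploit.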
Learning words is one thing; learning the abstract rules of grammar is another. When Noam Chomsky first voiced his idea that language is hard-wired in the brain, he didn't have the benefit of the current revolution in cognitive science, which has begun to pry open the human mind with sophisticated psychological experiments and new computer models. Until recently, linguists could only parse languages and marvel at how quickly children master their abstract rules, which give every human being who can speak (or sign) the power to express an infinite number of ideas from a finite number of words.

There also are a finite number of ways that languages construct sentences. As Chomsky once put it, from a Martian's-eye view, everybody on Earth speaks a single tongue that has thousands of mutually unintelligible dialects. For instance, all people make sentences from noun phrases, like "The quick brown fox," and verb phrases, like "jumped over the fence." And virtually all of the world's 6,000 or so languages allow phrases to be moved around in a sentence to form questions, relative clauses, and passive constructions.

Statistical wizards. Chomsky posited that children were born knowing these and a handful of other basic laws of language and that they learn their parents' native tongue with the help of a "language acquisition device," preprogrammed circuits in the brain. Findings like Newport's are suggesting to some researchers that perhaps children can use statistical regularities to extract not only individual words from what they hear but also the rules for cobbling words together into sentences.

This idea is shared by computational linguists, who have designed computer models called artificial neural networks--very simplified versions of the brain--that can "learn" some aspects of language. Artificial neural networks mimic the way that nerve cells, or neurons, inside a brain are hooked up. The result is a device that shares some basic properties with the brain and that can accomplish some linguistic feats that real children perform. For example, a neural network can make general categories out of a jumble of words coming in, just as a child learns that certain kinds of words refer to objects while others refer to actions. Nobody has to teach kids that words like dog and telephone are nouns, while go and jump are verbs; the way they use such words in sentences demonstrates that they know the difference.

Neural networks also can learn some aspects of the meaning of words, and they can infer some rules of syntax, or word order. Thus, a network fed English sentences would produce a phrase like "Johnny ate fish" rather than "Johnny fish ate," which is the correct order in Japanese. These computer models even make some of the same mistakes that real children do, says Mark Seidenberg, a computational linguist at the University of Southern California. A neural network designed by a student of Seidenberg's to learn to conjugate verbs sometimes issued sentences like "He jumped me the ball," which any parent will recognize as the kind of error that could have come from the mouths of babes.

But neural networks have yet to come close to the computational power of a toddler. Ninety percent of the sentences uttered by the average 3-year-old are grammatically correct, and the mistakes children do make are rarely random; most result from following the rules of grammar with excessive zeal.
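That "excessive zeal" is easy to caricature in code. The sketch below is purely illustrative (it is nobody's actual model of the child): a toy conjugator that knows the regular add "-ed" rule plus a memorized, still-incomplete list of irregular verbs. Wherever memory comes up empty, the rule takes over, producing exactly the kind of error the next paragraph dissects.

```python
# Irregular past tenses a hypothetical toddler has memorized so far.
# English has roughly 180 irregular verbs; "hold" isn't learned yet.
irregulars = {"go": "went", "eat": "ate", "hit": "hit"}

def past_tense(verb):
    """Use a memorized irregular form if one is known; otherwise apply
    the regular rule with excessive zeal. (Spelling niceties such as
    consonant doubling are ignored in this toy.)"""
    return irregulars.get(verb, verb + "ed")

print(past_tense("jump"))  # "jumped" -- regular rule, correctly applied
print(past_tense("hold"))  # "holded" -- overregularization, toddler-style
```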
There is no logical reason for being able to say "I batted the ball" but not "I holded the rabbit," except that about 180 of the most commonly used English verbs are conjugated irregularly. Yet for all of grammar's seeming illogic, toddlers' brains may be able to spot clues in the sentences they hear that help them learn grammatical rules, just as they use statistical regularities to find word boundaries.

One such clue is the little bits of language called grammatical morphemes, which among other things tell a listener whether a word is being used as a noun or as a verb. The, for instance, signals that a noun will soon follow, while the suffix ion also identifies a word as a noun, as in vibration. Psycholinguist LouAnn Gerken of the University of Arizona recently reported that toddlers know what grammatical morphemes signify before they actually use them. She tested this by asking 2-year-olds a series of questions in which the grammatical morphemes were replaced with other words. When asked to "Find the dog for me," for example, 85 percent of children in her study could point to the right animal in a picture. When the question was "Find was dog for me," they pointed to the dog only 55 percent of the time; with "Find gub dog for me," the rate dropped to 40 percent.

Fast mapping. Children may be noticing grammatical morphemes when they are as young as 10 months and have just begun making connections between words and their definitions. Gerken recently found that infants' brain waves change when they are listening to stories in which grammatical morphemes are replaced with other words, suggesting they begin picking up grammar even before they know what sentences mean.

Such linguistic leaps come as a baby's brain is humming with activity. Within the first few months of life, a baby's neurons will forge 1,000 trillion connections, a twentyfold increase from birth. Neurobiologists once assumed that the wiring in a baby's brain was set at birth. After that, the brain, like legs and noses, just grew bigger. That view has been demolished, says Anne Fernald, a psycholinguist at Stanford University, "now that we can eavesdrop on the brain." Images made using the brain-scanning technique positron emission tomography have revealed, for instance, that when a baby is 8 or 9 months old, the part of the brain that stores and indexes many kinds of memory becomes fully functional. This is precisely when babies appear to be able to attach meaning to words.

Other leaps in a child's linguistic prowess also coincide with remarkable changes in the brain. For instance, an adult listener can recognize eleph as elephant within about 400 milliseconds, an ability called "fast mapping" that demands that the brain process speech sounds with phenomenal speed. "To understand strings of words, you have to identify individual words rapidly," says Fernald. She and her colleagues have found that around 15 months of age, a child needs more than a second to recognize even a familiar word, like baby. At 18 months, the child gets it slightly before the word ends. At 24 months, she knows the word in a mere 600 milliseconds, as soon as the syllable bay has been uttered.
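Fast mapping is, at heart, incremental lookup: commit to a word the moment the sound heard so far matches only one entry in the mental lexicon. Here is a minimal sketch of that idea, with a hypothetical four-word lexicon standing in for a child's vocabulary; nothing in it reflects Fernald's actual methods.

```python
lexicon = {"elephant", "baby", "banana", "ball"}

def segments_needed(word):
    """Return how many initial segments (letters stand in for speech
    sounds here) must be heard before `word` is the only candidate
    left in the lexicon."""
    for i in range(1, len(word) + 1):
        candidates = [w for w in lexicon if w.startswith(word[:i])]
        if candidates == [word]:
            return i
    return len(word)

print(segments_needed("elephant"))  # 1: no other entry here starts with "e"
print(segments_needed("baby"))      # 3: "bab" finally rules out "ball" and "banana"
```

In a real vocabulary the listener must rule out far more neighbors, which makes the speedup Fernald measured between 15 and 24 months all the more striking.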
Fast mapping takes off at the same moment as a dramatic reorganization of the child's brain, in which language-related operations, particularly grammar, shift from both sides of the brain into the left hemisphere. Most adult brains are lopsided when it comes to language, processing grammar almost entirely in the left temporal lobe, just over the left ear. Infants and toddlers, however, treat language in both hemispheres, according to Debra Mills, at the University of California--San Diego, and Helen Neville, at the University of Oregon. By attaching electrodes to toddlers' heads, Mills and Neville found that processing of words that serve special grammatical functions, such as prepositions, conjunctions, and articles, begins to shift into the left side around the end of the third year.

From then on, the two hemispheres assume different job descriptions. The right temporal lobe continues to perform spatial tasks, such as following the trajectory of a baseball and predicting where it will land. It also pays attention to the emotional information contained in the cadence and pitch of speech. Both hemispheres know the meanings of many words, but the left temporal lobe holds the key to grammar.

This division is maintained even when the language is signed, not spoken. Ursula Bellugi and Edward Klima, a wife-and-husband team at the Salk Institute for Biological Studies in La Jolla, Calif., recently demonstrated this fact by studying deaf people who were lifelong signers of American Sign Language and who also had suffered a stroke in specific areas of the brain. The researchers found, predictably, that signers with damage to the right hemisphere had great difficulty with tasks involving spatial perception, such as copying a drawing of a geometric pattern. What was surprising was that right-hemisphere damage did not hinder their fluency in ASL, which relies on movements of the hands and body in space. It was signers with damage to the left hemisphere who found they could no longer express themselves in ASL or understand it. Some had trouble producing the specific facial expressions that convey grammatical information in ASL. It is not just speech that's being processed in the left hemisphere, says MIT's Pinker, "or movements of the mouth, but abstract language."

Nobody knows why the left hemisphere got the job of processing language, but linguists are beginning to surmise that languages are constructed the way they are in part because the human brain is not infinitely capable of all kinds of computation. "We are starting to see how the universals among languages could arise out of constraints on how the brain computes and how children learn," says Johns Hopkins linguist Paul Smolensky. For instance, the vast majority of the world's languages favor syllables that end in a vowel, though English is an exception. (Think of a native Italian speaking English and adding vowels where there are none.) That's because it is easier for the auditory centers of the brain to perceive differences between consonants when they come before a vowel than when they come after. Human brains can easily recognize pad, bad, and dad as three different words; it is much harder to distinguish tab, tap, and tad. As languages around the world were evolving, they were pulled along paths that minimize ambiguity among sounds.

Birth of a language. Linguists have never had the chance to study a spoken language as it is being constructed, but they have been given the opportunity to observe a new sign language in the making in Nicaragua. When the Sandinistas came to power in 1979, they established schools where deaf people came together for the first time. Many of the pupils had never met another deaf person, and their only means of communication at first was the expressive but largely unstructured pantomime each had invented at home with their hearing families.
Soon the pupils began to pool their makeshift gestures into a system that is similar to spoken pidgin, the form of communication that springs up in places where people speaking mutually unintelligible tongues come together. The next generation of deaf Nicaraguan children, says Judy Kegl, a psycholinguist at Rutgers University, in Newark, N.J., has gone one better, transforming the pidgin sign into a full-blown language complete with regular grammar. The birth of Nicaraguan sign, many linguists believe, mirrors the evolution of all languages. Without conscious effort, deaf Nicaraguan children have created a sign language that is now fluid and compact, and which contains standardized rules that allow them to express abstract ideas without circumlocutions. It can indicate past and future, denote whether an action was performed once or repeatedly, and show who did what to whom, allowing its users to joke, recite poetry, and tell their life stories.

Linguists have a long road ahead of them before they can say exactly how a child goes from babbling to banter, or what the very first languages might have been like, or how the brain transforms vague thoughts into concrete words that sometimes fly out of our mouths before we can stop them. But already, some practical conclusions are falling out of the new research. For example, two recent studies show that the size of toddlers' vocabularies depends in large measure on how much their mothers talk to them. At 20 months, according to a study by Janellen Huttenlocher of the University of Chicago, the children of talkative mothers had 131 more words in their vocabularies than children whose mothers were more taciturn. By age 2, the gap had widened to 295 words. In other words, children need input and they need it early, says Newport. Parking a toddler in front of the television won't improve vocabulary, probably because kids need real human interaction to attach meaning to words. And hearing more than one language in infancy makes it easier for a child to hear the distinctions between the phonemes of more than one language later on.

Newport and other linguists have discovered in recent years that the window of opportunity for acquiring language begins to close around age 6, narrowing further with each additional candle on the birthday cake. Children who do not learn a language by puberty will never be fluent in any tongue. That means that profoundly deaf children should be exposed to sign language as early as possible, says Newport. If their parents are hearing, they should learn to sign. And schools might rethink the practice of waiting to teach foreign languages until kids are nearly grown and the window on native command of a second language is almost shut.

Linguists don't yet know how much of grammar children are able to absorb simply by listening. And they have only begun to parse the genes or accidents of brain wiring that might give rise, as Pinker puts it, to the poet, the raconteur, or an Alexander Haig, a Mrs. Malaprop. What is certain is that language is one of the great wonders of the natural world, and linguists are still being astonished by its complexity and its power to shape the brain. Human beings, says Kegl, "show an incredible enthusiasm for discourse." Maybe what is most innate about language is the passion to communicate.
Kristen Aerts is only 9 years old, but she can work a room like a seasoned pol. She marches into the lab of cognitive neuroscientist Ursula Bellugi, at the Salk Institute for Biological Studies in La Jolla, Calif., and greets her with a cheery, "Good morning, Dr. Bellugi. How are you today?" The youngster smiles at a visitor and says, "My name is Kristen. What's yours?" She looks people in the eye when she speaks and asks questions--social skills that many adults never seem to master, much less a third grader. Yet for all her poise, Kristen has an IQ of about 79. She cannot write her address; she has trouble tying her shoes, drawing a simple picture of a bicycle, and subtracting 2 from 4; and she may never be able to live independently.

Kristen has Williams syndrome, a rare genetic disorder that affects both body and brain, giving those who have it a strange and incongruous jumble of deficits and strengths. They have diminished cognitive capacities and heart problems, and age prematurely, yet they show outgoing personalities and a flair for language. "What makes Williams syndrome so fascinating," says Bellugi, "is it shows that the domains of cognition and language are quite separate."

Genetic gap. Williams syndrome, which was first described in 1961, results when a group of genes on one copy of chromosome 7 is deleted during embryonic development. Most people with Williams resemble each other more than they do their families, with wide-set hazel eyes, upturned noses, and wide mouths. They also share a peculiar set of mental impairments. Most stumble over the simplest spatial tasks, such as putting together a puzzle, and many cannot read or write beyond the level of a first grader. In spite of these deficits, Bellugi has found that children with the disorder are not merely competent at language but extraordinary. Ask normal kids to name as many animals as possible in 60 seconds, and a string of barnyard and pet-store examples will tumble out. Ask children with Williams, and you'll get a menagerie of rare creatures, such as ibex, newt, yak, and weasel. People with Williams have the gift of gab, telling elaborate stories with unabashed verve and incorporating audience teasers such as "Gadzooks!" and "Lo and behold!"

This unlikely suite of skills and inadequacies initially led Bellugi to surmise that Williams might damage the right hemisphere of the brain, where spatial tasks are processed, while leaving language in the left hemisphere intact. That has not turned out to be true. People with Williams excel at recognizing faces, a job that enlists the visual and spatial-processing skills of the right hemisphere. Using functional brain imaging, a technique that shows the brain in action, Bellugi has found that both hemispheres of the brains of people with Williams are shouldering the tasks of processing language.

Bellugi and other researchers are now trying to link the outward characteristics of people with Williams to the genes they are missing and to changes in brain tissue. They have begun concentrating on the neocerebellum, a part of the brain that is enlarged in people with Williams and that may hold clues to their engaging personalities and to the evolution of language. The neocerebellum is among the brain's newest parts, appearing in human ancestors about the same time as the enlargement of the frontal cortex, the place where researchers believe rational thoughts are formulated.
The neocerebellum is significantly smaller in people with autism, who are generally antisocial and poor at language, the reverse of people with Williams. This part of the brain helps make semantic connections between words, such as sit and chair, suggesting that it was needed for language to evolve.