A month before birth, fetuses can distinguish between someone speaking to them in English and in Japanese.
“Research suggests that human language development may start really early—a few days after birth,” says Utako Minai, associate professor of linguistics at the University of Kansas.
“Babies a few days old have been shown to be sensitive to the rhythmic differences between languages. Previous studies have demonstrated this by measuring changes in babies’ behavior; for example, by measuring whether babies change the rate of sucking on a pacifier when the speech changes from one language to a different language with different rhythmic properties.
“This early discrimination led us to wonder when children’s sensitivity to the rhythmic properties of language emerges, including whether it may, in fact, emerge before birth,” Minai says. “Fetuses can hear things, including speech, in the womb.”
A previous study had suggested that fetuses can discriminate between languages based on rhythmic patterns, but the current work, published in the journal NeuroReport, used a more accurate noninvasive technology called magnetocardiography (MCG).
“The previous study used ultrasound to see whether fetuses recognized changes in language by measuring changes in fetal heart rate,” Minai says. “The speech sounds that were presented to the fetus in the two different languages were spoken by two different people in that study.”
In the womb, speech is “muffled, like the adults talking in a Peanuts cartoon, but the rhythm of the language should be preserved and available for the fetus to hear.”
“They found that the fetuses were sensitive to the change in speech sounds, but it was not clear if the fetuses were sensitive to the differences in language or the differences in speaker, so we wanted to control for that factor by having the speech sounds in the two languages spoken by the same person.”
Two dozen women, at an average of roughly eight months of pregnancy, were examined using the MCG.
Fetal biomagnetometers fit over the maternal abdomen and detect tiny magnetic fields that surround electrical currents from the maternal and fetal bodies, including heartbeats, breathing, and other body movements.
“The biomagnetometer is more sensitive than ultrasound to the beat-to-beat changes in heart rate,” says Kathleen Gustafson, a research associate professor of neurology at the University of Kansas.
“Obviously, the heart doesn’t hear, so if the baby responds to the language change by altering heart rate, the response would be directed by the brain.”
Which is exactly what the study found.
“The fetal brain is developing rapidly and forming networks,” Gustafson says. “The intrauterine environment is a noisy place. The fetus is exposed to maternal gut sounds, her heartbeats and voice, as well as external sounds. Without exposure to sound, the auditory cortex wouldn’t get enough stimulation to develop properly. This study gives evidence that some of that development is linked to language.”
For the study, a bilingual speaker made two recordings, one each in English and Japanese, to be played in succession to the fetus. English and Japanese are considered rhythmically distinct. English speech has a dynamic rhythmic structure resembling Morse code signals, while Japanese has a more regular-paced rhythmic structure.
Sure enough, fetal heart rates changed when the fetuses heard the unfamiliar, rhythmically distinct language (Japanese) after a passage of English speech, while their heart rates did not change when they were presented with a second passage of English instead.
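To make the measurement idea concrete, here is a minimal sketch of the kind of beat-to-beat comparison described above: convert intervals between heartbeats into instantaneous beats per minute and compare the mean rate across two listening windows. The interval values and function names are purely hypothetical illustrations, not the study's actual data or analysis.

```python
# Illustrative sketch only: simplified beat-to-beat heart-rate comparison.
# All values below are hypothetical, not data from the NeuroReport study.

def rr_to_bpm(rr_intervals):
    """Convert R-to-R intervals (seconds between beats) to beats per minute."""
    return [60.0 / rr for rr in rr_intervals]

def mean_change(baseline_rr, response_rr):
    """Mean heart-rate change (bpm) between two recording windows."""
    base = rr_to_bpm(baseline_rr)
    resp = rr_to_bpm(response_rr)
    return sum(resp) / len(resp) - sum(base) / len(base)

# Hypothetical fetal R-to-R intervals (seconds), baseline near 140 bpm.
english_baseline = [0.43, 0.42, 0.43, 0.44, 0.43]
japanese_passage = [0.45, 0.46, 0.45, 0.46, 0.46]  # slight slowing
english_again = [0.43, 0.43, 0.42, 0.44, 0.43]     # little change

print(round(mean_change(english_baseline, japanese_passage), 1))  # -8.0
print(round(mean_change(english_baseline, english_again), 1))     # 0.0
```

In this toy example, the switch to the rhythmically different language produces a measurable drop in mean heart rate, while a second passage in the same language does not, mirroring the pattern the researchers report.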
“The results came out nicely, with strong statistical support,” Minai says. “These results suggest that language development may indeed start in utero. Fetuses are tuning their ears to the language they are going to acquire even before they are born, based on the speech signals available to them in utero.
“Prenatal sensitivity to the rhythmic properties of language may provide children with one of the very first building blocks in acquiring language.”