Native languages influence the way people group non-language sounds into rhythms.
The sounds of our native languages affect how we hear music and other non-language sounds.
A team of American and Japanese researchers has found evidence that native languages influence the way people group non-language sounds into rhythms.
People in different cultures perceive different rhythms in identical sequences of sound, according to Drs. John R. Iversen and Aniruddh D. Patel of The Neuroscience Institute in San Diego and Dr. Kengo Ohgushi of the Kyoto City University of Arts in Kyoto, Japan. This provides evidence that exposure to certain patterns of speech can influence one's perceptions of musical rhythms. In future work, they believe they may even be able to predict how people will hear rhythms based on the structures of their own languages.
Universal laws of perception underlying the rhythms of both speech and music
Researchers have traditionally tested how individuals group rhythms by playing simple sequences of tones. For example, listeners are presented with tones that alternate in loudness (...loud-soft-loud-soft...) or duration (...long-short-long-short...) and are asked to indicate their perceived grouping. Two principles established a century ago, and confirmed in numerous studies since, are widely accepted: a louder sound tends to mark the beginning of a group, and a lengthened sound tends to mark the end of a group. These principles have come to be viewed as universal laws of perception, underlying the rhythms of both speech and music. However, the cross-cultural data have come from a limited range of cultures, such as American, Dutch and French.
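To make that stimulus design concrete, here is a minimal Python sketch (using the numpy library) that synthesizes the loud-soft type of sequence and writes it to a WAV file. The 500 Hz frequency, the durations and the amplitudes are illustrative assumptions, not values from the studies:

import numpy as np
import wave

RATE = 44100  # samples per second

def tone(freq, dur, amp):
    # One sine tone: frequency in Hz, duration in seconds, amplitude 0-1
    t = np.arange(int(RATE * dur)) / RATE
    return amp * np.sin(2 * np.pi * freq * t)

def gap(dur):
    # Silence between tones
    return np.zeros(int(RATE * dur))

# ...loud-soft-loud-soft...: identical tones alternating only in amplitude.
# To probe the duration principle instead, hold amp fixed and alternate dur.
seq = np.concatenate([np.concatenate([tone(500, 0.2, amp), gap(0.2)])
                      for amp in [1.0, 0.4] * 10])

with wave.open("loud_soft.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)  # 16-bit samples
    f.setframerate(RATE)
    f.writeframes((seq * 32767).astype(np.int16).tobytes())

Listeners hear such a sequence and simply report whether the groups seem to repeat as "loud-soft" or "soft-loud."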
This new study suggests that one of those so-called "universal" principles, perceiving a longer sound at the end of a group, may be merely a byproduct of English and other Western languages. In the experiments Iversen, Patel and Ohgushi performed, native speakers of Japanese and native speakers of American English agreed on the loudness principle, hearing repeating "loud-soft" groups. The listeners differed sharply, however, on the duration principle. English-speaking listeners most often perceived alternating short and long tones as repeating "short-long" groups; Japanese-speaking listeners, albeit with more variability, were more likely to perceive the tones as "long-short." Because this finding was surprising and contradicted a widely accepted principle of perception, the researchers replicated and confirmed it with listeners from different parts of Japan.
Understanding how musical rhythms begin in the two cultures
To uncover why these differences exist, one clue may come from how musical phrases begin in the two cultures. If most phrases in American music start with a short-long pattern, and most phrases in Japanese music start with a long-short pattern, then listeners might learn to use these patterns as cues for grouping sounds. To test this idea, the researchers examined the opening phrases of American and Japanese children's songs, 50 songs per culture. For each opening phrase they computed the duration ratio of the first note to the second and counted how often phrases began with a short-long pattern versus other possibilities such as long-short or equal duration. American songs showed no bias toward starting phrases with a short-long pattern, but Japanese songs showed a bias toward starting with a long-short pattern, consistent with the perceptual findings.
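That tally can be sketched in a few lines of Python. The phrase data below are made up for illustration; the study used the notated rhythms of 50 real songs per culture:

from collections import Counter

def opening_pattern(durations, tolerance=0.05):
    # Classify a phrase by the duration ratio of its first two notes
    ratio = durations[0] / durations[1]
    if ratio < 1 - tolerance:
        return "short-long"
    if ratio > 1 + tolerance:
        return "long-short"
    return "equal"

# Hypothetical phrases: note durations in beats, one list per phrase
american_phrases = [[0.5, 1.0, 1.0], [1.0, 1.0, 0.5], [0.5, 1.5, 1.0]]
japanese_phrases = [[1.5, 0.5, 1.0], [1.0, 0.5, 0.5], [1.5, 0.5, 1.0]]

for name, phrases in [("American", american_phrases),
                      ("Japanese", japanese_phrases)]:
    counts = Counter(opening_pattern(p) for p in phrases)
    print(name, dict(counts))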
One basic difference between English and Japanese is word order. In English, short grammatical, or "function," words such as "the," "a," and "to" come at the beginning of phrases and combine with longer meaningful, or "content," words such as nouns or verbs. Function words are typically reduced in speech, having short duration and low stress. This creates frequent linguistic chunks that start with a short element and end with a long one, such as "to eat" and "a big desk." Poets have long exploited this fact about English in creating the language's most common verse form, iambic pentameter.
Japanese, in contrast, places function words at the ends of phrases. Common function words in Japanese include "case markers," or short sounds which can indicate whether a noun is a subject, direct object, indirect object, etc. For example, in the sentence "John-san-ga Mari-san-ni hon-wo agemashita," ("John gave a book to Mari") the suffixes "ga," "ni" and "wo" are case markers indicating that John is the subject, Mari is the indirect object and "hon" (book) is the direct object. Placing function words at the ends of phrases creates frequent chunks that start with a long element and end with a short one, which is just the opposite of the rhythm of short phrases in English.
Link between Language and Music
In addition to potentially uncovering a new link between language and music, the researchers' work demonstrates the need for cross-cultural research when testing general principles of auditory perception.
Babies seem to have a keen eye for speech: they can distinguish between different languages simply by reading your lips.
Speaking like a Chinese native is in the genes [1]
ENQUIRE in Chinese after the health of someone's mother and you
could well receive an answer about the well-being of their horse.
Subtle pronunciation differences in tonal languages such as Chinese
change the meaning of words, which is one reason why they are so
hard for speakers of non-tonal languages like English to learn.
Babies of all backgrounds can grow up speaking any language, so
there is no such thing as "a gene for Chinese". There may, however,
be something in our genes that affects how easily we can learn
certain languages. So say Dan Dediu and Robert Ladd of the
University of Edinburgh, UK, who have discovered the first clear
correlation between language and genetic variation.
Using statistical analysis, the pair show that people in parts of
the world where non-tonal languages are spoken are more likely to
carry different, more recently evolved forms of two brain
development genes, ASPM and microcephalin, than people in tonal
regions (Proceedings of the National Academy of Sciences, DOI:
10.1073/pnas.0610848104).
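The published analysis is considerably more careful than this (it controls for shared geography and ancestry across dozens of populations), but the core measurement can be sketched as a correlation between a population's frequency of the derived allele and whether its language is non-tonal. The numbers below are toy values, not the paper's data:

import numpy as np

# Toy data: one row per population.
# allele_freq = frequency of a derived allele (hypothetical values);
# non_tonal = 1 if the population's language is non-tonal, else 0.
allele_freq = np.array([0.70, 0.65, 0.55, 0.20, 0.15, 0.25, 0.60, 0.10])
non_tonal   = np.array([1,    1,    1,    0,    0,    0,    1,    0])

# Point-biserial correlation is simply Pearson's r with a binary variable
r = np.corrcoef(allele_freq, non_tonal)[0, 1]
print(f"correlation between allele frequency and non-tonal status: r = {r:.2f}")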
"This is exciting because most genes and language features that vary
at the population level are either not correlated or have a
correlation that can be explained by geography or history," says
Ladd. In ASPM and microcephalin, neither geography nor history can
account for the correlation.
Since both genes have a function in brain development, Dediu and
Ladd propose that they may have subtle effects on the organisation
of the cerebral cortex, including the areas that process language.
Brain anatomy differs between
English speakers who are good at learning tonal languages
and those who find it harder, says Ladd, so now he wants to see
whether similar learning differences can be found in carriers of the
ASPM and microcephalin variant genes.
A remaining puzzle is the role of natural selection. The newer gene
variants that are common in non-tonal regions must have been
positively selected (New Scientist, 11 March 2006, p 30), but nobody
has been able to show how they might provide a selective advantage.
Dediu and Ladd don't think their proposed linguistic effect could be
the answer. "There is absolutely no reason to think that non-tonal
languages are in any way more fit for purpose than tonal languages,"
says Ladd.
Bernard Crespi of Simon Fraser University, Burnaby, in British
Columbia, Canada, has an explanation for the older genes, however.
"Tonal languages may have some similarities to 'motherese' [
baby talk
]," which apparently helps infants learn language, he says.
GENDER DIFFERENCES IN LANGUAGE
Boys' And Girls' Brains Are Different: Gender Differences In Language Appear Biological
ScienceDaily (Mar. 5, 2008)
-- Although researchers have long agreed that girls have superior language abilities to boys, until now no one has clearly provided a biological basis that may account for the difference. For the first time -- and in unambiguous findings --
researchers from Northwestern University and the University of
Haifa show both that areas of the brain associated with language
work harder in girls than in boys during language tasks, and that
boys and girls rely on different parts of the brain when
performing these tasks.
"Our findings -- which suggest that language processing is more
sensory in boys and more abstract in girls -- could have major
implications for teaching children and even provide support for
advocates of single sex classrooms," said Douglas D. Burman,
research associate in Northwestern's Roxelyn and Richard Pepper
Department of Communication Sciences and Disorders.
Using functional magnetic resonance imaging (fMRI), the
researchers measured brain activity in 31 boys and in 31 girls
aged 9 to 15 as they performed spelling and writing language tasks.
The tasks were delivered in two sensory modalities -- visual and
auditory. In the visual mode, the children read words without hearing them; in the auditory mode, they heard words read aloud but did not see them.
Using a complex statistical model, the researchers accounted for
differences associated with age, gender, type of linguistic
judgment, performance accuracy and the method -- written or spoken
-- in which words were presented.
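Analyses of this kind commonly adjust for nuisance variables by making each one a column of a regression design matrix, so that the sex effect is estimated with the other factors held constant. Here is a minimal sketch with simulated data; it illustrates covariate adjustment in general, not the study's actual model or numbers:

import numpy as np

rng = np.random.default_rng(0)
n = 62  # 31 boys + 31 girls

sex      = np.repeat([0, 1], 31)       # 0 = boy, 1 = girl
age      = rng.uniform(9, 15, n)       # years
modality = rng.integers(0, 2, n)       # 0 = auditory, 1 = visual
accuracy = rng.uniform(0.6, 1.0, n)    # proportion correct

# Simulated activation with a built-in sex effect of +0.8
activation = 0.8 * sex + 0.05 * age + 0.3 * accuracy + rng.normal(0, 0.5, n)

# Design matrix: intercept plus covariates; least squares recovers
# the sex effect after adjusting for the other columns
X = np.column_stack([np.ones(n), sex, age, modality, accuracy])
beta, *_ = np.linalg.lstsq(X, activation, rcond=None)
print(f"estimated sex effect after adjustment: {beta[1]:.2f}")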
Even after accounting for these differences, the researchers found that girls showed significantly greater activation in language areas of the brain than boys.
The information in the tasks got through to girls' language areas
of the brain -- areas associated with abstract thinking through
language. And their performance accuracy correlated with the
degree of activation in some of these language areas.
To their astonishment, however, this was not at all the case for
boys.
In boys, accurate performance when reading words depended on how hard visual areas of the brain worked; when hearing words, it depended on how hard auditory areas worked. If that pattern extends to language processing that occurs
in the classroom, it could inform teaching and testing methods.
Given boys' sensory approach, boys might be more effectively
evaluated on knowledge gained from lectures via oral tests and on
knowledge gained by reading via written tests. For girls, whose
language processing appears more abstract in approach, these
different testing methods would appear unnecessary.
"One possibility is that boys have some kind of bottleneck in
their sensory processes that can hold up visual or auditory
information and keep it from being fed into the language areas of
the brain," Burman said. This could result simply from girls
developing faster than boys, in which case the differences between
the sexes might
disappear by adulthood.
Alternatively, boys may create visual and
auditory associations such that meanings associated with a word
are brought to mind simply from seeing or hearing the word.
While the second explanation puts males at a disadvantage in more
abstract language function, those kinds of sensory associations
may have provided an evolutionary advantage for primitive men
whose survival required them to quickly recognize
danger-associated sights and sounds.
If the pattern of females relying on an abstract language network
and of males relying on sensory areas of the brain extends into
adulthood -- a still unresolved question -- it could explain why
women often provide more context and abstract representation than
men.
Ask a woman for directions and you may hear something like: "Turn
left on Main Street, go one block past the drug store, and then
turn right, where there's a flower shop on one corner and a cafe
across the street." Such information-laden directions may be
helpful for women because all information is relevant to the
abstract concept of where to turn; however, men may require only
one cue and be distracted by additional information.
Burman is primary author of "Sex Differences in Neural Processing
of Language Among Children." Co-authored by James R. Booth
(Northwestern University) and Tali Bitan (University of Haifa),
the article will be published in the journal Neuropsychologia.