Bilingualism and children’s use of paralinguistic cues to interpret emotion in speech
While trying to find more research papers on how bi/multilingual children's brains differ from those of monolingual children and of bi/multilingual adults, I came across a research paper that investigates the difference in emotional intelligence between bilingual and monolingual children.
Paralanguage = the non-lexical component of spoken communication, for example intonation, pitch and speed of speaking, hesitation noises, gestures and facial expressions. Content and paralanguage can also conflict (e.g. a happy event described in a sad voice).
Adults usually take both content and paralanguage into consideration in their daily conversations; that is how we understand sarcasm and irony. However, research has shown that children struggle to do so: six-year-olds, for example, have difficulty recognizing sarcasm and irony. Children tend to pay more attention to content than to paralanguage. A previous study, Morton and Trehub (2001), showed that when content and paralanguage matched, both children and adults could accurately identify happy and sad sentences; however, when paralanguage and content did not match, children relied exclusively on lexical content over paralanguage to judge emotion.
In this research, W. Quin Yow and Ellen M. Markman investigated whether bilingual children also rely exclusively on the content of a conversation rather than on paralanguage. Their hypothesis was that bilingual children would be better able than monolingual children to use paralinguistic cues over content to interpret a speaker's emotion. I agreed with this hypothesis, as from personal experience I have noticed that bi/multilingual people, children included, tend to use intonation and facial expression to understand conversation. For example, when I first moved to England as an early teenager, I mostly relied on people's paralanguage to interpret conversations, as I usually did not have the in-depth knowledge of the language to understand all the words. I would hypothesise that, similarly, children who might not have an in-depth knowledge of their languages tend to use facial expression and intonation to interpret a speaker. To investigate this, the research comprises two studies: the first focused on how bilinguals and monolinguals interpreted speech when content and paralanguage matched, while in the second the content and paralanguage of the sentences did not always match.
Study 1
There were 32 participants in this study, all four-year-olds from the same preschool; 16 were monolingual and 16 were bilingual. The control variables in this study were:
- age
- education (all participants attended the same preschool in Palo Alto)
- at least 30% exposure to one of the two languages weekly since birth
- same socioeconomic status (SES)
- specific list of languages other than English
The method used in this study was similar to that of Morton and Trehub (2001), as the same filtered speech stimuli were used. The children were presented with 32 sentences, half happy and half sad. Two of these sentences were used as practice trials and the remaining 30 were used in the actual test. Children were randomly assigned to one of four pre-determined randomized orders of happy and sad sentences, with no more than three happy or sad sentences in a row. The children had two pre-designated buttons on the keyboard to record their responses. The response time, in milliseconds, was measured from the moment each speech stimulus ended to the moment a computer key was pressed.
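To make the ordering constraint concrete, here is a minimal Python sketch (my own illustration, not code from the paper) that builds a 30-trial happy/sad order with no run of more than three identical emotions, using simple rejection sampling:

```python
import random

def make_trial_order(n_happy=15, n_sad=15, max_run=3):
    """Shuffle happy/sad trials until no emotion occurs more than
    `max_run` times in a row (simple rejection sampling)."""
    trials = ["happy"] * n_happy + ["sad"] * n_sad
    while True:
        random.shuffle(trials)
        run, ok = 1, True
        for prev, cur in zip(trials, trials[1:]):
            run = run + 1 if cur == prev else 1
            if run > max_run:
                ok = False
                break
        if ok:
            return trials

print(make_trial_order()[:10])  # e.g. ['sad', 'happy', 'happy', ...]
```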
The children were told to listen carefully to the experimenter's friend, Marianne, and that sometimes Marianne felt sad and other times happy. They were also told that "the speaker was being a bit funny so she sounded different from her normal voice". The children were asked to press one of the two pre-designated buttons after they heard each sentence: the "happy" button if they thought the speaker was feeling happy or the "sad" button if they thought she was feeling sad. After the two practice trials, the 30 sentences were presented in three blocks of ten, each containing five happy and five sad sentences. The participants were reminded often that they had to press the happy button if they thought she sounded happy and vice versa.
The main finding of this study was that monolingual and bilingual children were equally capable of identifying emotion from paralinguistic cues when content and paralanguage matched. The evidence for this is:
- No significant difference between monolingual and bilingual children in the mean number of correct responses out of 30: monolinguals' mean was 17.81 (4.39) and bilinguals' mean was 18.81 (5.60), with an overall mean of 18.30.
- No significant difference between monolingual and bilingual children in average median reaction time: monolinguals' average median was 1.98 s (1.19) and bilinguals' was 1.40 s (0.58), with an overall average median of 1.69 s. (The median was used because, unlike the mean, it is not sensitive to occasional extreme values; a quick illustration follows below.)
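To illustrate that last point (with made-up reaction times, not data from the study), a single unusually slow response pulls the mean up substantially while barely moving the median:

```python
from statistics import mean, median

# Hypothetical reaction times (seconds) for one child; the last trial is an
# occasional extreme value (e.g. the child got distracted).
rts = [1.2, 1.4, 1.3, 1.5, 1.1, 9.0]

print(round(mean(rts), 2))    # 2.58 s -- pulled up by the single slow trial
print(round(median(rts), 2))  # 1.35 s -- barely affected
```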
Study 2
This study also had 32 participants, half monolingual and half bilingual. They too were four-year-olds from the same Palo Alto preschool as in Study 1, but none of them had taken part in Study 1. The same control variables applied as in Study 1.
For the method, the speech stimuli were presented on a Macintosh computer. The same 40 spoken utterances were used as in Morton and Trehub (2001). There were 10 sentences describing happy situations (e.g. "my mommy gives me a treat") and 10 describing sad situations (e.g. "my dog ran away from home"). Each sentence was recorded twice, once with happy paralanguage and once with sad paralanguage. Four utterances with neutral content from Morton et al. (2003) were also included as practice trials (e.g. "I live in Mississauga"), two recorded with happy paralanguage and two with sad paralanguage. Children were randomly assigned to one of eight pre-determined randomized orders of emotion (happy vs. sad) and condition (consistent vs. discrepant), with no more than three happy or sad sentences in a row and no more than three consistent or discrepant sentences in a row. Response time, in milliseconds, was measured from the moment each speech stimulus ended to the moment a computer key was pressed, as in Study 1. The children were given the same instructions as in Study 1, except that they were not told that the speaker would sound different from her normal voice. The children also had four practice trials with neutral sentences. The forty sentences were then presented in four blocks of ten, each consisting of five happy and five sad sentences. The children were reminded often that they had to press the happy button if they thought she sounded happy and vice versa.
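Study 2's ordering constraint is stricter than Study 1's, since runs are limited along two dimensions at once. Extending the earlier sketch (again my own illustration of the constraint, not the paper's actual procedure):

```python
import random

def max_run_length(seq):
    """Length of the longest run of identical consecutive items."""
    longest = run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

def make_study2_order(max_run=3):
    """Shuffle the 40 (content emotion, condition) trials until neither the
    emotion sequence nor the condition sequence has a run longer than max_run."""
    trials = [(emo, cond)
              for emo in ("happy", "sad")
              for cond in ("consistent", "discrepant")] * 10  # 4 x 10 = 40 trials
    while True:
        random.shuffle(trials)
        emotions = [emo for emo, _ in trials]
        conditions = [cond for _, cond in trials]
        if max_run_length(emotions) <= max_run and max_run_length(conditions) <= max_run:
            return trials

print(make_study2_order()[:4])
```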
As in Study 1, an ANOVA was used to analyse the participants' responses. The main finding was that monolingual children relied on content over paralanguage when judging emotion, whereas bilingual children showed an early-emerging ability to use paralinguistic cues over content, although they were not yet able to do this as consistently as adults. The evidence for this is:
- All children scored significantly higher in the consistent condition than in the discrepant condition: when content and paralanguage matched, all children identified happy and sad sentences equally well, but when content conflicted with paralanguage, monolingual children relied on content while bilingual children were more willing to use paralanguage to judge emotion.
To show the different results more precisely, the children were divided into three groups based on their scores: Content Focus (scores of 0–6), Mixed Focus (scores of 7–13), and Paralinguistic Focus (scores of 14–20); a small sketch of this grouping rule appears below.
The paper summarizes the distribution of children across these groups in a table (not reproduced here): only 25% of the monolingual children fell into the Mixed or Paralinguistic Focus groups, whereas 81.3% of the bilingual children did, showing that the bilinguals were far more likely to rely on paralanguage.
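As a tiny illustrative helper (my own, not from the paper), the grouping rule can be written as a simple score-to-label mapping, taking the score to be out of 20:

```python
def focus_group(score):
    """Map a child's score (out of 20) to the focus groups used in the paper:
    Content Focus (0-6), Mixed Focus (7-13), Paralinguistic Focus (14-20)."""
    if not 0 <= score <= 20:
        raise ValueError("score must be between 0 and 20")
    if score <= 6:
        return "Content Focus"
    if score <= 13:
        return "Mixed Focus"
    return "Paralinguistic Focus"

print(focus_group(5))   # Content Focus
print(focus_group(16))  # Paralinguistic Focus
```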
Conclusion
One explanation for the bilinguals' advanced emotional intelligence is their need for selective attention and control in day-to-day life, which might explain how they can ignore content and utilize tone of voice. Another explanation is that growing up bilingual provides a natural environment for children to learn about the changing communicative demands of a social context. They may be more attentive to various communicative cues that help them understand the communicative context and how they should respond, including which language to use with which speaker in which context. Although not yet at adult level, the bilingual children in this study showed greater awareness of and sensitivity to emotions than the monolinguals, suggesting that these parts of their brains develop more quickly than those of children who speak only one language.