
On the relationship between linguistic and musical processing. The case of scalar implicatures.

Menegazzo, Elena
2019-01-01

Abstract

The interference between language and music has been a subject of study since Patel (2003) formulated the Shared Syntactic Integration Resource Hypothesis (SSIRH). According to this framework, processing the syntax of language and of music draws on the same neural resources, located in the frontal areas of the brain, whereas the representations associated with musical syntax are distinct from those associated with linguistic syntax and involve distinct neural resources. In recent decades, both behavioral and neuroimaging studies have tested whether language and music actually interact. From a purely syntactic perspective, several authors (Fedorenko et al. 2009; Slevc et al. 2009; Fiveash and Pammer 2014; Hoch et al. 2011; Koelsch et al. 2005; Steinbeis and Koelsch 2008a) have confirmed the SSIRH's predictions, while there is no general agreement on the results of studies in which linguistic semantic processing interacts with simultaneously presented harmonic incongruities (see Besson et al. 1998; Bonnel et al. 2001; Poulin-Charronnat et al. 2005). To our knowledge, the relationship between pragmatic knowledge in language and musical grammar has not yet been tested. In this thesis, I take up the following questions: Does implicit musical processing interfere with the computation of scalar implicatures? Is there any difference between musicians and non-musicians with respect to this potential music/pragmatics interference? In answering these questions, I also test the predictions of Relevance Theory on the computation of scalar implicatures (implicatures are expected to be cognitively costly), assessing a number of previous studies in experimental pragmatics.

Study 1 is a statement evaluation task whose accuracy results show that both groups (musicians and non-musicians) perform worse when processing scalar implicatures in the presence of music. In the reaction time (RT) analysis, moreover, I found that non-musicians are slower than musicians when computing infelicitous sentences. My study thus confirms that pragmatically infelicitous sentences are more difficult to compute than pragmatically felicitous ones, in line with the predictions of Relevance Theory. As for the interaction between language and music, a significant effect of music was found in the infelicitous context. Study 2, a sentence-picture verification task, extends Study 1 by testing the music/pragmatics interaction across more musical conditions (no music, in-tune music, out-of-key chord, and loudness manipulation). The predictions of Relevance Theory, according to which pragmatically infelicitous sentences are more difficult to process than felicitous ones, are further confirmed. Moreover, although an interference between language and music clearly emerged, it manifested itself independently of the musical condition, and specifically independently of whether the interfering music was in tune or contained a dissonant target chord, unlike what emerged from a variety of studies testing interference with strictly syntactic processing. In those studies, manipulating the musical condition, i.e. making musical processing more difficult by means of a dissonant target chord, has the effect of diverting resources from the syntactic processing of the linguistic stimuli. It is therefore possible to claim that, as far as the computation of scalar implicatures is concerned, music interferes with language only at a general cognitive level (i.e. at the level of the general cognitive burden imposed by a complex dual task) and not because musical syntax and scalar implicature processing consume the very same neural resources in the brain.

As for the differences between musicians and non-musicians, in Study 2 no differences were found between the performance of the two groups across the different musical conditions. Interestingly, however, non-musicians performed worse than musicians when processing infelicitous sentences. Overall, the results show that the computation of scalar implicatures is more difficult in the pragmatically infelicitous context than in the felicitous one, as predicted by the Relevance Theory approach. Moreover, music interferes with the pragmatic processing of linguistic stimuli, but only at a general cognitive level, in accordance with the relative complexity of a dual task involving both linguistic and musical stimuli; the data do not support the hypothesis that musical and pragmatic computation rely on the same network of neural resources in the brain. This can be straightforwardly interpreted as an important class of evidence for the SSIRH. Regarding the differences between musicians and non-musicians, I found that non-musicians perform worse than musicians in the pragmatically infelicitous condition, both in terms of RTs (Study 1) and of accuracy (Study 2). This could be due to an experimental artefact, but it might also be related to the cognitive benefits of musical training for the complex set of computations required to process infelicitous sentences containing scalar terms.
Language, Music, Pragmatics, SSIRH, Shared Syntactic Integration Resource Hypothesis, Scalar Implicatures, Relevance Theory
Files in this record:

TESI_final.pdf
Open access
Type: Doctoral thesis
License: Creative Commons
Size: 17.95 MB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11562/993936