
ERP correlates of German Sign Language processing in deaf native signers

Abstract

Background

The present study investigated the neural correlates of sign language processing in Deaf people who had learned German Sign Language (Deutsche Gebärdensprache, DGS) as their first language from their Deaf parents. Correct and incorrect signed sentences were presented sign by sign on a computer screen. At the end of each sentence the participants had to judge whether or not the sentence was an appropriate DGS sentence. Two types of violations were introduced: (1) semantically incorrect sentences containing a selectional restriction violation (implausible object); (2) morphosyntactically incorrect sentences containing a verb that was incorrectly inflected (i.e., incorrect direction of movement). Event-related brain potentials (ERPs) were recorded from 74 scalp electrodes.

Results

Semantic violations (implausible signs) elicited an N400 effect followed by a positivity. Sentences with a morphosyntactic violation (verb agreement violation) elicited a negativity followed by a broad centro-parietal positivity.

Conclusions

ERP correlates of semantic and morphosyntactic aspects of DGS clearly differed from each other and showed a number of similarities with those observed in other signed and oral languages. These data suggest a similar functional organization of signed and oral languages despite the visual-spatial modality of sign language.

Background

Sign languages exhibit all characteristics of natural language systems: they have a complex compositional structure in which signs (analogous to words) are combined to create higher-level structures such as sentences [1–3]. However, sign languages differ markedly from spoken languages in the manner in which they express grammatical relations. Signs are articulated in space and are specified at locations around the signer, a process called “spatial mapping”. A noun can be specified in the signing space in three different ways: a) by signing the noun and indexing it to a location in the signing space, e.g., the right side of the signer, b) by signing the noun directly at a specific location, or c) by signing the noun and locating it through the starting point of the following verb movement [1]. To express verb agreement, the verb movement and/or palm orientation starts at the location of the subject and ends at the location of the object. Cross-linguistically, sign languages exhibit a strong typological homogeneity in their agreement systems [1, 4]. This might be due to the fact that all known sign languages use the space around (and including) the signer, and verb agreement in all known sign languages makes use of these loci in space [5].

Despite the linguistic similarities between signed and spoken languages, their surface structures are radically different: signed languages use visual-spatial contrasts (i.e., space) to mark grammatical relations. Such differences in surface structure between languages of different modalities offer a unique opportunity to investigate the neurobiology of human language. Native signers are typically Deaf individuals born to Deaf parents; these individuals commonly acquire a sign language from their parents and siblings from an early age. It has been shown that the developmental milestones of natural signed language acquisition correspond to those in natural spoken languages [6, 7]. Moreover, lesion and neuroimaging studies on sign language users have suggested a considerable overlap in the neural organization of spoken and sign language processing in Deaf native signers (for a review see [8, 9]). Like spoken languages, sign languages such as British Sign Language (BSL) and American Sign Language (ASL) activate the left inferior frontal cortex. Some authors [10, 11] have pointed out that the higher activation of homologous right hemispheric structures during sign language processing, compared to the processing of a spoken language, might be related to the greater reliance of signed languages on spatial functions. However, in these studies ASL processing was compared with written English. When audio-visual spoken language was compared with sign language, no difference in right hemispheric recruitment was observed [12].

To date, relatively little is known about the neural representation of different linguistic domains in sign language processing. Neurolinguistic research over the past decades has established reliable event-related potential (ERP) indicators for different aspects of oral language processing [13–15]: Semantic processing (such as lexical expectancy) has commonly been associated with a centro-posterior negativity emerging after, e.g., implausible words with a latency of about 400 ms (i.e., the N400 [16–18]). Traditionally, the N400 effect has been assumed to reflect lexical-semantic integration processes [19, 20].

In contrast, morphosyntactic violations, e.g., in verb agreement, are associated with a frontal negativity emerging with a latency of approximately 300 ms (the so-called LAN [21–24]). This negativity is commonly followed by a positive wave with an onset latency of at least 500 ms (the so-called P600 [25] or “syntactic positive shift” (SPS) [13]). This biphasic pattern of a LAN and a P600 has been observed for verb agreement violations in various languages [22, 24, 26–29]. Traditionally, it has been proposed that the LAN reflects early (morpho-)syntactic processing [30, 31] and/or working memory functions related to complex processing operations [32]. By contrast, the P600 has been suggested to be associated with processes of syntactic and semantic reanalysis and integration [25, 33–35]. However, some researchers have demonstrated that the N400 might be modulated by syntactic processing aspects (e.g., [36]) and the P600 by semantic processing aspects under some specific conditions [37, 38]. We therefore employed only those types of violations for which previous research has demonstrated clearly distinct ERP patterns, that is, an N400 effect for the semantic manipulation and a LAN followed by a P600 for the syntactic manipulation.

In their pioneering studies on the neural correlates of sign language processing, Neville et al. [3, 39] compared ERPs to open- vs. closed-class language elements and reported similar ERP correlates for single signs of ASL and for oral word processing. A more recent ERP study by Capek et al. [40] additionally investigated the processing of continuously presented ASL sentences. Deaf native users of ASL watched signed sentences that were correct or comprised either a semantic violation (implausible sign) or a morphosyntactic verb agreement error. The Deaf participants, all native signers of ASL, showed an N400 effect for semantically implausible signs similar to the effects previously observed for corresponding violations in oral languages. By contrast, morphosyntactic verb agreement errors (reversed verb agreement violations, with a movement from the object to the subject instead of vice versa) elicited a left frontal negativity followed by a posterior positivity [26, 40]. Recently, Hosemann et al. [41] reported an N400 effect to unexpected vs. expected sentence-final (either action or non-action) verbs in DGS.

While the neural correlates of language have been compared between a number of different oral languages, electrophysiological studies on sign languages investigating different linguistic domains have to date mostly concentrated on ASL. To identify the functional organization of sign language comprehension, distinct patterns of neural activation for different linguistic aspects – such as semantic and syntactic processes within the same participants – have to be demonstrated in more than one sign language. The present study employed naturally signed DGS with both semantic (implausible signs) and morphosyntactic (verb agreement errors, see below) violations. In sum, the present study aimed at determining whether semantic and morphosyntactic aspects of DGS can be dissociated within the same individuals.

Native signers watched continuous DGS sentences, which were either correct or incorrect. Incorrect sentences comprised either an implausible sign or a verb agreement violation in the middle position of a sentence. We used a different type of verb agreement violation than Capek et al. [40], one that can be clearly classified as a morphosyntactic error by sign language linguistics [1]. Moreover, in contrast to Hosemann et al. [41], we introduced the violation in the sentence-middle position, because the processing of sentence-final words activates additional processes, e.g., related to integration [42]. The participants’ task was to indicate at the end of each sentence whether or not the sentence was correct. In contrast to the stimuli used by Capek et al., the native signer who signed the sentences was instructed to minimize affective and paralinguistic facial expressions and body movements. This allowed us to minimize coarticulation and paralinguistic effects. We expected distinct ERP patterns for semantic and morphosyntactic violations: a centro-posterior N400-like effect was predicted for semantic violations, and a frontally distributed negativity followed by a posterior positivity was predicted for morphosyntactic violations.

Results

Behavioral data

The analysis of the percentages of correct responses revealed that Deaf native signers (n = 11) correctly judged 96.71% (SE: 0.58%) of the correct sentences, 98.42% (SE: 0.78%) of semantically incorrect sentences and 96.45% (SE: 0.12%) of morphosyntactically anomalous sentences.

EEG data

Semantic condition

The ERPs of Deaf native signers to the critical verb of correct and semantically incorrect sentences are displayed in Figure 1. Semantically incorrect sentences elicited a more negative-going potential than correct sentences. This observation was confirmed by a main effect of Condition (F(1,10) = 6.267; p = 0.031) for the time window of 550–750 ms. Furthermore, a significant interaction of Condition and Cluster (F(2.8,28.1) = 3.253; p = 0.039) indicated a bilateral fronto-central scalp distribution of the violation effect, which was significant at clusters L1, L2, L3, L5, L6, R1, R2, R3, R5, and R6 (p < 0.05).

Figure 1

Mean ERPs semantic condition. Mean ERPs in the semantic condition for the Deaf native signers for all clusters. The dotted lines denote the ERP in the incorrect condition; the solid lines denote the ERP in the correct condition. The analyzed time epoch is marked with a grey box.

Syntactic condition

The ERP data for the critical verb for Deaf native signers for correct sentences and morphosyntactically incorrect sentences are shown in Figures 2 and 3. Morphosyntactically incorrect sentences elicited a negative potential (LAN) in the time epoch of 400–600 ms and a positive wave (P600) in the time epoch of 1000–1300 ms.

Figure 2

Mean ERPs syntactic condition. Mean ERPs in the morphosyntactic condition for the Deaf native signers for all clusters. The dotted lines denote the ERP in the incorrect condition, the solid lines denote the ERP in the correct condition. The analyzed time epoch is marked with a grey box.

Figure 3

Overview of the topographic distributions of the ERPs. Topographies of the N400 (550–750 ms, first row), the LAN (400–600 ms, second row), and the P600 (1000–1300 ms, third row). Blue denotes negative and red denotes positive values in μV.

In the time window of 400–600 ms, the ANOVA [43] revealed a three-way interaction of Condition × Hemisphere × Cluster (F(2.7,26.6) = 3.991; eps = 0.443; p = 0.021), indicating the typical left-lateralized frontal distribution of the LAN. The negative difference between incorrect and correct sentences was significant at clusters L1, L2, and L3 (p < 0.05).

A significant main effect of Condition (F(1,10) = 42.346; p < 0.001) was observed in the second time epoch of 1000–1300 ms, confirming a larger positivity in response to morphosyntactic violations than to morphosyntactically correct signs. In addition, the interaction of Condition × Cluster was significant (F(1.6,16.1) = 13.110; eps = 0.268; p < 0.001), indicating the typical posterior distribution of the P600. The positive difference was significant at all clusters (p < 0.05).

Discussion

Semantic and morphosyntactic violations addressing two functionally different linguistic aspects within German Sign Language (DGS) elicited clearly distinct ERP patterns. Semantic violations (implausible signs) were followed by a negative ERP with a fronto-central scalp distribution (N400). By contrast, syntactic violations (verb agreement violations in DGS) elicited a frontal negativity (LAN) followed by a central positivity (P600).

The N400 observed in native signers had a more anterior distribution than observed in reading studies [16, 4446]. A more anterior distribution of the N400 has been reported for auditory stimuli as well [47, 48], i.e., a modality that resembles sign language in its temporal dynamics more than written language does.

By contrast, other sign language studies have found a more posteriorly distributed N400 effect for both single noun signs and implausible nouns in sign language sentences [3, 40]. This difference could be explained by the processing of different word classes. DGS has a different word order than ASL: while ASL is a subject-verb-object language, DGS belongs to the subject-object-verb languages. Therefore, the semantic violation only becomes detectable at the verb that follows the implausible object. Hence, the critical sign in the present DGS study was the verb rather than the object, as was the case in the study by Capek et al. [40].

Several studies have shown that processing words from different grammatical classes (nouns and object words vs. verbs and action words, open vs. closed class, nouns vs. verbs) seems to engage different neural networks [49, 50]. Thus, another account for the different distributions of the N400 effect in the present study and in the study by Capek et al. [40] may be the use of critical words from different word classes. Indeed, in an accompanying experiment with written German sentences, in which the critical word was the object, we found a similar posterior topography of the N400 in the Deaf native signers as observed in hearing L1 and hearing L2 users of German [51].

In a recent ERP study on DGS, Hosemann et al. [41] observed an N400 to action and non-action verbs but did not discuss the topography of the N400 effect for unexpected action vs. non-action words. From their figures it appears that the N400 effect for the non-action verbs had a more posterior distribution than the N400 effect for the action verbs. Since we used action verbs in our study, this observation fits well with our data.

By contrast, morphosyntactic violations elicited a left-lateralized negativity with an anterior distribution, followed by a broadly distributed positivity with a central maximum. Both effects were highly similar to what has been observed previously, following both a large number of different syntactic anomalies in aural-oral languages [21, 25] and verb agreement violations in ASL [40]. However, our effects emerged later than in these studies [40]. These latency differences can be explained by the different trigger positions used for ERP time-locking. As mentioned in the Methods, sign languages have long transition phases between two consecutive signs. Since in the morphosyntactic condition the location change of the sign is more crucial than the target sign itself, we set the trigger position to the first detectable change in location with respect to the preceding sign. This early trigger position caused the late onset of the violation effects. Indeed, Hosemann et al. [41] showed for their N400 effects that shifting the trigger along the transition phase changes the timing of the N400 effect.

Particularly impressive in our data is the clear left lateralization of the anterior negativity (LAN effect), which has previously been reported in response to verb agreement violations in both spoken English and German [24, 26, 35]. Our results extend the findings of Capek et al. [40]. Remarkably, the verb agreement violation used in our study differed from the two types used in that study: as verb agreement violations, they used a reversed movement from the object location to the subject location instead of vice versa, or a movement from the correct subject location to a non-defined location in space. Capek et al. suggested that the larger right hemispheric distribution of the violation effect for the second compared to the first violation type was mainly due to the increased spatial mapping requirements for signs not localized in space. In contrast, in our study we combined the use of an unspecified and a wrong location in one verb agreement violation: the verb moved from a neutral (unspecified) subject position to the deictic first-person object location. Compared to Capek et al., we clearly introduced the referential positions of subject and object in the signing space before the verb sign, making spatial mapping by verb movement unlikely. Although in our paradigm the verb movement also starts at an unspecified location, its starting point does not allow for a localization of the subject, because the subject had already been located at a different place. Thus, the verb movement was clearly grammatically incorrect and no additional mapping process was necessary. The mismatch between the referential positions and the verb movement was, as in typical syntactic parsing processes, most likely detected automatically as a morphosyntactic error. This explains why we found a clearly left-lateralized anterior negativity, similar to that observed in the ASL reversed verb agreement condition of Capek et al.

Conclusions

Consistent with previous research on oral and signed languages, we provide evidence that semantic and syntactic aspects of DGS are processed distinctly, i.e., mediated by different neural systems.

Methods

Participants

Fifteen Deaf native signers (hearing loss ≥ 85 dB hearing level (HL) in each ear, except for one participant with a loss of ≥ 70 dB HL in the left ear) participated in the experiment. Three participants had to be excluded from further analyses since they did not reach the criterion of at least 60% correct responses in all experimental conditions. Additionally, one participant was excluded because the EEG data set was contaminated by excessive artifacts. Of the analyzed sample (6 female, 5 male; mean age: 28 years, range: 20–40 years), four participants had “mittlere Reife” (corresponding approximately to an O-level), seven had “Abitur” (A-level), and one had a university degree. The first three excluded participants had “mittlere Reife”; the fourth excluded participant did not report his highest degree of education.

None of the participants had any known neurological impairments and all of them had normal or corrected-to-normal vision. They gave written informed consent before their participation and received a monetary compensation. All of the native signers were right-handed according to self-report and the Edinburgh Handedness Inventory. The participants had learned DGS from birth from their Deaf parents.

The sign language proficiency of the participants was assessed with a DGS comprehension test, the Gebärdensprach-Sinnverständnis Test (GSV) of the ATBG (“Aachener Testverfahren zur Berufseignung von Gehörlosen”; English: “Aachen’s vocational testing for the deaf”). On average, the selected participants answered 87% of the items correctly (SE: 3.7%, range: 60%–100%). The study was approved by the ethics committee of the German Society of Psychology (Nr: BRBHF 07022008).

Material

A set of 300 experimental sentences was constructed by two Deaf native signers, one Deaf near-native signer of DGS, and one sign language linguist. The sentences were signed by a Deaf native signer of DGS, videotaped, digitized, and presented at the rate of natural signing. Written informed consent for the publication of images was obtained from the signer.

The stimulus set was evaluated by 12 congenitally and profoundly deaf individuals (mean age: 36 years, range: 27–64 years; hearing loss ≥ 85 dB HL in each ear) who were all native signers of DGS. Upon presentation, these raters had to judge whether or not each sentence was an appropriate DGS sentence. Sentences with less than 80% agreement among the native signers were disregarded. The final stimulus set was based on 46 sentences, from which 138 sentences were derived: (a) 46 sentences were correct, (b) 46 sentences were morphosyntactically incorrect, comprising a verb that was incorrectly inflected (incorrect direction of movement), and (c) 46 sentences were semantically incorrect, comprising a selectional restriction violation. For example, sentence (1b) violates the person agreement rule between subject, verb, and object through an incorrect movement from neutral space to the first person:

(1a) BOY POINTa GIRL POINTb aNEEDLEb REASON POINTb SLOW SWIM

“the boy needles the girl because she is slowly swimming”

(1b) * BOY POINTa GIRL POINTb cNEEDLE1 REASON POINTb SLOW SWIM

“*the boy needle the girl because she is slowly swimming”

Instead, sentence (1c) is an example of a selectional restriction violation (semantic violation), since the object-verb relation is not semantically plausible:

(1c) BOY POINTa COAT POINTb aNEEDLEb REASON POINTb SLOW SWIM

“*the boy needles the coat because it is slowly swimming”

All sentences were constructed in a comparable SOV structure up to the critical sign (Figure 4).

Figure 4

Still image samples illustrating the two types of violations in the DGS sentences. Notations in lowercase indicate the location of the sign: a = right of the signer; b = left of the signer; c = in front of the signer; 1 = signer; POINT = pointing to a location to place the referent in the signing space; the errors are marked by dotted lines and bold print, respectively. 1a) Correct sentence: the verb sign NEEDLE moves correctly from the subject location to the object location; 1b) morphosyntactically incorrect sentence: the verb sign NEEDLE moves incorrectly from an unspecified location in front of the signer to the signer, indicating first person; 1c) semantically incorrect sentence: semantically implausible object sign COAT.

Since DGS is a subject-object-verb (SOV) language, the semantically violated sentences became implausible at the verb (e.g., NEEDLE in the example shown in 1c). Thus, the verb is the critical sign to which ERPs were averaged.

Sentences had a mean length of 10 signs (median: 9, range: 7–13 signs) and a mean duration of 10457 ms (median: 10440 ms, range: 5680–14480 ms, SD: 1596 ms). Additionally, 74 different filler sentences were presented. Sixty filler sentences were correct; 14 contained different morphosyntactic and semantic violations at varying sentence positions.

The stimulus onset of each sign was defined by a Deaf native signer, a Deaf delayed signer, and a DGS interpreter. Sign languages have rather long transition phases between one sign and the next [52], and it is a matter of debate when exactly a sign starts. According to Liddell & Johnson, a sign begins when the handshape is completed and the hand is held at its correct first location (‘Movement-Hold Model’; [53]). Note, however, that the timing of comprehending a sign varies depending on a) which signing parameter has to be changed and b) which signing parameter is linguistically crucial. Therefore, in our paradigm we distinguished two time points which were used as trigger positions (event codes):

  1. In sentences with semantic violations, the trigger was time-locked to the sign onset – according to the Movement-Hold Model – when handshape and hold were completed (sign onset code I): to judge the semantic appropriateness of the object, the target sign has to be perceived in its entirety.

  2. In sentences with morphosyntactic violations, the location change of the sign (note that in sign language, syntax is expressed in space) is more crucial than the target sign itself: while the hand moves to the location of the beginning of the next sign, the handshape changes and the morphosyntactic violation (incorrect location) is most likely recognized. Therefore, for morphosyntactically violated sentences the trigger position was set to the first handshape change that could be detected towards the target sign or – if earlier – to the change of the lip movement (sign onset code II).

Procedure

The experiment comprised two sessions that were run mostly within one day. In the first session, the participants completed a language history questionnaire and a subtest of the ATBG (“Aachener Testverfahren zur Berufseignung von Gehörlosen”; English: “Aachen’s vocational testing for the deaf”). The test comprises a number of different modules to test aspects of memory, attention, spatial imagery, problem solving, general knowledge, arithmetic, and language. We only employed the subtest GSV (“Gebärdensprach-Verständnis-Test”; English: “Sign Language comprehension test”).

The experimental session consisted of 212 trials and was divided into five blocks separated by short breaks whose duration was determined by the participants. The experiment lasted about 90 minutes. Prior to the experimental blocks, 13 practice sentences were presented (which were not used in the analysis). Instructions were given in DGS: since signers are familiar with a wide range of variation in DGS within the German signing community, they are extremely tolerant of language variation. For this reason, participants were told to accept only “very well-formed” sentences as “correct”.

Participants were seated in a comfortable chair in front of an LCD monitor. Stimuli were presented on this monitor with a vertical visual angle of 13.12° and a horizontal visual angle of 16.48°. The size of the presented video footage was chosen such that the signing was readily identifiable.

Please note that these visual angles refer to the complete size of the presented footage; the visual angles within which the relevant signing was presented were therefore smaller. In addition, during sign language comprehension, signers fixate primarily on the signer’s face (see results from eye-tracking studies, e.g., [54, 55]).
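For readers who wish to relate the reported angles to a physical setup, the sketch below shows the standard visual-angle formula in R (the language used for the statistical analyses in this study). The screen size and viewing distance are not reported in the manuscript; the values in the example are purely hypothetical and were chosen only because they yield angles close to those reported above.

```r
# Visual angle (in degrees) of a stimulus of a given size viewed from a given distance.
# NOTE: size_cm and distance_cm are hypothetical example values, not the actual setup
# of the study (screen size and viewing distance are not reported in the manuscript).
visual_angle_deg <- function(size_cm, distance_cm) {
  2 * atan(size_cm / (2 * distance_cm)) * 180 / pi
}

visual_angle_deg(size_cm = 23, distance_cm = 100)   # ~13.1 degrees (vertical example)
visual_angle_deg(size_cm = 29, distance_cm = 100)   # ~16.5 degrees (horizontal example)
```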

The different trial types were presented in random order. The first picture of each video, showing the signer in the initial position, was held for 1000 ms so that participants could fixate the screen. Six hundred ms after the end of the sentence, a happy and a sad smiley appeared on the screen and participants were prompted to decide whether or not the sentence had been correct by pressing one of two buttons with their left and right index fingers (which hand was used to indicate correct and incorrect sentences, respectively, was randomized across participants). To start the next trial, participants had to press one of the response buttons. In the second session, participants’ processing of written German sentences was examined (see [51, 56]).

ERP recording and data analysis

The electroencephalogram (EEG) and the electro-oculogram (EOG) were recorded using Ag/AgCl electrodes. Seventy-four electrodes were mounted into an elastic cap (Easy Cap; FMS, Herrsching-Breitbrunn, Germany) according to the international 10/10 system (see Figure 5). The vertical EOG (VEOG) was recorded with an electrode below both eyes against the right earlobe reference. Horizontal eye movements were monitored using electrodes F9 and F10 (bipolar recording defined offline). An averaged right/left earlobe reference was calculated offline. Electrode impedance was kept below 5 kΩ. The electrode signals were amplified using three BrainAmp DC amplifiers (Brain Products GmbH, Gilching, Germany) and digitally stored using the BrainVision Recorder software (Brain Products GmbH, Gilching, Germany). The analog EEG signal was sampled at 5000 Hz, filtered online with a band pass of 0.1 to 250 Hz, and then downsampled online to 500 Hz for storage on disk. The signal was low-pass filtered offline with a high cut-off at 40 Hz, 12 dB/oct.
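As an illustration only, the following R sketch (using the signal package) mirrors the offline steps just described for a single channel. The study itself used BrainVision software for recording and preprocessing; the variable names (eeg_channel, a1) are assumptions.

```r
# Offline low-pass filtering and re-referencing for one EEG channel (illustrative sketch).
# Assumptions: `eeg_channel` is a numeric vector sampled at 500 Hz and `a1` is the
# left-earlobe channel, both recorded against the right-earlobe online reference.
library(signal)

fs <- 500                                      # sampling rate after downsampling (Hz)
lp <- butter(2, 40 / (fs / 2), type = "low")   # 2nd-order Butterworth (~12 dB/oct), 40 Hz high cut-off

eeg_filtered <- filtfilt(lp, eeg_channel)      # zero-phase low-pass filtering
a1_filtered  <- filtfilt(lp, a1)

# Averaged earlobe reference: with a right-earlobe online reference, subtracting
# half of the recorded left-earlobe channel re-references the data to linked earlobes.
eeg_reref <- eeg_filtered - a1_filtered / 2
```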

Since language-related ERPs have a rather broad topography, four adjacent electrodes were pooled, resulting in seven electrode clusters for each hemisphere (see Figure 5).

Figure 5

Schematic illustration of the electrode clusters. The 14 clusters, seven on each hemisphere, were used for the statistical analyses and are marked by connecting lines.
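A minimal R sketch of the pooling step, assuming each epoch is stored as a channels × time matrix with electrode labels as row names. The electrode-to-cluster assignment shown is a placeholder; the actual assignment follows Figure 5.

```r
# Pool four adjacent electrodes into one cluster by averaging.
# The cluster definitions below are hypothetical; the real assignment follows Figure 5.
clusters <- list(
  L1 = c("F7", "F5", "FT7", "FC5"),
  R1 = c("F8", "F6", "FT8", "FC6")
  # ... the remaining 12 clusters are defined analogously
)

# `epoch` is assumed to be a channels x time matrix with labelled rows
cluster_means <- sapply(clusters, function(chs) colMeans(epoch[chs, , drop = FALSE]))
# result: a time x cluster matrix of pooled amplitudes
```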

The behavioural data (percentage of correct judgements) were analyzed with a repeated-measures ANOVA with the within-participant factor Condition (correct, semantically incorrect, and morphosyntactically incorrect).
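A sketch of this analysis in R, using the ez package as one possible implementation (the manuscript only states that R was used); the data frame layout is an assumption.

```r
# Repeated-measures ANOVA on the percentage of correct judgements (illustrative sketch).
# Assumed layout of `beh`: one row per participant x condition with columns
# subject (factor), condition (correct / semantic / morphosyntactic), pct_correct.
library(ez)

ezANOVA(data   = beh,
        dv     = .(pct_correct),
        wid    = .(subject),
        within = .(condition))
```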

Trials with ocular artifacts (individually adjusted criterion of a maximum peak-to-peak amplitude between 80 and 120 μV within the time epoch of −100 to 1500 ms) and trials with artifacts from muscle movements, alpha waves, or drifts (individually adjusted criterion of up to 150 μV within the same epoch) were identified and rejected offline.
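The peak-to-peak screening can be sketched in a few lines of base R; the epoch layout and the threshold value used in the example are assumptions for illustration.

```r
# Peak-to-peak artifact screening (illustrative sketch).
# Assumption: each epoch is a channels x time matrix covering -100 to 1500 ms;
# `threshold_uV` is the individually adjusted criterion (80-120 uV for ocular channels,
# up to 150 uV for the remaining channels).
reject_epoch <- function(epoch, threshold_uV) {
  peak_to_peak <- apply(epoch, 1, function(channel) max(channel) - min(channel))
  any(peak_to_peak > threshold_uV)        # TRUE -> reject this trial
}

# keep only the clean trials from an assumed list of epoch matrices
clean_epochs <- Filter(function(e) !reject_epoch(e, threshold_uV = 120), all_epochs)
```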

The remaining segments were baseline corrected with respect to the 100 ms period preceding the onset of the critical word. Separate averages were calculated for the four conditions – (a) correct (sign onset code I), (b) semantically incorrect (sign onset code I), (c) correct (sign onset code II), and (d) morphosyntactically incorrect (sign onset code II) – for the time segment starting 100 ms before and ending 1500 ms after the critical word.
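The corresponding baseline correction and per-condition averaging, again as an illustrative base-R sketch with assumed data structures (epochs as channels × time matrices sampled at 500 Hz, i.e., a 50-sample baseline).

```r
# Baseline correction and averaging (illustrative sketch).
# Assumption: epochs are channels x time matrices spanning -100 to 1500 ms at 500 Hz
# (800 samples), so the baseline is the first 50 samples.
baseline_correct <- function(epoch, n_baseline = 50) {
  epoch - rowMeans(epoch[, 1:n_baseline, drop = FALSE])   # subtract channel-wise baseline mean
}

average_erp <- function(epochs) {                          # epochs: list of clean epoch matrices
  Reduce(`+`, lapply(epochs, baseline_correct)) / length(epochs)
}

erp_semantic_incorrect <- average_erp(epochs_semantic_incorrect)   # channels x time ERP
```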

Based on results from running t-tests and a visual inspection of the data, we ran analyses on the mean voltage of the following time epochs: 550–750 ms (N400) for semantic violations and 400–600 ms (LAN) and 1000–1300 ms (P600) for morphosyntactically violated sentences.

Time epochs were analyzed separately with an ANOVA comprising the repeated-measures factors Condition (correct vs. incorrect), Hemisphere (left vs. right), and Cluster (1–7). Sums of squares of Type II were calculated. To compensate for violations of the assumption of sphericity in multi-channel electroencephalographic data, the Huynh-Feldt correction was applied. Corrected degrees of freedom and corrected p-values, as well as the Huynh-Feldt epsilons (eps), are reported for the F-tests in the Results section. Statistically significant effects not involving the factor Condition are not reported. The difference between the incorrect and the correct condition was tested with one-tailed t-tests at each cluster. To correct for unequal variances, the degrees of freedom of the t-tests were corrected using the Welch algorithm [57]. The open-source statistical programming language R was used for the statistical analyses.
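A sketch of these analyses in R. The ez package is one way to obtain Type II sums of squares together with Huynh-Feldt-corrected p-values, and the long-format data frame (one mean amplitude per participant, condition, hemisphere, and cluster) is an assumption about how the data were organized; the manuscript only states that R was used.

```r
# Omnibus repeated-measures ANOVA for one time window (illustrative sketch).
# Assumed layout of `erp`: columns subject, condition (correct/incorrect),
# hemisphere (left/right), cluster (1-7), amplitude (mean voltage in the window).
library(ez)

ezANOVA(data   = erp,
        dv     = .(amplitude),
        wid    = .(subject),
        within = .(condition, hemisphere, cluster),
        type   = 2)   # Type II sums of squares; output includes Huynh-Feldt epsilon and corrected p-values

# Follow-up one-tailed Welch t-test at a single cluster (the test direction depends on
# the predicted polarity of the component; here: incorrect more positive, as for the P600).
with(subset(erp, cluster == "L1"),
     t.test(amplitude[condition == "incorrect"],
            amplitude[condition == "correct"],
            alternative = "greater",
            var.equal   = FALSE))
```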

Only trials followed by a correct response were included in the analysis. Participants who made more than 40% errors in at least one condition were excluded from the analysis; as described in the Participants section, three Deaf native signers were excluded because of low performance.

References

  1. Sandler W, Lillo-Martin D: Sign language and linguistic universals. 2006, Cambridge, UK: Cambridge University Press

  2. Emmorey K: Language, cognition, and the brain: Insights from sign language research. 2002, Mahwah, NJ: Lawrence Erlbaum and Associates

  3. Neville HJ, Coffey SA, Lawson DS, Fischer A, Emmorey K, Bellugi U: Neural systems mediating American Sign Language: effects of sensory experience and age of acquisition. Brain Lang. 1997, 57 (3): 285-308. 10.1006/brln.1997.1739.

  4. Newport EL, Supalla T: Sign language research at the millennium. The Signs of Language Revisited: An Anthology in Honor of Ursula Bellugi and Edward Klima. Edited by: Emmorey K, Lane H. 2000, Mahwah, NJ: Lawrence Erlbaum Associates, 103-114.

  5. Rathmann C, Mathur G: Is verb agreement different cross-modally? Modality and structure in signed and spoken languages. Edited by: Meier R, Cormier K, Quinto D. 2002, Cambridge: Cambridge University Press, 370-404.

  6. Crain S, Lillo-Martin D: An introduction to linguistic theory and language acquisition. 1999, Malden, Mass: Blackwell

  7. Hänel B: Der Erwerb der Deutschen Gebärdensprache als Erstsprache: Die frühkindliche Sprachentwicklung von Subjekt- und Objektverbkongruenz in DGS. 2005, Tübingen: Narr

  8. Campbell R, MacSweeney M, Waters D: Sign language and the brain: a review. J Deaf Stud Deaf Educ. 2008, 13 (1): 3-20.

  9. MacSweeney M, Capek CM, Campbell R, Woll B: The signing brain: the neurobiology of sign language. Trends Cogn Sci. 2008, 12 (11): 432-440. 10.1016/j.tics.2008.07.010.

  10. Neville HJ, Bavelier D: Neural organization and plasticity of language. Curr Opin Neurobiol. 1998, 8 (2): 254-258. 10.1016/S0959-4388(98)80148-7.

  11. Newman AJ, Bavelier D, Corina D, Jezzard P, Neville HJ: A critical period for right hemisphere recruitment in American Sign Language processing. Nat Neurosci. 2002, 5: 76-80. 10.1038/nn775.

  12. MacSweeney M, Woll B, Campbell R, McGuire PK, David AS, Williams SCR, Suckling J, Calvert GA, Brammer MJ: Neural systems underlying British Sign Language and audio-visual English processing in native users. Brain. 2002, 125 (7): 1583-1593. 10.1093/brain/awf153.

  13. Hagoort P, Brown C, Groothusen J: The syntactic positive shift (SPS) as an ERP measure of syntactic processing. Language and Cognitive Processes. 1993, 8 (4): 439-483. 10.1080/01690969308407585.

  14. Osterhout L: On the brain response to syntactic anomalies: manipulations of word position and word class reveal individual differences. Brain Lang. 1997, 59 (3): 494-522. 10.1006/brln.1997.1793.

  15. Hahne A, Friederici AD: Electrophysiological evidence for two steps in syntactic analysis. Early automatic and late controlled processes. J Cogn Neurosci. 1999, 11 (2): 194-205. 10.1162/089892999563328.

  16. Kutas M, Hillyard SA: Reading senseless sentences: brain potentials reflect semantic incongruity. Science. 1980, 207 (4427): 203-205. 10.1126/science.7350657.

  17. van Petten C, Kutas M: The use of event-related potentials in the study of brain asymmetries. Int J Neurosci. 1988, 39: 91-99. 10.3109/00207458808985695.

  18. Kutas M, Van Petten C: Event-related brain potential studies of language. Advances in Psychophysiology. Edited by: Ackles PK, Jennings JR, Coles MGH. 1988, Greenwich, Connecticut: JAI Press, 139-187. Vol. 3

  19. Van Berkum J, Hagoort JA, Brown CM: Semantic integration in sentences and discourse: evidence from the N400. J Cogn Neurosci. 1999, 11 (6): 657-671. 10.1162/089892999563724.

  20. Kutas M: In the company of other words: electro-physiological evidence for single word and sentence context effects. Language and Cognitive Processes. 1993, 8 (4): 533-572. 10.1080/01690969308407587.

  21. Neville HJ, Nicol JL, Barss A, Forster KI, Garrett MF: Syntactically based sentence processing classes: evidence from event-related brain potentials. J Cogn Neurosci. 1991, 3 (2): 151-165. 10.1162/jocn.1991.3.2.151.

  22. Coulson S, King JW, Kutas M: Expect the unexpected: event-related brain response to morphosyntactic violations. Language and Cognitive Processes. 1998, 13 (1): 21-58. 10.1080/016909698386582.

  23. Friederici AD, Hahne A, Mecklinger A: Temporal structure of syntactic parsing: early and late event-related brain potential effects. J Exp Psychol Learn Mem Cogn. 1996, 22 (5): 1219-1248.

  24. Osterhout L, Mobley LA: Event-related brain potentials elicited by failure to agree. J Mem Lang. 1995, 34: 739-773. 10.1006/jmla.1995.1033.

  25. Osterhout L, Holcomb PJ: Event-related brain potentials elicited by syntactic anomaly. J Mem Lang. 1992, 31 (6): 785-806. 10.1016/0749-596X(92)90039-Z.

  26. De Vincenzi M, Job R, Di Matteo R, Angrilli A, Penolazzi B, Ciccarelli L, Vespignani F: Differences in the perception and time course of syntactic and semantic violations. Brain Lang. 2003, 85 (2): 280-296. 10.1016/S0093-934X(03)00055-5.

  27. Mancini S, Molinaro N, Rizzi L, Carreiras M: A person is not a number: discourse involvement in subject-verb agreement computation. Brain Res. 2011, 1410: 64-76.

  28. Münte TF, Szentkuti A, Wieringa BM, Matzke M, Johannes S: Human brain potentials to reading syntactic errors in sentences of different complexity. Neurosci Lett. 1997, 235 (3): 105-108. 10.1016/S0304-3940(97)00719-2.

  29. Silva-Pereyra JF, Carreiras M: An ERP study of agreement features in Spanish. Brain Res. 2007, 1185: 201-211.

  30. Friederici AD, Hahne A, Saddy D: Distinct neurophysiological patterns reflecting aspects of syntactic complexity and syntactic repair. J Psycholinguist Res. 2002, 31 (1): 45-63. 10.1023/A:1014376204525.

  31. Hahne A, Müller JL, Clahsen H: Morphological processing in a second language: behavioral and event-related brain potential evidence for storage and decomposition. J Cogn Neurosci. 2006, 18 (1): 121-134. 10.1162/089892906775250067.

  32. Felser C, Clahsen H, Münte TF: Storage and integration in the processing of filler-gap dependencies: an ERP study of topicalization and wh-movement in German. Brain Lang. 2003, 87 (3): 345-354. 10.1016/S0093-934X(03)00135-4.

  33. Clahsen H: Normale und gestörte Kindersprache: Linguistische Untersuchungen zum Erwerb von Syntax und Morphologie. 1988, Amsterdam/Philadelphia: John Benjamins Publishing Co

  34. Friederici AD: Towards a neural basis of auditory sentence processing. Trends Cogn Sci. 2002, 6 (2): 78-84. 10.1016/S1364-6613(00)01839-8.

  35. Münte TF, Matzke M, Johannes S: Brain activity associated with syntactic incongruencies in words and pseudo-words. J Cogn Neurosci. 1997, 9 (3): 318-329. 10.1162/jocn.1997.9.3.318.

  36. Haupt FS, Schlesewsky M, Roehm D, Friederici AD, Bornkessel-Schlesewsky I: The status of subject-object reanalyses in the language comprehension architecture. J Mem Lang. 2008, 59 (1): 54-96. 10.1016/j.jml.2008.02.003.

  37. Kuperberg GR, Kreher DA, Sitnikova T, Caplan DN, Holcomb PJ: The role of animacy and thematic relationships in processing active English sentences: evidence from event-related potentials. Brain Lang. 2007, 100 (3): 223-237. 10.1016/j.bandl.2005.12.006.

  38. Kolk HHJ, Chwilla DJ, van Herten M, Oor PJW: Structure and limited capacity in verbal working memory: a study with event-related potentials. Brain Lang. 2003, 85 (1): 1-36. 10.1016/S0093-934X(02)00548-5.

  39. Neville HJ, Mills DL, Lawson DS: Fractionating language: different neural subsystems with different sensitive periods. Cereb Cortex. 1992, 2 (3): 244-258. 10.1093/cercor/2.3.244.

  40. Capek CM, Grossi G, Newman AJ, McBurney SL, Corina D, Röder B, Neville HJ: Brain systems mediating semantic and syntactic processing in deaf native signers: biological invariance and modality specificity. Proc Natl Acad Sci U S A. 2009, 106 (21): 8784-8789. 10.1073/pnas.0809609106.

  41. Hosemann J, Herrmann A, Steinbach M, Bornkessel-Schlesewsky I, Schlesewsky M: Lexical prediction via forward models: N400 evidence from German Sign Language. Neuropsychologia. 2013, 51 (11): 2224-2237. 10.1016/j.neuropsychologia.2013.07.013.

  42. Osterhout L, Nicol J: On the distinctiveness, independence, and time course of the brain responses to syntactic and semantic anomalies. Language and Cognitive Processes. 1999, 14 (3): 283-317. 10.1080/016909699386310.

  43. Huynh H, Feldt LS: Estimation of the Box correction for degrees of freedom from sample data in randomized block and split-plot designs. J Educ Stat. 1976, 1: 69-82.

  44. Kutas M, Hillyard SA: Brain potentials during reading reflect word expectancy and semantic association. Nature. 1984, 307 (5947): 161-163. 10.1038/307161a0.

  45. Chwilla DJ, Brown CM, Hagoort P: The N400 as a function of the level of processing. Psychophysiology. 1995, 32 (3): 274-285. 10.1111/j.1469-8986.1995.tb02956.x.

  46. Holcomb PJ, Neville HJ: Natural speech processing: an analysis using event-related brain potentials. Psychobiology. 1991, 19 (4): 286-300.

  47. Röder B, Rösler F, Neville HJ: Event-related potentials during auditory language processing in congenitally blind and sighted people. Neuropsychologia. 2000, 38 (11): 1482-1502. 10.1016/S0028-3932(00)00057-9.

  48. McCallum WC, Farmer SF, Pocock PV: The effects of physical and semantic incongruities on auditory event-related potentials. Electroencephalogr Clin Neurophysiol. 1984, 59: 477-488. 10.1016/0168-5597(84)90006-6.

  49. Federmeier KD, Segal JB, Lombrozo T, Kutas M: Brain responses to nouns, verbs and class-ambiguous words in context. Brain. 2000, 123 (Pt 12): 2552-2566.

  50. Münte TF, Wieringa BM, Weyerts H, Szentkuti A, Matzke M, Johannes S: Differences in brain potentials to open and closed class words: class and frequency effects. Neuropsychologia. 2001, 39 (1): 91-102. 10.1016/S0028-3932(00)00095-6.

  51. Skotara N, Kügow M, Salden U, Hänel-Faulhaber B, Röder B: ERP correlates of intramodal and crossmodal L2 acquisition. BMC Neurosci. 2011, 12 (1): 48. 10.1186/1471-2202-12-48.

  52. Meier RP: Why different, why the same? Explaining effects and non-effects of modality upon linguistic structure in sign and speech. Modality and structure in signed and spoken languages. Edited by: Meier RP, Cormier K, Q-P D. 2002, Cambridge: Cambridge University Press, 1-25.

  53. Liddell SK, Johnson RE: American Sign Language: the phonological base. Sign Language Studies. 1989, 64: 195-278.

  54. Muir LJ, Richardson IE: Perception of sign language and its application to visual communications for deaf people. J Deaf Stud Deaf Educ. 2005, 10 (4): 390-401. 10.1093/deafed/eni037.

  55. Emmorey K, Luk G, Pyers JE, Bialystok E: The source of enhanced cognitive control in bilinguals: evidence from bimodal bilinguals. Psychol Sci. 2008, 19 (12): 1201-1206. 10.1111/j.1467-9280.2008.02224.x.

  56. Skotara N, Salden U, Kügow M, Hänel-Faulhaber B, Röder B: The influence of language deprivation in early childhood on L2 processing: An ERP comparison of deaf native signers and deaf signers with a delayed language acquisition. BMC Neurosci. 2012, 13: 44. 10.1186/1471-2202-13-44.

  57. Welch BL: The generalization of “Student’s” problem when several different population variances are involved. Biometrika. 1947, 34 (1/2): 28-35. 10.2307/2332510.


Acknowledgements

We are indebted to the individuals from the German Deaf community who volunteered for this research. We also thank Eva Bauch, Simone Bräunlich, Melanie Drewke, Malwine Masius, Lutz Pepping, Janna Protzak, Asha Rajashekhar, Miriam Seebold, Vincent Steffes, Dagmar Tödter, Ivo Weber, Viktor Werner, and Maren Wolfram for their help in conducting the experiments and their contributions to the draft.

Preliminary results of this experiment were presented at the Annual Meeting of the Society for Neuroscience in 2008 (Röder B, Skotara N, Kügow M, Hänel-Faulhaber B: Sensitive phases for learning a first and a second language: An event-related potential study in German and German Sign Language. Washington, DC, USA: Abstract, 38th Annual Meeting of the Society for Neuroscience; 2008). The research was supported by the German Research Foundation (DFG: SFB 538 “Multilingualism”; grant to BHF and BR). DB is funded by a European Research Council grant (ERC-2009-AdG 249425-CriticalBrainChanges) to BR.

Author information


Corresponding author

Correspondence to Barbara Hänel-Faulhaber.

Additional information

Competing interests

We declare that we have not received reimbursements, fees, funding, or salary from an organization that may in any way gain or lose financially from the publication of this manuscript, either now or in the future, and that no such organization is financing this manuscript. We do not hold any stocks or shares in an organization that may in any way gain or lose financially from the publication of this manuscript, either now or in the future. We do not hold, and are not currently applying for, any patents relating to the content of the manuscript, and we have not received reimbursements, fees, funding, or salary from an organization that holds or has applied for patents relating to the content of the manuscript. We do not have any other financial competing interests. We also declare that there are no non-financial competing interests (political, personal, religious, ideological, academic, intellectual, commercial, or any other) in relation to this manuscript.

Authors’ contributions

BHF, NS and BR designed the experiment. MK and US ran the ERP experiments. NS, DB and BR analyzed the data. BHF, NS, US, DB and BR wrote the paper. All authors read and approved the final manuscript.


Rights and permissions

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly credited. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.


About this article


Cite this article

Hänel-Faulhaber, B., Skotara, N., Kügow, M. et al. ERP correlates of German Sign Language processing in deaf native signers. BMC Neurosci 15, 62 (2014). https://doi.org/10.1186/1471-2202-15-62

