Wednesday, December 3, 2008

Dual Stream Model of Speech/Language Processing: Tractography Evidence

The Dual Stream model of speech/language processing holds that there are two functionally distinct computational/neural networks that process speech/language information, one that interfaces sensory/phonological networks with conceptual-semantic systems, and one that interfaces sensory/phonological networks with motor-articulatory systems (Hickok & Poeppel, 2000, 2004, 2007). We have laid out our current best guess as to the neural architecture of these systems in our 2007 paper:


It is worth pointing out that under reasonable assumptions some version of a dual stream model has to be right. If we accept (i) that sensory/phonological representations make contact both with conceptual systems and with motor systems, and (ii) that conceptual systems and motor-speech systems are not the same thing, then it follows that there must be two processing streams, one leading to conceptual systems, the other leading to motor systems. This is not a new idea, of course. It has obvious parallels to research in the primate visual system, and (well before the visual folks came up with the idea) it was a central feature of Wernicke's model of the functional anatomy of language. In other words, not only does the model make sense for speech/language processing, it appears to be a "general principle of sensory system organization" (Hickok & Poeppel, 2007, p. 401) and it has stood the test of time.

So, all that remains is to work out the details of these networks. A new paper in PNAS by Saur et al. may provide some of these details. In an fMRI experiment, they used two tasks, one that they argued taps the dorsal pathway (pseudoword repetition) and one that taps the ventral pathway (sentence comprehension). The details of how these tasks were used leave something to be desired in my view, but they did seem to highlight some differences, so I'm not going to quibble for now. Here are the activation maps (repetition in blue, comprehension in red):


Notice the more ventral involvement along the length of the temporal lobe (STS, MTG, AG) for comprehension relative to repetition, as well as the more posterior involvement in the frontal lobe for repetition.
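
Just to make the subtraction logic behind such maps concrete, here is a toy sketch; this is my own illustration, not the authors' actual analysis, and the array names, sizes, and threshold are hypothetical:

```python
# Toy sketch of the subtraction logic behind task-contrast maps (NOT the
# authors' actual pipeline). Assumes per-subject, per-condition activation
# (beta) maps that are already estimated and spatially aligned.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, shape = 12, (8, 8, 8)                    # hypothetical sizes
betas_comp = rng.normal(size=(n_subjects, *shape))   # sentence comprehension
betas_rep = rng.normal(size=(n_subjects, *shape))    # pseudoword repetition

# Voxelwise paired t-test: comprehension vs. repetition
t_map, p_map = stats.ttest_rel(betas_comp, betas_rep, axis=0)

# "Red" voxels: comprehension > repetition; "blue" voxels: the reverse
red = (t_map > 0) & (p_map < 0.001)
blue = (t_map < 0) & (p_map < 0.001)
print(red.sum(), "red voxels,", blue.sum(), "blue voxels")
```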

They then used peaks in these activations as seeds for a tractography analysis using DTI. Here is a summary figure showing the distinction between the two pathways (red = ventral, blue = dorsal).



The authors localize the white matter tract of the dorsal pathway to the arcuate/superior longitudinal fasciculi, and the tract of the ventral pathway to the extreme capsule (not the uncinate fasciculus).
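
For readers who haven't worked with tractography, here is a toy sketch of the general seed-based idea, deterministic streamline following from seed voxels. Note that this is not the method Saur et al. actually used, and all array names and parameter values are hypothetical:

```python
# Toy sketch of deterministic streamline tracking from seed voxels
# (illustrative only; not the authors' method). Assumes:
#   pdd : array (x, y, z, 3) of principal diffusion directions (unit vectors)
#   fa  : array (x, y, z) of fractional anisotropy
#   seed: a voxel coordinate, e.g., near a functional activation peak
import numpy as np

def track(seed, pdd, fa, step=0.5, fa_stop=0.2, max_steps=2000):
    """Follow the principal diffusion direction from a seed point until the
    streamline leaves the volume or anisotropy drops below fa_stop."""
    pos = np.asarray(seed, dtype=float)
    path = [pos.copy()]
    prev_dir = None
    for _ in range(max_steps):
        vox = tuple(np.round(pos).astype(int))
        if (np.any(np.array(vox) < 0)
                or np.any(np.array(vox) >= np.array(fa.shape))
                or fa[vox] < fa_stop):
            break  # left the volume or anisotropy too low
        d = pdd[vox]
        if prev_dir is not None and np.dot(d, prev_dir) < 0:
            d = -d  # keep a consistent orientation along the streamline
        pos = pos + step * d
        prev_dir = d
        path.append(pos.copy())
    return np.array(path)

# Toy data: a direction field pointing along x, with uniform FA
shape = (20, 20, 20)
pdd = np.zeros(shape + (3,)); pdd[..., 0] = 1.0
fa = np.full(shape, 0.5)
print(track((2, 10, 10), pdd, fa).shape)   # a streamline running along x
```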

I haven't looked closely at the details of the analysis (I would love to hear comments!), but this sort of study seems just the ticket to getting us closer to delineating the functional anatomical details of the speech/language system.

References

Hickok, G., & Poeppel, D. (2000). Towards a functional neuroanatomy of speech perception. Trends in Cognitive Sciences, 4(4), 131-138. DOI: 10.1016/S1364-6613(00)01463-7

Hickok, G., & Poeppel, D. (2004). Dorsal and ventral streams: A framework for understanding aspects of the functional anatomy of language. Cognition, 92(1-2), 67-99. DOI: 10.1016/j.cognition.2003.10.011

Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393-402. DOI: 10.1038/nrn2113

Saur, D., Kreher, B. W., Schnell, S., Kummerer, D., Kellmeyer, P., Vry, M.-S., Umarova, R., Musso, M., Glauche, V., Abel, S., Huber, W., Rijntjes, M., Hennig, J., & Weiller, C. (2008). Ventral and dorsal pathways for language. Proceedings of the National Academy of Sciences, 105(46), 18035-18040. DOI: 10.1073/pnas.0805234105

11 comments:

Anonymous said...

Thanks for the info. I will read the paper and come back to comment.

Anonymous said...

Hey,
Thanks for the post. I have two questions (sorry if they are somewhat off; I don't really "do" speech, except in the non-sciency way :().
1. Do you assume/expect interaction between the two streams during the processing of words/sentences? For example, could (overt or covert/preparatory) articulation of a word influence recognition on-line? Homonyms might be a good example of where this could happen.
This question may go a little in the simulation direction, I'm not sure. I'm also not sure whether this interaction, if present, would imply just co-activation (spill-over) of activity in both streams in relation to meaningful speech, or whether it would be functionally "more relevant".
2. A comment on the logic of the conclusion that having two streams is inevitable given the two stated premises. I'm not sure those two premises are enough for the conclusion, actually, because you could also imagine having one stream with three nodes: from the sensory system, through the conceptual system, to the motor system. In that case you would have separate conceptual and motor systems, and they would differ, but one stream would still suffice. Of course, you can say that with pseudowords, for example, you don't engage the conceptual system at all, but still...
(Sorry, this may come across as a mean comment; it is not intended as such.)

Greg Hickok said...

Thanks for the comments, Andreja.

1. Yes, I assume that there is some degree of interaction between the streams. Exactly how they interact is an empirical question that is worth investigating. I do believe that motor "simulations" (forward models, if you like) of heard speech can influence perception in a top-down way, which may be particularly useful in noisy listening conditions (see the toy sketch at the end of this reply). Notice that this is very different from saying that motor simulations ARE the perceptions.

2. I didn't take your 2nd comment in a negative way at all. You raise a good point. It is logically possible that the pathway from the sensory system to the motor system passes through the conceptual system. As you yourself anticipated, however, this can't be right, because we are perfectly capable of verbatim repetition of meaningless speech. In addition, the effects on speech production of (i) delayed auditory feedback, (ii) artificially altered feedback (e.g., John Houde's work), (iii) late-onset deafness, (iv) the tendency to pick up the speech characteristics of those around us (accents), and (v) the double dissociation of speech comprehension and repetition abilities (transcortical sensory vs. conduction aphasia) all attest to the existence of a relatively "direct" connection between speech perception and production systems, i.e., one that is not semantically mediated.

Have a look at the writings of Wernicke and Lichtheim for some early discussion of exactly these issues. There is good reason why these authors postulated a dual stream model!
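
To make the "useful in noise" intuition from point 1 concrete, here is a toy sketch, entirely my own illustration with made-up syllables and numbers, of a motor-based prediction acting as a prior that gets combined with noisy acoustic evidence:

```python
# Toy illustration (not a model anyone has proposed here): a motor "forward
# model" supplies a prior over expected syllables, which is combined with
# noisy acoustic evidence. The prior matters little for clear speech but
# helps more when the signal is degraded. All names and numbers are made up.
import numpy as np

rng = np.random.default_rng(1)
candidates = ["pa", "ba", "da"]        # hypothetical syllable candidates
prior = np.array([0.6, 0.3, 0.1])      # motor prediction: "pa" is expected

def recognize(true_idx, noise_sd):
    """Combine noisy acoustic evidence with the motor-based prior."""
    signal = np.zeros(len(candidates))
    signal[true_idx] = 2.0                                 # acoustic signal strength
    evidence = signal + rng.normal(0.0, noise_sd, size=len(candidates))
    likelihood = np.exp(evidence - evidence.max())         # softmax-style likelihood
    likelihood /= likelihood.sum()
    posterior = likelihood * prior                         # top-down influence
    return posterior / posterior.sum()

print(recognize(true_idx=0, noise_sd=0.3))   # clear speech: evidence dominates
print(recognize(true_idx=0, noise_sd=3.0))   # noisy speech: the prior helps more
```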

Anonymous said...

Thanks for the reply.
1. Noticed :)

2. I don't really question the conclusion (nevertheless, thanks for listing the arguments; good to learn). I just missed an additional premise, beyond the ones posted, that would make the conclusion follow necessarily without adding more information.
And, actually, I just learned on your blog that Wernicke had the dual stream idea before it appeared in the visual domain. That's cool :)

Greg Hickok said...

What I meant by "that sensory/phonological representations make contact both with conceptual systems and with motor systems" was that they make ~direct contact. I wasn't clear about this, so thanks for calling me on it.

Ellen Lau said...

Ambitious analysis, but this stuff still feels like black magic to me. Also, I'm not so well versed in the speech literature, but is it standard to use a subtraction of repeating pseudowords minus repeating words to isolate auditory-motor mapping? Aren't there also going to be a lot of differences in automatic semantic processing? The N400 component, often associated with semantic processing, is sometimes bigger for pseudowords than for words, maybe reflecting activation of a bunch of non-matching candidates; it seems like you could get the same thing in fMRI.

Greg Hickok said...

DTI, which is based on the direction of water diffusion in tissue, seems to be pretty robust as a measurement and does a good job of identifying prominent white matter tracts. It gets tricky when you try to link these tracts to specific cortical areas and functions because, in my limited experience, the position and size of the seed is critical and touchy. Further, it's not the case (DTI experts, correct me if I'm wrong here!) that you can use a seed ROI that is limited to grey matter. Rather, you have to grow the seed to include the underlying white matter, where the diffusion signal is measurable. This is kind of important because it means you can't directly assess where a chunk of functionally activated grey matter projects.
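
For concreteness, here is a minimal sketch of what "growing" a functional grey-matter seed into the underlying white matter might look like. This is not the authors' pipeline; the array names, the FA threshold, and the number of dilation steps are all just assumptions for illustration:

```python
# Minimal sketch (not the authors' pipeline) of growing a functional
# grey-matter seed into adjacent white matter before tractography.
# Assumes you already have, as NumPy arrays on the same voxel grid:
#   roi_gm : boolean mask around an activation peak (grey matter)
#   fa     : fractional anisotropy map from the diffusion tensor fit
import numpy as np
from scipy.ndimage import binary_dilation

def grow_seed_into_wm(roi_gm, fa, fa_threshold=0.2, n_iter=2):
    """Dilate the ROI a few voxels and keep only voxels with a usable
    diffusion signal (FA above threshold), i.e., likely white matter."""
    dilated = binary_dilation(roi_gm, iterations=n_iter)
    wm_mask = fa > fa_threshold           # crude white-matter proxy
    return dilated & (wm_mask | roi_gm)   # grown seed; keeps original GM voxels

# Toy example:
roi_gm = np.zeros((10, 10, 10), dtype=bool)
roi_gm[5, 5, 5] = True
fa = np.random.default_rng(0).random((10, 10, 10))
print(grow_seed_into_wm(roi_gm, fa).sum(), "seed voxels")
```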

Given this, in the context of the Saur paper, can we say with some certainty that the "dorsal tract" terminates in the posterior planum temporale, as I would predict? No. Can we say with some certainty that there are two largely distinct fiber tracts connecting the posterior temporal lobe with frontal structures? Yes.

Regarding the pseudoword vs. word task: I mentioned in the post that the tasks left something to be desired, and this is exactly what I was alluding to. I'm not convinced that this kind of subtraction unambiguously highlights the "dorsal" vs. "ventral" pathways. On the other hand, the pattern of activation they found looks reasonable given what else we know about these systems, so they probably hit on something workable.

Thanks for your comment!

Anonymous said...

Hey Greg (as you seem to have drawn the short straw in Q&A), I have a slightly related question: Would you say the entire arcuate fasciculus pathway conveys phonetic information? Or, are you more inclined towards the AF being divided into two segments, with one terminating in the STG and the other in the MTG?

Also, as I'm having trouble accessing the paper for some reason, do Saur et al. distinguish between these two accounts?

On a slightly lighter note: great blog guys, keep up the good -- and, more importantly, informative -- work.

Anonymous said...

Hello,
Dorothee and I have been talking about this already. I think she's done a terrific job!

Apart from the obvious interaction in "normal" language processing tasks, a study by Jefferies et al. (2005) suggests a contribution of semantic representations even to nonword repetition. SD patients were better at repeating nonwords that were similar to words they were able to produce and comprehend. Thus, the integrity of semantic representations (located in the anterior portions of the temporal lobe?) had a direct influence on the more "dorsal" task of nonword repetition.

Kind regards,
Tobias Bormann, Freiburg


Jefferies, E., Jones, R.W., Bateman, D., & Lambon Ralph, M.A. (2005). A semantic contribution to nonword recall? Evidence for intact phonological processes in semantic dementia. Cognitive Neuropsychology, 22 (2), 183-212.

BSteffi said...

May I inquire about the functional anatomy of sign language repetition and comprehension, compared to the neural processing of speech, as it pertains to the Dual Stream Model? The title says speech/language. Is signed language included in the "language" part of the title?

Greg Hickok said...

The dual stream concept, functionally speaking, holds for sign language as well. It holds for all of sensory processing, for that matter (see Milner and Goodale's work in the visual domain). The neural circuits will vary a bit according to the particular input and output modalities, and this is what has been found in sign language.