Word Recognition & Auditory Perception Lab

Villanova University | Department of Psychological & Brain Sciences


WRAP Lab

Speech Perception and Language Lab at Villanova University

Welcome to the Word Recognition & Auditory Perception Lab (WRAP Lab)! Our group studies how human listeners recognize speech and understand spoken language. Investigating language processing as it happens is central to our approach, and we use a combination of computational, cognitive neuroscience, and behavioral techniques to study these processes.

Find out more about our research on this site. Thanks for stopping by! — Joe Toscano


NEWS & UPDATES

Here's what we've been up to lately

August 2020

Review paper on the neuroscience of speech perception

Laura Getz and Joe Toscano published a review paper in WIREs Cognitive Science that discusses recent work demonstrating two key principles in speech perception: (1) gradiency (i.e., listeners are highly sensitive to fine-grained acoustic differences in the speech signal), and (2) interactivity (i.e., higher-level linguistic information feeds back down to influence early perception). The paper describes how studies of the time-course of speech perception have provided evidence for both principles in spoken language processing.
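For intuition, here's a toy Python sketch contrasting the two coding schemes the review distinguishes. The numbers are invented for illustration and are not data from the paper; a gradient code preserves fine-grained VOT differences, while a strictly categorical code discards everything but the category label.

    # Toy illustration of gradiency (invented values, not data from the review).
    import numpy as np

    vot = np.arange(0, 45, 5)               # voice onset time (ms) along a /b/-/p/ continuum
    categorical = (vot > 20).astype(float)  # step function: only the category label survives
    gradient = vot / vot.max()              # graded: fine-grained VOT differences are preserved

    # Under a gradient code, 10 ms and 20 ms VOTs evoke different responses
    # even though both fall on the /b/ side of the category boundary.
    for v, c, g in zip(vot, categorical, gradient):
        print(f"VOT {v:2d} ms  categorical: {c:.1f}  gradient: {g:.2f}")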

July 2020

Identifying acoustic cues using tools from graph theory

Anne Marie Crinnion, Beth Malmskog, and Joe Toscano recently published a paper in Psychonomic Bulletin & Review that uses techniques from graph theory to identify acoustic cues for speech sound categorization. This approach allows us to find a balance between models that are too complex (e.g., including all possible cues) and models that are too simple (i.e., not including enough cues to account for differences between talkers). This work is the result of Anne Marie's research in the lab during her undergraduate studies, which was supported by a Herchel Smith Undergraduate Research Fellowship from Harvard University.
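As a loose illustration of the general idea (a sketch under invented assumptions, not the published method), one can treat candidate cues as nodes in a graph, connect cues that are statistically redundant, and keep one representative cue per cluster, which lands between the too-complex and too-simple extremes. The data, threshold, and selection rule below are all made up for the example.

    # Hypothetical sketch of graph-based cue selection (illustrative only;
    # see Crinnion, Malmskog, & Toscano's paper for the actual method).
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # Fake measurements: 200 tokens x 6 candidate cues.
    # Cues 0-2 are constructed to be redundant; cues 3-5 are independent.
    base = rng.normal(size=(200, 1))
    cues = np.hstack([
        base + 0.1 * rng.normal(size=(200, 3)),  # three highly correlated cues
        rng.normal(size=(200, 3)),               # three independent cues
    ])

    # Nodes are cues; edges link cues whose |correlation| exceeds a threshold.
    corr = np.corrcoef(cues.T)
    G = nx.Graph()
    G.add_nodes_from(range(cues.shape[1]))
    for i in range(cues.shape[1]):
        for j in range(i + 1, cues.shape[1]):
            if abs(corr[i, j]) > 0.7:
                G.add_edge(i, j)

    # Keep one representative cue per connected component: fewer than "all
    # possible cues," but more than a single cue.
    selected = sorted(min(comp) for comp in nx.connected_components(G))
    print("selected cues:", selected)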

January 2020

Long-lasting gradient activation of referents

In a recent paper published in the Journal of Memory and Language, WRAP Lab grad student Ben Falandays, along with our collaborator Sarah Brown-Schmidt from Vanderbilt, presents work demonstrating that listeners maintain gradient activation of referents over extended time periods. Listeners heard short discourses describing male and female referents who were indicated by a pronoun that varied along a continuum from he to she. Even when the identity of the referent was not disambiguated until several seconds after the pronoun, listeners continued to maintain graded activation of the two possible referents. These results suggest that gradient information from the speech signal persists at higher levels of linguistic representation.
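To make "graded activation" concrete, here's a hypothetical Python example with invented fixation proportions (not the paper's data or analysis). A purely categorical listener's looks would jump abruptly at the category boundary; looks that change smoothly across the pronoun continuum are one signature of gradiency.

    # Invented fixation proportions, for illustration only.
    import numpy as np

    steps = np.arange(1, 8)  # pronoun continuum: 1 = clear "he" ... 7 = clear "she"
    # Simulated proportion of looks to the female referent before disambiguation.
    looks_to_female = np.array([0.08, 0.18, 0.33, 0.50, 0.68, 0.81, 0.93])

    # A roughly linear trend across the continuum, rather than a step at the
    # boundary, suggests both referents remain partially active.
    slope, intercept = np.polyfit(steps, looks_to_female, 1)
    print(f"looks to female referent rise ~{slope:.2f} per continuum step")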

April 2019

Electrophysiological evidence for top-down lexical influences on early speech perception

In a new paper appearing in Psychological Science, WRAP Lab post-doc Laura Getz investigated whether information about words feeds back to affect low-level speech perception. For example, seeing the word "amusement" leads to the expectation that the word "park" should come next. Are you more likely to perceive an ambiguous speech sound between "bark" and "park" as a "p" when you hear it in the context of "amusement"? We used the event-related potential (ERP) technique to measure how the brain responds to these ambiguous sounds at early stages of speech perception (about 100 ms after sound onset). We found that context does in fact change perception of ambiguous speech sounds, helping to address a long-standing debate about the influence of top-down information on perception.
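For readers curious what this kind of analysis can look like in code, here's a minimal hypothetical sketch with simulated ERP data; the 75-125 ms window, sampling rate, and effect size are all assumptions made for the example, not details from the paper.

    # Simulated-data sketch of comparing early ERP amplitude across contexts.
    import numpy as np

    rng = np.random.default_rng(1)
    fs = 500                              # sampling rate (Hz), assumed
    times = np.arange(-0.1, 0.5, 1 / fs)  # epoch: -100 to +500 ms around sound onset

    # Fake single-trial ERPs (trials x samples) for the same ambiguous token
    # heard in a "park"-biasing vs. a "bark"-biasing context.
    erp_park_bias = rng.normal(0.0, 2.0, (60, times.size))
    erp_bark_bias = rng.normal(0.5, 2.0, (60, times.size))  # offset built in

    # Mean amplitude in an assumed N1 window (75-125 ms post-onset).
    win = (times >= 0.075) & (times <= 0.125)
    n1_park = erp_park_bias[:, win].mean()
    n1_bark = erp_bark_bias[:, win].mean()
    print(f"N1 mean amplitude: park context {n1_park:.2f}, bark context {n1_bark:.2f}")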

February 2019

Welcome Dr. Sarrett!

The WRAP Lab welcomes postdoctoral fellow Dr. McCall Sarrett! McCall joins us from the University of Iowa, where she completed her Ph.D. in Neuroscience. Prior to her graduate work at Iowa, McCall completed her B.A. in Neuroscience & Speech Perception at the University of Tennessee. Her research examines the cognitive and neural processes subserving speech perception, lexical competition, and second language acquisition, and uses machine learning techniques to decode speech information from neurophysiological data. Welcome McCall!

CONTACT INFO

Please contact us if you would like to learn more about our research, request a copy of a paper, are interested in joining the lab, or have any other questions. Our email address is wraplab@villanova.edu.

Scheduled to participate in a study? The main lab is located in Tolentine Hall, Room 231. Some of our experiments also take place in other labs in our building. If you're scheduled to participate in an experiment but aren't sure where to go, please come to the main lab and a research assistant will meet you there!

Location: 231 Tolentine Hall
Phone: +1 610.519.4755
Email: wraplab@villanova.edu
Facebook: VU WRAP Lab
Villanova University
Department of Psychological and Brain Sciences
800 E Lancaster Ave
Villanova, PA 19085