Experimental Linguistics Module – Autumn 2011

Tuesday 2-4pm, Bancroft Building room 102.6

  • Module Description

    The goal of this module is to take students with no prior training in the methods or tools of experimental psychological science and provide them with the theoretical and practical training required to be able to critically engage with the Psycholinguistics literature and to undertake experimental linguistics research themselves. The module will include hands-on training in inferential statistics and hypothesis testing, experimental design, data collection (including training in ethical human subjects research protocols), and data analysis. The module will also engage students in considering strengths and limitations of various kinds of linguistics data, and how multiple sources of data and methods of data collection can be combined to enhance understanding. Students will develop their critical reading skills and gain practice in presenting primary source literature to their peers.

Archive for the ‘speech perception’ Category

experiment results – it worked!

Posted by Linnaea on December 22, 2011

Finally, I have some results to share with you. Sorry it took longer than expected – the data set turned out to be larger and more complicated than I had anticipated, and required a great deal of processing to extract the comparisons we were interested in. There are still lots of analyses I haven’t had time to run, but at least I’ve managed to address the basic questions that motivated this research in the first place.



Posted in experiment, speech perception | Leave a Comment »

Weeks 4-6: catch up

Posted by Linnaea on November 3, 2011

I haven’t kept up with regular posts here over the last couple of weeks. Sorry for that. As everyone who has been to class knows, we’ve made quite a bit of progress with our exploration of the literature on non-native speech perception. We read the classic overview paper by Janet Werker that summarizes a whole host of experiments testing the ability of children and adults to perceive speech sound contrasts not present in whatever language or languages they were exposed to from birth. The upshot of this research seems to be that while we are born with the capacity to distinguish all possible speech sounds (useful, since there’s no telling what language we’ll find ourselves learning), this ability declines very rapidly, such that 10-month-olds are already appreciably worse than 6-month-olds, and 12-month-olds worse still.

This body of research mostly relies on behavioural measures, such as high amplitude sucking tests, head turn procedures, and button-pressing forced choice discrimination experiments. These measures can tell us that, say, adult speakers of English aren’t very good at discriminating Hindi retroflex and dental stops (since this is not a contrast we make use of in English), but they can’t tell us where or when in the perception process the discrimination trouble arises. It could be that we can hear the contrast perfectly well, but simply can’t make use of it to perform the task.
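As an aside on how results from forced choice discrimination tasks like these get reported: they are usually summarised with a sensitivity measure such as d′ rather than raw accuracy, so that genuine discriminability can be separated from response bias. Here’s a rough Python sketch of that calculation for an AX (same/different) task; the function and the trial counts are purely illustrative, not data from any of the studies we read.

```python
# Sketch: computing d' (sensitivity) for an AX "same/different" discrimination task.
# Hits = "different" responses to genuinely different pairs; false alarms = "different"
# responses to identical pairs. All numbers below are made up for illustration.
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Return d' with a log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical counts for one English listener on the Hindi dental/retroflex contrast:
print(dprime(hits=28, misses=12, false_alarms=15, correct_rejections=25))
```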

To investigate this further, we looked at a series of experiments that rely on the Mismatch Negativity (http://en.wikipedia.org/wiki/Mismatch_negativity) or Mismatch Field, an evoked neural response associated with the detection of anomaly. For instance, we read Näätänen et al (1999), who showed that Finnish speakers were unable to detect an anomalous /õ/ vowel sound against a background of /ö/s, but that Estonian speakers had no trouble perceiving the difference. Why? Estonian speakers make use of an /õ/ vs. /ö/ contrast in their language, but Finnish speakers don’t. And the failure to find a mismatch response for the Finnish speakers suggests that Finnish speakers just genuinely do not reliably perceive the difference in sounds, even at the early, low-level, automatic stages of processing indexed by the MMN.
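In practice, the mismatch response is quantified as a difference wave: the averaged neural response to the rare deviant stimulus minus the averaged response to the frequent standard, typically inspected around 150-250 ms after stimulus onset. Here’s a bare-bones numpy sketch of that averaging step, assuming the EEG has already been segmented into baseline-corrected epochs; the array shapes, trial counts and sampling rate are invented for illustration.

```python
# Sketch: computing a mismatch (MMN) difference wave from pre-segmented EEG epochs.
# standard_epochs / deviant_epochs: (n_trials, n_samples) arrays, baseline-corrected,
# sampled at `srate` Hz with stimulus onset at sample 0. Placeholder data throughout.
import numpy as np

srate = 500  # assumed sampling rate in Hz
standard_epochs = np.random.randn(800, 400)  # e.g. 800 standard trials
deviant_epochs = np.random.randn(120, 400)   # e.g. 120 deviant trials

standard_erp = standard_epochs.mean(axis=0)  # average over trials -> standard ERP
deviant_erp = deviant_epochs.mean(axis=0)    # average over trials -> deviant ERP
mmn_wave = deviant_erp - standard_erp        # mismatch response = deviant minus standard

# Mean amplitude in a typical MMN window (150-250 ms post-onset):
window = slice(int(0.150 * srate), int(0.250 * srate))
print("Mean amplitude in MMN window:", mmn_wave[window].mean())
```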

Finally, we turned to phonotactics. It’s one thing to say that we lose the ability to perceive a sound contrast, such as /ö/ vs. /õ/, if we never hear it. But what about perceiving sequences of sounds, where the individual segments are all phonemes in our language, but the specific arrangement of those segments is not possible? Japanese, for instance, does not allow consonant clusters in syllable onsets, or any but a very few consonants in coda position. The result is that English words like ‘tennis’ (phonetically /tE-nIs/) are borrowed as ‘tEnIsu’ in Japanese (rough transcription), with an additional ‘u’ added word-finally to turn the ‘s’ into an onset rather than an illegal coda.

In week 5, we read a series of papers all looking at what effect this phonotactic constraint (the prohibition against non-nasal codas and complex onsets) has for Japanese speakers when they listen to nonsense words like `ebzo’. Dupoux et al (1999), in a set of 4 experiments, show that Japanese speakers are significantly worse than French speakers at discriminating between nonwords like `ebzo’ that violate the syllable structure rules of Japanese, and nonwords like `ebuzo’ that conform to those rules. Apparently the process of epenthesis reflected in the loanword adaptation we see in ‘tEnIsu’ is not merely due to restrictions on what Japanese speakers can say, but may in fact arise from an inability of Japanese speakers even to hear the illegal word forms accurately in the first place.

This suggests that language-specific grammatical factors have an impact on very early stages of speech perception. The paper we read by Dehaene et al (2000) confirms this suspicion. Using a slightly modified version of the Mismatch paradigm developed by Näätänen, Dehaene et al show that Japanese speakers don’t exhibit a mismatch response to deviant `ebuzo’ following multiple `ebzo’ standards.

Against the backdrop of all of this research, we’ll be conducting our own class research project using a forced choice discrimination paradigm. We’ll be comparing the abilities of speakers of English and Hindi/Urdu to perceive the difference between (a) singleton and geminate consonants and (b) coronal, retroflex, palatal and velar consonants. Hindi, Urdu and the other Western Hindi languages use gemination phonemically, whereas in English the only time we get gemination is when we have two homorganic oral consonants across a word boundary, as in ‘mint tea’ vs. ‘minty’. Although this difference is systematic, it’s a much less important cue to word meaning than the singleton/geminate contrast is in Hindi, as lexical stress placement also distinguishes these pairs.

We’re also manipulating consonant voicing as a baseline, since this is a feature that both English and Western Hindi languages make use of phonemically.

Thus we have one feature (place of articulation) that we are very sure should be difficult for English speakers to discriminate, because U.K. English simply doesn’t have retroflex or palatal stops, one feature that English speakers should have no trouble discriminating (voicing), and one feature that should be in the middle (gemination). Hindi and Urdu speakers should have excellent discrimination abilities on all three dimensions. We’ll be running this experiment over the next month, so stay tuned for the results.
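To give a concrete sense of what the design looks like, here’s one way a balanced trial list for this kind of forced choice (AX) discrimination experiment could be put together in Python. The stimulus labels are invented placeholders rather than our actual recorded items, and the real experiment will of course be run with proper stimulus presentation software.

```python
# Sketch: building a balanced trial list for an AX forced-choice discrimination task
# crossing the three contrast types in our design (gemination, place, voicing).
# The stimulus labels are hypothetical placeholders, not the actual recordings.
import random

contrasts = {
    "gemination": ("ata", "atta"),  # singleton vs. geminate
    "place":      ("ata", "aTa"),   # dental vs. retroflex (rough ASCII transcription)
    "voicing":    ("ata", "ada"),   # voiceless vs. voiced
}

trials = []
for contrast, (a, b) in contrasts.items():
    # "different" pairs in both presentation orders...
    for first, second in [(a, b), (b, a)]:
        trials.append({"contrast": contrast, "pair": (first, second), "answer": "different"})
    # ...plus an equal number of "same" pairs so listeners can't just guess "different".
    for token in (a, b):
        trials.append({"contrast": contrast, "pair": (token, token), "answer": "same"})

random.shuffle(trials)  # randomise presentation order for each participant
for t in trials[:5]:
    print(t)
```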

Posted in class.summary, speech perception | Leave a Comment »

Week 3: Levels of Representation in Speech Processing

Posted by Linnaea on October 18, 2011

After a couple of weeks of basic background and organisation, we finally got into the core of the course this week. Starting with Tuesday’s class, and for the next 3 weeks, we’ll be focusing on the issue of speech perception and linguistic sound systems. Today we began by trying to appreciate the big picture issue: how do we reliably, rapidly, and apparently effortlessly convert the sound waves that hit our ears and find their way into our brains into linguistic information? We talked about the fact that there is clearly something special about human brains that allows this to happen, since the same sound waves can hit the ears of your cat, or be picked up by a microphone attached to a computer, without the same resulting comprehension. A big, live, open research question is: what is it that’s special? I began the class with a short slide show providing some basic background about the human auditory system and sketched a basic story about how we start turning sound waves into neuro-electrical impulses that are interpretable by the brain.

Then we turned the class over to Anisha Mohammed, Emma Swan and Janusz Baginski, who led a discussion of the two papers we read that was full of audience participation. Through a combination of games, small experiments, videos and slides, Anisha, Emma and Janusz tried to clarify some of the important basic distinctions between phonetics and phonology, and between conscious and unconscious knowledge of language.

We didn’t delve too deeply into the specifics of the experiments Phillips discusses, but I hope everyone is at least starting to feel like some aspects of speech perception are a little more familiar and accessible. In week 4 we’ll focus on the specific issue of phoneme perception, both from a developmental and neurobiological perspective.

Posted in class.summary, speech perception | Leave a Comment »