CCHP

The Crosslinguistic Corpus of Hesitation Phenomena, collated beginning in 2012

Utterance fluency and perceptual fluency in L2 @ ICPhS 2015

After DiSS in Edinburgh, I took a one-hour train ride west to Glasgow to take part in the International Congress of Phonetic Sciences (ICPhS). This was an extremely well-organized conference from start to finish. The organizers did a good job of keeping everyone informed in advance of the conference, and they chose a highly competent convention center for the venue: even when it became apparent that rooms were exceeding capacity, the organizers and convention center made rapid accommodations. That meant room changes for some presentations, but convention center staff were well placed and well informed, so it wasn't at all difficult to find the correct room. Thanks to the organizers!

Psycholinguistics Lab Group at University of Michigan

[Note: This post was published in August 2015 but has been back-dated to reflect the actual timing of the events described here.]

[Photo: University of Michigan - Hill Auditorium and bell tower]

In March 2015, I had the opportunity to go to the US and visit my home state of Michigan to gather some native English speaker data for the CCHP. It was a very good, very productive trip. While there, thanks to the efforts of Lorenzo García-Amaya and Nick Henriksen at the University of Michigan, I also had the opportunity to talk about the corpus to the Psycholinguistics Lab Group there.

Presenting a new Java application for second language fluency development

[Note: This post was published in August 2015 but has been back-dated to reflect the actual timing of the events described here.]

I had a really great winter vacation: I spent most of it coding! All right, so that's a bit nerdy, but I finally sat myself down to work on a project I'd been thinking about for several years. The basic idea is that I've long wanted to see an application that gives some kind of real-time feedback to a second language learner while they are speaking. There are many applications that give delayed feedback, some as soon as moments after a pre-set sentence is spoken, but I can't find any that give immediate (or nearly immediate) feedback. Some approaches that use speech recognition technology for second language speech practice do come close to real time (often 1-2 seconds of latency). But I have wanted to explore the possibility of immediate feedback comparable to the kind of audiovisual feedback one gets from an interlocutor during a face-to-face conversation.
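The post doesn't describe how the application works internally, but as a minimal sketch of what near-immediate feedback could look like in Java, the snippet below reads the microphone in small buffers via javax.sound.sampled and reports a silent stretch the moment it exceeds a pause threshold. Everything here is illustrative: the HesitationMonitor class name, the RMS silence threshold, and the 250 ms pause cutoff are my own assumptions, not details of the actual application.

```java
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioSystem;
import javax.sound.sampled.LineUnavailableException;
import javax.sound.sampled.TargetDataLine;

/**
 * Minimal sketch: monitor the microphone and report silent stretches
 * (potential hesitations) as they happen. Thresholds are illustrative
 * placeholder values, not tuned ones.
 */
public class HesitationMonitor {
    private static final float SAMPLE_RATE = 16000f;
    private static final double SILENCE_RMS = 0.01; // below this counts as silence
    private static final long PAUSE_MS = 250;       // report pauses of 250 ms or more

    public static void main(String[] args) throws LineUnavailableException {
        // 16 kHz, 16-bit, mono, signed, little-endian
        AudioFormat format = new AudioFormat(SAMPLE_RATE, 16, 1, true, false);
        TargetDataLine line = AudioSystem.getTargetDataLine(format);
        line.open(format);
        line.start();

        byte[] buffer = new byte[1600]; // 50 ms of audio per read
        long silentSince = -1;

        while (true) {
            int n = line.read(buffer, 0, buffer.length);
            if (rms(buffer, n) < SILENCE_RMS) {
                if (silentSince < 0) silentSince = System.currentTimeMillis();
                long pause = System.currentTimeMillis() - silentSince;
                if (pause >= PAUSE_MS) {
                    System.out.println("pausing... " + pause + " ms");
                }
            } else {
                silentSince = -1; // speech resumed
            }
        }
    }

    /** Root-mean-square amplitude of little-endian 16-bit samples, scaled to [0,1]. */
    private static double rms(byte[] buf, int len) {
        double sum = 0;
        int samples = len / 2;
        for (int i = 0; i < samples; i++) {
            int lo = buf[2 * i] & 0xff;
            int hi = buf[2 * i + 1]; // signed high byte
            double s = ((hi << 8) | lo) / 32768.0;
            sum += s * s;
        }
        return samples > 0 ? Math.sqrt(sum / samples) : 0;
    }
}
```

Because each buffer covers only 50 ms of audio, the message appears within a fraction of a second of the speaker falling silent, which is the kind of latency the post is after; a real application would replace the console output with a visual cue for the learner.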

Filled pauses in Japanese, Chinese, and English @ Academia Sinica

[Note: This post was published in August 2015 but has been back-dated to reflect the actual timing of the events described here.]

I went to Taiwan in December 2014, where I had the opportunity to take part in a workshop on the cross-linguistic study of filled pauses. This was connected to a research project I'm engaged in that's being led by Kikuo Maekawa at the National Institute for Japanese Language and Linguistics (NINJAL) in Tokyo. The project aims to formalize a typology of filled pauses in Japanese, and my role is to bring in a comparison to English filled pauses. We met in Taiwan with a collaborator there (who brings in a comparison to Chinese filled pauses) to further the research plan, and we also gave a workshop at the Academia Sinica in Taipei.
