My entire career has been dedicated to supporting L2 learning and teaching, either directly in the classroom or through research. This page lists the products developed as part of the collaborative projects I have been fortunate to play an active role in. These products advance research on L2 learning and teaching and enhance teaching practices in the classroom.
Golden Speaker Builder (GSB) is a free online interactive tool that allows L2 learners to build a personalized pronunciation model that mirrors their own voice but with a native accent (i.e., a “golden speaker”).
Learners can create their own account, then build and save their golden speaker model. The resulting voice produces intelligible speech with the voice quality of the L2 learner and the prosody of the source native speaker, normalized to the pitch range of the L2 learner. Learners can use the model voice and follow the provided exercises to practice their pronunciation.
Our research so far shows that practicing with GSB leads to improved fluency and comprehensibility.
Vowel Viewer (VV), the main product of my dissertation project, is a tool for real-time vowel plotting. It was developed in collaboration with Dr. Ricardo Gutierrez-Osuna and Anurag Das of Texas A&M University’s Department of Computer Science and Engineering.
VV extracts, in real time, the acoustic information needed to determine where a user's vowel falls within a vowel plot. This visualization appears within a panel that also shows native speakers' productions as colored dots, with each vowel represented by a unique color. The tool additionally includes an audio panel with exemplar word productions to provide native speaker auditory input.
My research showed that the use of VV results in improved vowel production.
The OneStopEnglish corpus is a collection of texts (n = 189), each written at three reading levels (567 texts in total): articles were sourced from The Guardian newspaper and rewritten by teachers to suit three levels of adult ESL learners (elementary, intermediate, and advanced).
Original articles from the website consisted of PDF files containing the article text, some pre/post-test questions, and other additional material. Our corpus includes clean-text versions of the articles, with the irrelevant material removed, making them ready for further analysis.
The corpus can be used for future research in two areas: automatic readability assessment and automatic text simplification.
The texts can also be used in ESL/EFL teaching, as materials for learners at different proficiency levels.
The corpus is released under a CC BY-SA 4.0 license.
This non-native English speech database includes recordings from twenty-four (24) non-native speakers of English whose L1s are Hindi, Korean, Mandarin, Spanish, Arabic, and Vietnamese; each L1 is represented by two male and two female speakers.
The corpus contains one hour of read speech (n = 24) and samples of unscripted speech, from which we generated orthographic and forced-aligned phonetic transcriptions.
We also manually annotated 150 utterances per speaker to identify three types of mispronunciation errors: substitutions, deletions, and additions. This makes the database a valuable resource not only for research in voice conversion and accent conversion but also for computer-assisted pronunciation training.
The corpus can also be used for ESL/EFL teaching, as examples of advanced-proficiency non-native speech.
The corpus is released under the CC BY-NC 4.0 license.
SPLIS Webinar: Technology for L2 Pronunciation Teaching and Learning: Focus on Visual Feedback
This webinar was held as part of TESOL International's SPLIS (the Speech, Pronunciation, and Listening Interest Section), which focuses on all aspects of oral/aural skills in English language teaching. I was invited to hold the webinar by its current chair, Dr. Joshua Gordon. Given my expertise in CALL, and specifically CAPT (computer-assisted pronunciation training), I decided to focus the webinar on technology for L2 pronunciation. Specifically, the webinar shared freely available tools and classroom activities in which they can be implemented, then argued for the usefulness of visual feedback in L2 pronunciation instruction. The webinar also called for inclusivity in L2 pronunciation teaching and research.
When I was invited to present a webinar on this topic, I knew immediately that I wanted to include a PhD student as a co-presenter. I wanted to use this opportunity to mentor a new scholar in the field, giving them both deeper knowledge of the topic and the experience of collaborating on a webinar. I therefore invited soon-to-be-Dr. Mutleb Alnafisah to work with me on developing this webinar for TESOL's wide audience.