My research aims to answer a fundamental question in phonology: which units of representation best fit the cross-linguistic phonological patterns that we observe? I focus primarily on the advantages and insights gained by adopting sub-segmental units of representation known as gestures. One aim of my research is the reanalysis of persistent theoretical puzzles in phonology within the framework of gestural phonology. I also work to build on some of the core representational concepts of gestural phonology: modifying and expanding the established set of gestural parameters, proposing novel types of relations between gestures, and developing a phonological grammar that operates over gestural representations.

On this page, you'll find some of my written work and conference presentation materials, organized thematically. Click one of the tags below to jump directly to a section.


Harmony

Harmony is a widely studied phonological phenomenon that continues to spark debate regarding the nature of triggers (segments that initiate harmony), undergoers (segments affected by harmony), so-called neutral segments (segments that apparently do not participate in harmony), and directionality (whether a trigger affects preceding or following segments), among other issues. I address all of these issues via formal phonological analysis and computational modeling. The Gestural Harmony Model that I propose provides long-sought solutions to problems in the representation of harmony that are unavailable to many feature-based analyses.
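The roles described above can be illustrated with a toy simulation. This sketch is not the Gestural Harmony Model itself, just a minimal feature-style illustration of progressive backness harmony in which /i/ is treated as a transparent neutral vowel; all symbols and correspondences here are hypothetical.

```python
# Toy progressive backness harmony (illustrative only; hypothetical symbols).
BACK = {"a", "o", "u"}       # [+back] vowels: potential triggers
FRONT = {"e", "ø", "y"}      # [-back] vowels: potential undergoers
NEUTRAL = {"i"}              # transparent: skipped by harmony

TO_BACK = {"e": "a", "ø": "o", "y": "u"}  # front -> back counterparts

def harmonize(vowels):
    """Spread [+back] rightward from the first back trigger,
    skipping neutral vowels without blocking further spreading."""
    out = []
    triggered = False
    for v in vowels:
        if v in NEUTRAL:
            out.append(v)            # transparent: unaffected, non-blocking
            continue
        if triggered and v in FRONT:
            v = TO_BACK[v]           # undergoer assimilates to the trigger
        if v in BACK:
            triggered = True
        out.append(v)
    return out

print(harmonize(list("oiey")))  # -> ['o', 'i', 'a', 'u']
```

The trigger /o/ initiates spreading, the neutral /i/ passes through untouched, and the following front vowels surface as their back counterparts.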

In addition to the written works cited below, my 2018 USC dissertation, Harmony in Gestural Phonology, is available on Lingbuzz.

Machine Learning

Due to the gradient, continuous nature of many gestural parameters, gestural representations are quite powerful, able to generate many patterns that may not be derivable within feature-based phonological frameworks. However, for a phonological pattern to be cross-linguistically attested, it must be not only derivable within a phonological framework but also learnable within that framework. In collaboration with Charlie O'Hara, I have developed a learning algorithm that computationally models the acquisition of gestural parameters, in order to shed light on which phonological patterns are learnable within the gestural phonology framework.
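To make the idea of acquiring a continuous gestural parameter concrete, here is a minimal error-driven learner for a single such parameter (e.g., a constriction degree target). This is only a sketch of the general approach, not the algorithm from the joint work; the learning rate and data are invented for illustration.

```python
# Minimal error-driven learning of one continuous parameter (sketch only).
def learn_target(observations, rate=0.1, init=0.0, epochs=50):
    """Nudge the learner's target toward each observed production."""
    target = init
    for _ in range(epochs):
        for obs in observations:
            error = obs - target        # mismatch with the observed token
            target += rate * error      # gradient-style update
    return target

# Hypothetical productions clustered around a target of ~0.8 (arbitrary units)
data = [0.75, 0.82, 0.79, 0.84, 0.80]
print(round(learn_target(data), 2))  # -> 0.8
```

The learner converges toward the central tendency of the observed tokens; a real model of acquisition would of course have to handle many interacting parameters and noisy, variable input.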

In addition, I have collaborated with several members of Microsoft Research AI on projects aimed at improving the performance of Transformer neural networks by integrating tensor product representations and morphological segmentation.
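The core of a tensor product representation can be shown in a few lines: fillers are bound to roles via outer products, bindings are summed into a single tensor, and a filler is recovered by contracting with its role vector. The tiny vectors below are illustrative, not those used in the Microsoft Research projects.

```python
import numpy as np

# Tensor product representation (TPR) sketch with orthonormal role vectors.
fillers = {"A": np.array([1.0, 0.0]), "B": np.array([0.0, 1.0])}
roles = {0: np.array([1.0, 0.0, 0.0]), 1: np.array([0.0, 1.0, 0.0])}

# Encode the string "AB": sum of filler (x) role outer products
tpr = np.outer(fillers["A"], roles[0]) + np.outer(fillers["B"], roles[1])

# Unbind position 1: contract the TPR with that role vector
recovered = tpr @ roles[1]
print(recovered)  # -> [0. 1.], the filler bound to role 1
```

Because the roles are orthonormal, unbinding is exact; with merely linearly independent roles, an unbinding (pseudo-inverse) matrix would be needed instead.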


Phonological Opacity

In cases of underapplication opacity, a phonological process fails to apply even though its structural description is met. While earlier rule-based phonological frameworks often dealt with opacity via extrinsic rule ordering, more recent attention has focused on the inability of many versions of output-driven Optimality Theory and Harmonic Grammar to derive opaque patterns. In collaboration with Charlie O'Hara, I have shown that gestural phonology allows some seemingly opaque processes to be represented as derivationally transparent. Furthermore, we have shown that the mechanisms that drive apparent opacity in gestural phonology correctly predict typological asymmetries between chain shifts and saltations, two types of underapplication opacity.
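The difference between the two pattern types can be stated as simple input-output mappings. In this toy illustration (with hypothetical segments standing in for vowel heights), a chain shift moves each input one step along a scale, while a saltation "jumps" one input over an unchanged intermediate value.

```python
# Toy underapplication patterns over hypothetical segments.
# Chain shift: a -> e and e -> i (each input moves one step up).
# Saltation:  a -> i while e stays e (a skips over the intermediate e).
chain_shift = {"a": "e", "e": "i", "i": "i"}
saltation = {"a": "i", "e": "e", "i": "i"}

def apply_map(mapping, s):
    """Apply a segmental mapping to each symbol of a string."""
    return "".join(mapping.get(ch, ch) for ch in s)

print(apply_map(chain_shift, "aei"))  # -> "eii"
print(apply_map(saltation, "aei"))    # -> "iei"
```

Both mappings underapply a process somewhere: in the chain shift, /a/ stops at [e] rather than going all the way to [i]; in the saltation, /e/ fails to undergo the raising that /a/'s output seems to demand.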

Phonological Exceptionality

Phonological exceptionality refers to cases in which individual segments or lexical items pattern idiosyncratically or unpredictably with respect to one or more phonological processes. My research in this area gives gestural parameters, including gestural self-(de)activation and blending strength, expanded roles not only as sources of coarticulatory influence but also as phonologically active elements of representation. These parameters can be used to reanalyze cases of phonological idiosyncrasy previously attributed to absolute surface neutralization, derivational opacity, or lexical exceptionality. Portions of this work have been conducted in collaboration with Reed Blaylock. I have also worked in collaboration with Brian Hsu on analyzing phonological idiosyncrasy within the framework of Gradient Harmonic Grammar.
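Blending strength has a standard quantitative interpretation in gestural frameworks: when two gestures concurrently control the same tract variable, their targets combine as an average weighted by each gesture's strength. The sketch below shows only that arithmetic; the target and strength values are invented for illustration.

```python
# Gestural blending sketch: strength-weighted average of competing targets.
def blend(targets, strengths):
    """Blended target for gestures that overlap in time on one tract variable."""
    assert len(targets) == len(strengths)
    total = sum(strengths)
    return sum(t * a for t, a in zip(targets, strengths)) / total

# A gesture with target 0.0 (e.g., full closure) overlaps one with
# target 10.0 (e.g., a wide constriction); the stronger gesture dominates.
print(blend([0.0, 10.0], [3.0, 1.0]))  # -> 2.5
```

Raising one gesture's blending strength pulls the blended outcome toward its target, which is what makes the parameter a candidate locus for segment-specific idiosyncrasy.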

Computational Complexity

Formal language theory provides a way of hierarchically classifying phonological patterns according to their relative degrees of computational complexity. In collaboration with Charlie O'Hara and Andrew Lamont, I have investigated how the formal complexity of different types of mappings, especially those involving featural and tonal spreading, affects cross-linguistic typological patterns. In particular, we have focused on determining the upper bound on the complexity of attested spreading patterns.
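As an illustration of how a spreading pattern can sit low on the complexity hierarchy, here is a toy progressive nasal-spreading map computed by a two-state, left-to-right machine, in the style of a subsequential transducer. The segment inventory and blocking behavior are hypothetical, chosen only to keep the example small.

```python
# Progressive nasal spreading as a left-to-right, finite-state computation:
# after a nasal, vowels surface nasalized until an oral stop intervenes.
NASALS, VOWELS, BLOCKERS = set("mn"), set("aeiou"), set("ptk")

def nasal_spread(s):
    """Two states ('oral'/'nasal') suffice for this spreading pattern."""
    state = "oral"
    out = []
    for ch in s:
        if ch in NASALS:
            state = "nasal"          # nasal trigger: start spreading
            out.append(ch)
        elif ch in BLOCKERS:
            state = "oral"           # oral stop: block further spreading
            out.append(ch)
        elif ch in VOWELS and state == "nasal":
            out.append(ch + "\u0303")  # combining tilde: nasalized vowel
        else:
            out.append(ch)
    return "".join(out)

print(nasal_spread("mata"))  # -> "mãta": /t/ blocks further spreading
```

That a single deterministic left-to-right pass computes the map is what places such unbounded progressive spreading within the subsequential class.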

Morphological Consonant Mutation

Morphological consonant mutation is characterized by the marking of morphological information via alteration of a root consonant rather than the addition of an overt affixal segment or string. I claim that by adopting gestural representations it is possible to represent morphological consonant mutation as the affixation of a gesture rather than a feature that must dock with a segment in the stem. This both eliminates the need to distinguish between segmental and featural affixation, and makes more restricted typological predictions regarding what kinds of consonant mutations we can expect to observe.

Speech Production

In my work on articulatory phonetics, I use the results of phonetic experimentation (usually taking the form of real-time MRI data) to inform phonological analysis and representation. The work I have conducted in this area is aimed at answering questions regarding the often complex nature of lingual control during the production of coronal consonants, especially liquids. This work was conducted as part of my membership in USC's Speech Production and Articulation kNowledge (SPAN) group and in collaboration with several of its members.