Since 2000, Generative Lexicon (GL) Theory (Pustejovsky, 1995) has drawn increasingly on the findings and procedures of corpus linguistics and distributional semantic analysis (Pustejovsky and Jezek, 2008; Pustejovsky and Rumshisky, 2008; Jezek and Quochi, 2010; Jezek and Vieu, 2014). This has brought about new methods for integrating empirical analysis with theoretical modeling, while facilitating the applicability of GL to NLP tasks such as qualia extraction, compound classification, event identification, and metonymy resolution. Our goal is to acquaint the student with the basic assumptions and components of GL and to motivate theoretical decisions through evidence-based analysis.
In the first lecture, we review the motivations behind GL and the notion of a distributed compositional model of language meaning, and sketch out the basic assumptions underlying GL theory. In lecture two, we examine qualia structure and its role in differentiating the semantic micro-structure of word meaning. Lecture three focuses on argument distinctions and typing, and examines the default realization of the different argument types. Lecture four examines event structure and discusses event type shifting as attested in the corpus. Finally, in lecture five, we look in detail at GL's compositional mechanisms of coercion and co-composition. We situate this last lecture in the context of data from large linguistic corpora, and investigate the computational consequences of the GL architecture for modeling compositionality and determining meaning in context. Labs associated with the lectures will cover corpus evidence and analytics for qualia extraction, compound interpretation, coercion, and event typing.