What is special about logical words? Do they have distinctive syntactic or semantic features, whether in natural language or in the formal languages logicians devise? From a natural language perspective, how could we come to know what they mean, given that pointing and joint attention will obviously not get us very far on that front? These are the main questions this course will address, presenting classical as well as recent work on logicality.
Standard characterizations of logical expressions are usually worked out within a given logical tradition, whether in terms of semantic properties (such as invariance) or proof-theoretic properties (such as harmony, schematicity, or complete axiomatizability). Back in the forties, Carnap had the interesting insight that logical expressions could be characterized by how rules and meanings interact: they would be precisely those expressions whose interpretation is completely fixed by the rules governing their use. In this course, we shall take this Carnapian intuition as our guide, showing how semantic and proof-theoretic approaches to logicality can be combined to understand both what logical notions are and how we come to grasp them.
The course is meant to be of interest to (at least) logicians, philosophers, and linguists. A basic liking for, and mastery of, formal methods is required to enjoy the course, but we shall do our best to keep it self-contained with respect to the more advanced model-theoretic and proof-theoretic methods.