The hypothesis of direct compositionality

For the past several years, my research has been mainly concerned with exploring what I like to call the hypothesis of direct compositionality (see, e.g., Montague, 1973) - which is that the syntax and the semantics work in tandem. Put differently, the syntax (in any theory) can be seen as a system of rules which specify the well-formedness of larger expressions on the basis of the well-formedness of their parts. Direct compositionality is a perfectly natural idea: it says that the compositional semantic rules work directly with the syntactic rules, assigning a model-theoretic interpretation to each larger expression as it is "built" (i.e., proven well-formed) in the syntax, where each larger expression is assigned a meaning on the basis of the meanings of its parts. This leads to an extremely simple conception of the organization of the grammar (for some discussion of why this is so much simpler than a view in which the syntax first builds representations which are then "sent" to the semantics, see Jacobson 2002, in Linguistics and Philosophy). Notice that this removes the need for any level of "Logical Form" and hence for any extra set of rules mapping sentences into a Logical Form which is then interpreted.
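
To make the idea concrete, here is a minimal sketch of a directly compositional fragment in Haskell; the two-word lexicon, the single combination rule, and the toy model are all my own illustrative assumptions, not anyone's worked-out fragment. The point is simply that every expression carries its meaning with it: proving "John sleeps" well-formed and interpreting it happen in one and the same step, with no intermediate level of Logical Form.

```haskell
-- A toy directly compositional fragment: the syntactic rule that proves
-- an expression well-formed simultaneously assigns it a meaning.

type E = String   -- individuals, modeled here as names
type T = Bool     -- truth values

-- An expression pairs a syntactic category with its model-theoretic
-- meaning; there is no separate level of representation to interpret.
data Expr = NP E | VP (E -> T) | S T

-- One combined syntax/semantics rule: S -> NP VP, interpreted by
-- function application at the moment the expression is built.
combine :: Expr -> Expr -> Maybe Expr
combine (NP x) (VP p) = Just (S (p x))
combine _      _      = Nothing

-- Toy lexicon (in this little model, only John sleeps).
john, sleeps :: Expr
john   = NP "john"
sleeps = VP (== "john")

-- "John sleeps" is proven well-formed and interpreted in one step:
-- combine john sleeps  ==  Just (S True)
```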

Direct compositionality is an old idea: it was quite standard in "classical Montague Grammar" and is the view still taken in most current work within the (related) traditions of Categorial Grammar and Type Logical semantics. It has, however, been abandoned in a lot of current work in other theories - but my own feeling is that this abandonment is premature. Direct compositionality is the simplest theory of the interaction of the syntax and the semantics - and arguably the most natural and elegant - and so those phenomena which appear to pose a challenge to direct compositionality deserve very careful investigation. This is the general strategy that I've been pursuing in my work: I've been trying to look bit by bit at those phenomena which current "conventional wisdom" takes as necessitating a non-direct-compositional view of things, and to see whether they can be reanalyzed (without undue complexity). Of course there are plenty of interesting challenges to direct compositionality - especially since there is a large body of work over the last 20 years or so designed to show that we need intermediate levels like LF - so these issues are far from settled.

Variable-Free Semantics

A lot of my work explores the hypothesis of direct compositionality with respect to questions of pronouns and binding. A surprising number of arguments for abstract levels of representation which serve as the input to the semantics - or, more generally, against the hypothesis of direct compositionality - center on this domain. But I’ve tried to show in a series of papers that these arguments are all predicated on a certain view of binding - a view which is actually fairly complex. This view assumes that “binding” is a relationship between a pronoun/variable/trace and a “binder”, and that the two must stand in a particular syntactic relationship. Moreover, the semantics associated with the “standard” view of binding is rather complex - it makes use of the notion of assignment functions (functions from variable names to real objects).
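
For comparison, here is a minimal sketch (in Haskell, with toy types of my own choosing) of the assignment-function machinery that the standard view requires: an indexed pronoun denotes nothing on its own, and is interpreted only relative to an assignment mapping its index to an object.

```haskell
import qualified Data.Map as Map

type E = String                   -- individuals, modeled as names
type Assignment = Map.Map Int E   -- from variable indices to objects

-- On the standard view, an indexed pronoun has no meaning by itself:
-- its value must be looked up in an assignment function.
pronoun :: Int -> Assignment -> Maybe E
pronoun i g = Map.lookup i g

sleeps :: E -> Bool
sleeps x = x == "john"

-- "He_1 sleeps" is interpreted only relative to an assignment g.
heSleeps :: Assignment -> Maybe Bool
heSleeps g = sleeps <$> pronoun 1 g

-- heSleeps (Map.fromList [(1, "john")])  ==  Just True
```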

I’ve argued that one can instead adopt a “Variable-Free semantics” - an idea which has its roots in what is known as Combinatory Logic and which has been explored in a variety of recent literature in Categorial Grammar. The variable-free program claims that the semantic composition makes no essential use of variables - and neither the syntax nor the semantics needs any indexing conventions. I’ve tried to argue in a variety of work that this is a far simpler conception of the semantics: we don’t need any “extra” stuff like variables and assignment functions - all meanings are the kind of model-theoretic objects that we would expect to find.
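
Here, by way of contrast, is a minimal variable-free sketch in the same style; the combinator definitions follow the general shape of the Jacobson-style program, but the toy types and lexicon are my own assumptions. A pronoun denotes the identity function, a Geach combinator g passes an unbound pronoun's dependency up the tree, and a combinator z effects binding; no indices or assignment functions appear anywhere.

```haskell
type E = String   -- individuals, modeled as names
type T = Bool     -- truth values

-- A pronoun simply denotes the identity function on individuals:
-- no index, no assignment function.
pro :: E -> E
pro = id

-- The Geach combinator g: lets a functor apply "through" an expression
-- containing an unbound pronoun, passing the dependency upward.
g :: (a -> b) -> (c -> a) -> (c -> b)
g f h = f . h

-- The combinator z: binds the pronoun slot of the argument h to the
-- subject position of the verb f.
z :: (a -> E -> T) -> (E -> a) -> E -> T
z f h x = f (h x) x

-- Toy lexicon.
motherOf :: E -> E
motherOf x = "mother-of-" ++ x

loves :: E -> E -> T    -- curried, object argument first
loves o s = (s, o) == ("john", "mother-of-john")

-- "John loves his mother", with "his" bound by the subject:
-- z loves motherOf "john"  ==  True
```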

Moreover, I’ve tried to show that this view has any number of empirical payoffs, simplifying the analysis of constructions such as “paycheck pronouns”, functional questions, and many others. In the end, then, the variable-free program allows for a view of grammar in which the semantic combinatorics very closely mirror - and work directly with - the surface syntactic combinatorics, thus providing a theory of the syntax/semantics interaction that uses a minimal amount of machinery.

Most work on variable-free semantics (including my own) has so far looked mainly at the cases which are usually handled by variables over individuals - but this is just the tip of the iceberg. Variables have been used in semantics for a huge number of different purposes, and it is worth exploring whether all of these uses are amenable to variable-free reanalyses.  

Transderivationality

I’m also interested in the question of “transderivationality”: does the grammar of natural language actually have rules or principles by which certain derivations are blocked (or “thrown out”) in view of the fact that there exist competing simpler derivations? (This, by the way, was a popular view in Generative Semantics in the 1970s; it was eventually rejected by most researchers but has now resurfaced in a lot of “minimalist program” work.) Here’s another area where the simplest answer would be no, and so it’s worth taking a serious look at those phenomena which, throughout the years, have been taken to necessitate “transderivational” principles. I’ve also argued in Jacobson (1997) that the notion of transderivationality driven by “economy” or simplicity is suspicious in that, if the grammar computes all possible derivations and then picks the “simplest” one, it is actually an accident that the “simplest” one is picked: once all derivations are computed, the one which took the fewest steps to compute should have no privileged status over the others. But there are lots of phenomena which look like they support the view that the grammar is somehow driven to economy or simplicity, and so these are worth a careful look.
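
The worry can be made concrete with a small sketch (the representation of derivations as lists of rule applications is a toy assumption of my own). A transderivational “economy” filter has to enumerate every competing derivation before it can select the shortest one, so the winning derivation does no less computational work than its competitors; its selection is, in that sense, the accident noted above.

```haskell
import Data.List (minimumBy)
import Data.Ord (comparing)

-- A derivation, schematically, as a list of rule applications.
type Derivation = [String]

-- Two competing derivations of the same string (toy data).
competitors :: [Derivation]
competitors =
  [ ["merge NP VP"]                               -- the "economical" one
  , ["type-lift NP", "compose", "merge NP VP"]    -- a longer competitor
  ]

-- The transderivational picture: all derivations are computed first,
-- and only then is the shortest selected, so the winner enjoys no
-- computational privilege over the derivations it blocks.
mostEconomical :: [Derivation] -> Derivation
mostEconomical = minimumBy (comparing length)

-- mostEconomical competitors  ==  ["merge NP VP"]
```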
