Commentary on Jackendoff
Benjamin Q. Sylvand
Abstract: Contra Jackendoff, we argue that, within the parallel architecture framework, the generality of language does not require a rich conceptual structure. To show this, we put forward a delegation model of specialization. We find Jackendoff's alternative, the subdivision model, insufficiently supported. In particular, the computational consequences of his representational notion of modularity need to be clarified.
In Jackendoff's framework, understanding the meaning of a sentence consists
in constructing a representation in a specific cognitive structure, namely
Conceptual Structure (CS). CS is not dedicated to language, though. It is the
structure that carries out most of our reasoning about the world. According to
Jackendoff, this follows from what we call the Generality of Language Argument (GLA):
(1) Language allows us to talk about virtually anything.
(2) Every distinct meaning should be represented within CS.
(3) CS must contain our knowledge about everything it represents.
(4) Hence CS contains large bodies of world knowledge: CS is "rich".
For instance, if the difference between "to murder" and "to assassinate" is that the second requires a political motive, then CS contains knowledge about what it is to be a political motive (Jackendoff 2002: 286).
GLA excludes the idea that there is a specifically linguistic level of semantics, containing only a "dictionary meaning" as opposed to "encyclopedic information" (Jackendoff 2002: 285). It also excludes a minimal view of CS. We call a CS minimal if it is able to represent all distinct meanings, but is not able to carry out computations other than the logical ones. A minimal CS could represent the meanings of "x is an elephant" and "x likes peanuts", but would not be able to infer the second from the first.
We think that GLA is wrong: the generality of language is compatible with a
minimal CS. Indeed, it is a viable possibility within Jackendoff's general
architecture of the mind. Consider the sentence: "The elephant fits in the
mailbox." To know that it is wrong is to represent its meaning and judge it to
be false. Jackendoff would say that these two steps are carried out by different
structures, namely CS and spatial structure (SpS). Since only CS interacts
directly with language, the sentence has to be translated into CS. From there it
can in turn be translated into a representation in SpS. This would be done by
dedicated interfaces. SpS is the place where the sentence is found false, for it
is impossible to create a well-formed spatial representation of an elephant in a
mailbox. We regard this as an instance of a delegation model:
(DM) Domain-specific computations are carried out outside CS, but their result is represented in CS, and may thus be expressed in language.
In this case the computation is very simple. It consists in checking whether an adequate SpS representation can be formed. Nevertheless it is done outside CS. CS only represents its result, namely that the elephant does not fit in the mailbox.
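As a purely illustrative toy (every class, method, and number below is our own invention, not part of Jackendoff's proposal), the division of labor posited by DM might be sketched as follows: the domain-specific check is carried out in a spatial module, and CS merely records the resulting proposition.

```python
# Toy sketch of the delegation model (DM). All names are hypothetical;
# this caricatures an architecture, it does not model real cognition.
from dataclasses import dataclass


@dataclass
class SpatialStructure:
    """Domain-specific module (SpS): checks whether a well-formed
    spatial representation of 'x fits in y' can be constructed."""
    sizes: dict  # toy stand-in for spatial knowledge (rough volumes)

    def can_represent_fit(self, x: str, y: str) -> bool:
        # A spatial representation of "x in y" is well formed
        # only if x is no larger than y.
        return self.sizes[x] <= self.sizes[y]


class ConceptualStructure:
    """Minimal CS: stores propositions, but delegates the
    domain-specific computation to an outside module."""
    def __init__(self):
        self.propositions = {}

    def evaluate_fit(self, x: str, y: str, sps: SpatialStructure) -> bool:
        # The computation happens in SpS; CS only records its result,
        # which can then be expressed in language.
        result = sps.can_represent_fit(x, y)
        self.propositions[f"{x} fits in {y}"] = result
        return result


sps = SpatialStructure(sizes={"elephant": 6_000_000, "mailbox": 50})
cs = ConceptualStructure()
print(cs.evaluate_fit("elephant", "mailbox", sps))  # the sentence comes out false
```

On this sketch CS contains no spatial knowledge at all; remove `SpatialStructure` and CS can still represent "the elephant fits in the mailbox", it just can no longer evaluate it.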
It is a priori possible that DM applies to all the computations involved in our knowledge about physical objects, biological kinds, other minds and so on. The resulting CS would be minimal. Hence premise (3) is false: CS could represent meanings without containing world knowledge.
Jackendoff does not address this question. Instead, he directly proposes an
alternative model for specialization. For instance, he takes social cognition as
involving a specialized mental structure. But he claims that this is a
sub-structure of CS, a "sub-specialization" (Jackendoff 1992, chap. 4). We call
this the subdivision model:
(SM) Domain-specific computations are carried out within parts of CS, and can thus be expressed in language.
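For contrast, a similarly hypothetical toy sketch of SM: here the spatial knowledge lives inside CS as propositional premises, and a single general-purpose processor draws the inference. Again, all names and premises are our own illustrative inventions.

```python
# Toy sketch of the subdivision model (SM). Hypothetical names only;
# a caricature of the architecture, not a model of real cognition.

class RichCS:
    """CS under SM: sub-specializations are knowledge bases within CS,
    and inference is done by one processor over propositional premises."""
    def __init__(self):
        # Domain-specific knowledge stored as propositional premises.
        self.premises = {
            ("larger_than", "elephant", "mailbox"): True,
        }
        # A domain-general rule: if x is larger than y, x does not fit in y.
        self.rules = [lambda kb, x, y: not kb.get(("larger_than", x, y), False)]

    def fits(self, x: str, y: str) -> bool:
        # The same general processor applies logical rules to
        # domain-specific premises held inside CS itself.
        return all(rule(self.premises, x, y) for rule in self.rules)


print(RichCS().fits("elephant", "mailbox"))  # False
```

The contrast with DM is where the spatial "knowledge" sits: under SM it is a body of premises inside CS, so CS must be rich; under DM it is realized in an external module's computations, so CS can stay minimal.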
If most of our reasoning about specific domains has to be carried out within parts of CS, then CS has to be rich. But why should it be so? Jackendoff could put forward two distinct hypotheses.
The computational unity hypothesis claims that CS is a computational module, with a unique processor, and that sub-specializations are representational modules, that is, knowledge bases about specific domains.[1] On this hypothesis, domain-specific inferences are construed as logical inferences based on domain-specific premises and effected by a single processor, and this is why they are part of CS. However, such a claim is far from uncontroversial. Many cognitive psychologists argue that putative "sub-specializations" such as Theory of Mind carry out their computations independently of each other in a relatively autonomous way, and are possibly situated in distinct, dedicated neural structures (Leslie 1994, Segal 1996). Moreover, if the single processor were damaged, one would presumably lose all propositional computational abilities at once; no such pathology has been observed.
A weaker hypothesis is that of a unique representational format. Jackendoff (2002: 220) seems to endorse it. It merely claims that all sub-specializations of CS share a common, propositional format and that all corresponding computations are of a quantificational-predicational character. Their computations need not be carried out by a common processor. However, we do not think that this view is any more plausible than the hypothesis that some sub-specializations have their computations carried out in sui generis formats designed for the tasks that they solve. Our understanding of each other's minds plausibly involves propositional representations, but this may be the exception rather than the rule. Moreover, it is not clear whether CS would on this view constitute a module in any interesting sense, nor whether the hypothesis really differs from generalized delegation and a minimal CS.
To conclude, within Jackendoff's architecture of the mind, the generality of language is compatible with either a rich or a minimal CS. The choice of the former requires that the computational consequences of Jackendoff's representational notion of modularity be at the very least clarified.
[1] For further discussion of representational (or intentional) and computational modularity, see Segal (1996).
Jackendoff, R. (1992). Languages of the mind. Cambridge, MA: MIT Press.
Jackendoff, R. (2002). Foundations of language. New York: Oxford University Press.
Leslie, A.M. (1994). ToMM, ToBy, and agency: core architecture and domain
specificity. In L. Hirschfeld, & S. Gelman (eds.), Mapping the mind
(pp. 119-148). New York: Cambridge University Press.
Segal, G. (1996). The modularity of theory of mind. In P. Carruthers, &
P. K. Smith (eds.), Theories of theories of mind (pp. 141-157). Cambridge:
Cambridge University Press.
Thanks to Roberto Casati for setting up a workshop on Ray Jackendoff's work, and to Ray Jackendoff for discussing issues related to the present argument. Damián Justo acknowledges support by CONICET and Fundación Antorchas, Argentina.