Memory, learning and metacognition


 

Pierre Jacob

 

Institut des sciences cognitives,

CNRS UPR 9075,

Université Claude Bernard,

8, avenue Rockefeller,

69373 Lyon Cedex 08.

Tél: 04 78 77 72 87.

Fax: 04 78 77 72 86.

 

 

Abstract

 

A human being constructs his representation of the world from four fundamental sources: perception, memory, inference and communication. Without learning capacities, an information-processing system would not be a genuine cognitive system. Without memory, a system would be incapable of learning. Cognitive science has taught us that human memory must be fractionated into several specialized systems. Moreover, what is distinctive of the human cognitive architecture is that humans possess linguistic and metarepresentational or metacognitive capacities. I explore some consequences of the existence of human metarepresentational capacities for human learning and for the organization of human memory.

 

 

Introduction: what is a representation?

            I am a philosopher of mind and, like many other philosophers of mind such as Dretske [1], [2], [3], Fodor [4], [5] and Millikan [6], [7], my main current interest is to find a naturalistic basis for the fact that human beings have minds. I happen to think that the most promising approach for such a naturalistic project is to endorse a representational view of the mind. According to this representational viewpoint (which I develop in Jacob [8]), mental facts are primarily representational facts and an individual's mind is primarily a device whose job is to build representations. The question then arises: What is a representation?

            I start with the notion of information: the length of a metal bar carries information about temperature because it covaries with temperature; but it does not represent the temperature. Unlike a metal bar, a thermometer does not merely carry information about temperature; it represents the temperature. What is the difference? The difference is that, unlike a metal bar, a thermometer may misrepresent the temperature. The reason why a thermometer, unlike a metal bar, may misrepresent the temperature is that the former has, while the latter does not have, a function: its function is to indicate the temperature. A thermometer may therefore misrepresent the temperature because it may misfunction. Only a device with an indicator function can misrepresent and therefore represent. It may misrepresent the temperature by failing to indicate what it is its function to indicate.

            Of course, thermometers - and artefacts more generally - are representations because they are made so by human engineers, from whom they get their indicator functions. But I assume that states of an animal's nervous system too are representations. They too may misrepresent by failing to indicate what it is their function to indicate. The states of an animal's visual, auditory or olfactory system are not, however, representations in virtue of something done by a human engineer. The states of an animal's nervous system get their indicator functions - if and when they do - from one of two main sources: from evolution by natural selection and from learning. Broadly speaking, the states of sensory mechanisms in humans and non-human animals derive their indicator functions from evolution by natural selection, whereas the states of an individual's higher central (or conceptual) cognitive processes derive their indicator functions from learning. The point is that both evolution by natural selection and learning are selective processes, and functions arise from selective processes. In other words, sensory (or perceptual) representations have non-conceptual content; higher cognitive representations have conceptual content. Furthermore, as I will argue shortly, what is distinctive of human cognition is that humans have language and can build what I will call metarepresentations. Not only can they build complex representations of their environment; they can also express them linguistically and communicate them verbally to their conspecifics. In addition, they can build representations of representations.

 

1. The role of memory in one's representation of the world

            I assume that, like many other animals, human beings derive their representation of their surroundings from two primary sources: perception and memory. If an animal had no sense organs, it could not derive any knowledge of its environment. In 1690, John Locke [9] wrote that "memory is as it were the storehouse of our ideas... a repository to lay up those ideas". Without a memory buffer to store information for later use, a physical system might perhaps be said - like a photoelectric cell - to process information, but it could not qualify as a cognitive system. It could not think. In particular, it could not categorize, conceptualize or recognize an incoming piece of information as a new token of the same type of information. It could not extract information from its sensory experience for conceptual use. In other words, it could not learn. And arguably, unless a system can learn, it is not a cognitive system.

            Of course, cognitive science has taught us that human cognition involves many different memory systems some of which we share with other organisms and some of which we don't. I'll just mention a few pairs of such systems: long term memory vs. working memory; implicit vs. explicit memory; procedural vs. declarative memory; episodic vs. semantic memory. Now, for any of the above systems to be a memory system, it must at least have three kinds of capacities or internal structure: it must be capable of encoding, storing and retrieving information. In what follows, I will concentrate on abilities which seem to me distinctly human.

            Following the work of Baddeley [10], much recent research into human memory has been devoted to demonstrating that human memory involves at least two separate subsystems: a working memory and a long-term memory. Much of the evidence in favor of this distinction comes from the study of patients who suffer from different kinds of amnesia and who illustrate double dissociations. For example, the patient H.M., who was studied by Milner and her colleagues after bilateral removal of his hippocampus and paralimbic cortex, suffered from anterograde amnesia: he could not store information in long-term memory about events happening after the surgery. However, H.M.'s working memory seemed quite normal. By contrast, K.F., a patient who suffered a head injury and was examined by Warrington and colleagues, had a severely impaired working memory but near-normal long-term memory (see e.g., Meunier, Bachevalier & Mishkin [11]).

            To see the relevance of, e.g., the distinction between long-term and short-term memory for thinking, consider solving such tasks as the so-called Towers of Hanoi problem or problems about spatial relations. In the Towers of Hanoi problem, you are faced with three pegs. On the leftmost peg, there are three disks of different diameters stacked on top of one another, with the largest of the three at the base of the peg. The problem is to get the disks stacked in the same order on the rightmost peg. There are two rules: first, only one disk may be moved at a time from one peg to another; second, a larger disk may never be stacked on top of a smaller one (see e.g., Rips [12]). In a reasoning task about spatial relations, one is given the following kind of information: A is on the right of B; C is on the left of B; D is in front of C; E is in front of B. One is then asked: Where is D with respect to E? It turns out that D is on the left of E (see e.g., Jonides [13]).
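Both tasks lend themselves to a mechanical rendering. As an illustrative sketch of my own (not taken from the article), the Towers of Hanoi problem has a classic recursive solution, and the spatial-relations premises can be encoded as coordinates from which the answer is simply read off:

```python
# Illustrative sketch (not from the article) of the two reasoning tasks.

def hanoi(n, source, target, spare, moves=None):
    """Recursively move n disks from source to target, never placing
    a larger disk on top of a smaller one."""
    if moves is None:
        moves = []
    if n == 0:
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks
    moves.append((source, target))               # move the largest free disk
    hanoi(n - 1, spare, target, source, moves)   # restack the smaller ones on it
    return moves

# Three disks take the provably minimal 2**3 - 1 = 7 moves.
solution = hanoi(3, "left", "right", "middle")

# The spatial-relations task: encode each premise as coordinates
# (x grows rightward, y grows toward the viewer) and read off the answer.
pos = {"B": (0, 0)}
pos["A"] = (pos["B"][0] + 1, pos["B"][1])   # A is on the right of B
pos["C"] = (pos["B"][0] - 1, pos["B"][1])   # C is on the left of B
pos["D"] = (pos["C"][0], pos["C"][1] + 1)   # D is in front of C
pos["E"] = (pos["B"][0], pos["B"][1] + 1)   # E is in front of B
d_left_of_e = pos["D"][0] < pos["E"][0]     # so D is on the left of E
```

The recursive solver only needs the rules (held, as it were, in long-term memory) plus a record of the intermediate state, which is the role the text assigns to working memory.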

            When we set about solving such tasks, presumably some of our thinking is propositional, and some of it is imagistic, iconic or analogical. Mental imagery and visual perception share, it seems (as noted e.g., by Jeannerod [14]), some of their neural resources in human brains. But in order to solve such reasoning tasks, some logical information must be stored in long-term memory and some information must be available in short-term or working memory. Baddeley [10] suggests that the duality between propositional (or digital) and analogical (or imagistic) representations involved in problem solving be accounted for in terms of a hierarchical model of working memory, in which a central executive monitors information provided by two slave systems: a phonological loop holding acoustic information and a visuospatial sketchpad manipulating visual and spatial images.

            Of course, in stating the above reasoning tasks, I have relied on language. Arguably, an animal with no language might be able to solve such problems. But using language may well help some of us in solving such tasks. Not only do humans construct their representation of their surroundings from perception and memory; they also rely on inference and communication. Presumably, like many other animals, we acquire new beliefs from older beliefs by inference. But we also have language. To borrow Steve Pinker's [15] famous phrase, humans have a language instinct. As emphasized by Chomsky [16], this instinct is what allows any normal human child to acquire knowledge of the grammar of her native tongue on the basis of her linguistic experience. And it allows human adults to produce and understand a great many different sentences, the meanings of which depend on the meanings of the constituent words and the grammatical relations between them. Notice that any language contains a finite number of words. But thanks to the recursive rules of grammar, a language may contain a potentially infinite set of sentences. Knowing the grammar of a language is what allows a human being to produce or understand any sentence of his language. Thanks to language, humans can also derive their representation of their environment from verbal communication with their conspecifics. So our representation of our environment derives from four basic sources: perception, memory, inference and communication.

 

2. Metarepresentations and human communication

            Without a semantic memory in which to store the meanings of several thousand words, a human being could not speak, because he could not entertain the meanings of sentences of his language. But with the ability to speak a public language comes, so to speak, an "external memory". Representations of sentences of a public language can be stored in memory, and they are tools for remembering and learning. In fact, one typical kind of human learning is the acquisition of some explicitly entertained piece of knowledge. And this kind of learning depends, I now want to argue, on one special feature of human cognition which is heavily involved in human communication, namely the fact that human cognition is distinctly metacognitive. As this conference amply testifies, human beings have the ability to reflect on their cognitive apparatus. Many different creatures having a memory may be hit by various forms of amnesia. But I take it that only human beings worry about the reliability of their memory and about their memory impairments. This ability to evaluate one's own memory capacities is often called metamemory. (For empirical work on humans' metacognitive evaluation of their own cognitive abilities, including memory, see Metcalfe & Shimamura [17].)

            Humans can worry about the reliability of their memory because, in general, they can form higher-order thoughts about thoughts, whether their own or those of others. Certainly, human beings spend a lot of time forming beliefs about beliefs, beliefs about desires, desires about beliefs, desires about desires, etc. To see what I call a metarepresentation, consider an utterance of the English sentence "snow is white". I assume that it is a linguistic (first-order) representation of a non-linguistic fact or state of affairs. Similarly, I assume that the thought expressed by such an utterance is a mental (first-order) representation of a non-mental fact or state of affairs. Next, consider the belief-ascription "John believes that snow is white". I assume that it is a higher-order representation of a belief, i.e., John's belief. Now, since I assume that John's belief, as characterized by the embedded English sentence "snow is white", is a mental first-order representation of the fact that snow is white, I claim that the belief-ascription "John believes that snow is white" ought to be analyzed as a metarepresentation of John's own representation of the color of snow.

            I would like to make two points about the notion of metarepresentation involved in saying that belief-ascriptions are metarepresentations. First, since Tarski, it has been part and parcel of the logical tradition to distinguish between an object-language and its meta-language. In this logical tradition, and in order to deal with the semantic paradoxes, the truth-predicate is held to be a metalinguistic predicate, not an object-language predicate. The object-language contains words like "snow" and "white" which are used to express propositions about the color of snow. The metalanguage does, but the object-language does not, contain the predicate "true" which applies to sentences of the object-language. So in order to say of the object-language sentence "snow is white" that it is true, we need to ascend from the object-language to the meta-language. Similarly, I suggest, moving from "snow is white" to "John believes that snow is white" is ascending from a linguistic representation of a fact to a metarepresentation, i.e., a higher-order representation of the representation of the same fact.

            Secondly, drawing on the work of Rosenthal [18], who argues for a "higher-order thought theory of conscious mental states", and following Dretske [3] and Perner [19], I will call "metarepresentations" those higher-order representations of representations which display the content of the representation metarepresented. There may be many different representations of one and the same representation. Representations may be mental or not. An utterance, for example, which is a linguistic (non-mental) representation, may be printed on a page. If so, it has various physical and chemical properties. A description revealing the chemical composition of the ink with which the utterance was printed is a representation of a representation. But it does not display the content of the utterance. So it is not a metarepresentation. Similarly, assuming - as I do - that an individual's mental representation is a brain state of the individual, an image of blood flow in the brain obtained by positron emission tomography (PET) may well be a representation of a mental representation. But if it does not display its content, then it is not a metarepresentation. Only those representations of a representation which display the latter's content deserve to be called "metarepresentations".

            Perhaps, as Premack & Woodruff [20] suggested some years ago, some non-human primates have some embryonic metarepresentational ability too. But the evidence is still controversial. In any case, this ability is deeply at work in ordinary human interactions: in verbal and non-verbal communication, the task of the addressee is always to determine the communicative intention of the person on the other end of the chain. To see this, imagine the plausible situation in which I'm sitting in a café in front of a woman. After a few minutes, I may, justifiably or not, come to form a belief with the following internal structure:

 

[she wants [me to believe [that she does not know [that I am looking at her]]]]

 

I might come to entertain a thought with such a content as a result of non-verbal communication. Notice that this innocent-looking thought is a metarepresentation of the woman's desire and it has four levels of complexity, as indicated by the relevant brackets. Suppose now that Mary, an English speaker, utters the sentence "It is hot in here". She thereby expresses a proposition about temperature. Plausibly, from my understanding of what the speaker said, I may reach a conclusion with the following structure:

 

[Mary intends [me to believe [that she wants [me to open the window]]]]

 

In verbal communication, the content conveyed by an utterance is always richer than the linguistic meaning of the sentence actually uttered. The sentence uttered by Mary does not contain either the noun "window" or the verb "to open". So Mary did not tell me in so many words that she wanted me to open the window. Rather, this is something which she implicitly conveyed to me by means of her utterance. That she wants me to open the window is something which I infer from what she said explicitly together with cues from the context of utterance. Again, my conclusion is a metarepresentation of Mary's intention and, as indicated by the relevant brackets, it has four levels of complexity.
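The four levels of embedding in these two conclusions can be made vivid with a toy data structure (my own illustration, not a formalism from the text): each attitude is a (holder, attitude, content) triple, and the innermost content is a plain first-order proposition.

```python
# Toy sketch of nested metarepresentations; the triples and the depth
# count are my illustration of the bracketed structures in the text.

def depth(rep):
    """Count levels of representational complexity: a bare proposition
    counts as one level, and each embedding attitude adds one."""
    if isinstance(rep, str):
        return 1                       # first-order representation
    holder, attitude, content = rep
    return 1 + depth(content)

# [she wants [me to believe [that she does not know [that I am looking at her]]]]
cafe_thought = ("she", "wants",
                ("I", "believe",
                 ("she", "does not know", "I am looking at her")))

# [Mary intends [me to believe [that she wants [me to open the window]]]]
mary_thought = ("Mary", "intends",
                ("I", "believe",
                 ("Mary", "wants", "I open the window")))
```

On this rendering, both thoughts come out four levels deep, matching the bracket count in the text.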

            Since intentions are thoughts, forming a thought about a communicative intention is forming a higher-order thought about a thought. As Grice [21] and other theorists of pragmatics such as Sperber & Wilson [22] have observed, communication is at bottom a matter of determining an intention, i.e., a mental state. In verbal communication, language provides evidence of the speaker's state of mind. As Nicholas Humphrey [23] has put it, human beings are born psychologists. They are mindreaders, to use the phrase of Simon Baron-Cohen [24]. Indeed, there is a fast-growing body of psychological literature devoted to the study of the ontogenetic development of the human ability to ascribe mental states to one's conspecifics, and to some possible phylogenetic precursors of this ability in non-human species (see e.g., Astington, Harris & Olson [25], Whiten [26], Carruthers & Smith [27]). According to some cognitive psychologists, just as they are endowed with a language instinct, humans are also uniquely endowed with a special-purpose cognitive capacity, which Leslie [28] calls ToMM (for theory of mind mechanism), and which underlies their ability to perceive, memorize and conceptualize the actions of their conspecifics. There is now some evidence suggesting that autism - which Baron-Cohen [24] calls mindblindness - might result from damage to this mindreading ability.

            Let's agree that one distinctive feature of human cognition is that humans have linguistic and metarepresentational capacities. Some philosophers are unwilling to attribute concepts, thoughts, cognitive maps and beliefs to non-human creatures who lack language. I am not. Insofar as they are able to categorize, make inferences and learn, many non-human animals may well have thoughts and beliefs. What I do claim with Dennett [29], however, is that only of humans is it uncontroversially true that they have the concept of representation. Only they have the concept of belief, the concept of thought or the concept of a cognitive map. Only they can keep a diary, draw non-mental maps, take notes about their thoughts and those of others, and memorize thoughts about thoughts. And with such higher-order concepts come the concepts of truth and falsity. Truth and falsity are properties of representations with propositional content. Representations with propositional content can be either linguistic or mental. It is then no coincidence that humans have both linguistic and metarepresentational capacities.

            There is of course a striking correlation between the advent of metarepresentational abilities and the emergence of a language instinct. It is, however, by no means clear that the latter caused the former or that the latter was a necessary condition for the former. On the picture of communication which derives from Grice [21] and which has been developed by Sperber & Wilson [22], in human communication, the task of the addressee is to determine the other person's intention. Language is a mere tool which allows the transmission of more specific intentions. From an evolutionary point of view then, it may well be that only creatures with the ability to think about thoughts, i.e., with metarepresentational capacities, were able to develop linguistic skills.  

 

3. The role of metarepresentation in human learning

            It is quite obvious why learning depends on memory. It is less obvious perhaps why human learning depends on metacognitive abilities. As Dan Sperber [30] has put it, metarepresentation is to human cognition what echolocation is to the bat. A creature with metarepresentational abilities will be able to store in her memory not only representations of states of affairs but higher-order representations of representations. This, I think, has two far reaching consequences. On the one hand, it will vastly enrich the kinds of attitudes she can take with respect to the information stored in her memory system. On the other hand, it will allow her to store in memory representations whose content she does not quite understand. Let me explain.

            First, it expands the set of attitudes a creature can take with respect to the information stored in her memory. I assume that the architecture of a human memory system must contain information about at least two kinds of representational states: belief-like (or doxastic) states, which reflect states of the world, and desire-like (volitional or goal-like) states, for which, as John Searle [31] says, the direction of fit is world-to-mind, not mind-to-world. As I have already noted, with the advent of metarepresentational abilities comes the possibility of forming desires about desires, desires about intentions, intentions about desires and so forth. I now want to reflect on the expansion of belief-like states.

            Consider the simple operation whereby we form beliefs about somebody else's beliefs. I may, for instance, have stored in memory the belief that so-and-so believes that witches have magical powers. I can do so without believing that witches exist, let alone that they have magical powers. I may have this higher-order belief about somebody else's belief in my memory without myself accepting the other person's belief, or even without accepting the ontological commitments of the other person's belief. This is metarepresentation without acceptance. It is extremely important in scientific controversies, as when two physicists disagree about whether atoms exist. Furthermore, belief, which itself comes in degrees, can now give way to such states as wondering or doubting. I may wonder or doubt whether such and such a hypothesis is true without committing myself to its truth. I may suspend judgment until, as we say, more evidence comes in. Suspension of judgment, which is so important in science, is made possible by the advent of metarepresentational capacities. In order to suspend judgment on a hypothesis about electrons, I must be able to consider explicitly my hypothesis as a representation of electrons.

            Second, metacognition allows any of us to memorize representations whose propositional contents we don't quite understand. Existing computers can only store in their memory buffer strings of symbols which they can "understand", or at least process according to inflexible syntactic and semantic rules. In this respect, we are not like computers. There are things we believe without knowing exactly what it is that we believe. Here, I think, the concept of truth plays a crucial role. As children, and as grown-ups as well, we are constantly bombarded with utterances whose exact contents escape us but whose sources we trust because we take them to be reliable, knowledgeable or authoritative. The content of an utterance might escape me either because I lack the necessary conceptual resources or because I simply failed to hear what someone said. However, I may believe with good reason that the speaker is reliable. If so, I may have good reason to believe that what she said is true.

            Consider four examples of representations which I may accept as parts of metarepresentations merely on the basis of my confidence in their respective source:

 

(1) The teacher says that the set of real numbers is larger than the set of integers.

(2) Mummy believes that racism is bad.

(3) According to Quine, to be is to be the value of a variable.

(4) Copernicus has established that the Earth revolves around the Sun.

 

In each of (1)-(4), a given proposition appears as part of a broader structure. In (1), the proposition that the set of real numbers is larger than the set of integers is entertained as the content of something said by the teacher. In (2), the proposition that racism is bad appears as the content of Mummy's belief. In (3), the proposition that to be is to be the value of a variable is a view attributed to Quine. In (4), the proposition that the Earth revolves around the Sun is entertained as something for which Copernicus provided conclusive evidence. The English sentences expressing such propositions, which are syntactic parts of some larger sentence, are often called "that"-clauses. They are used to characterize the content of some person's belief, view or doctrine. I shall say that propositions expressed by "that"-clauses in (1)-(4) are metarepresented.

            The hypothesis I now want to consider is that there is a stage in human learning at which propositions such as those expressed by the "that"-clauses in (1)-(4) can only be metarepresented. In other words, there is a stage at which some conceptual representations can only be entertained as means of characterizing the content of some larger higher-order representation. They cannot be entertained as such or in isolation. Consider a child who first hears about real numbers. She has just learned that the set of integers is an infinite set. So, whatever real numbers are, she does not quite understand how an infinite set, such as the set of integers, could be smaller than the set of real numbers. Strictly speaking, she does not understand which proposition is expressed by the utterance "the set of real numbers is larger than the set of integers". How could any set be larger than an infinite set? However, she may accept (1) merely on the basis of her confidence in her teacher and then try to make sense of her teacher's claim. Similarly, Quine famously held the view which is ascribed to him in (3). It may well be rational for a first-year philosophy student who does not quite grasp the point of Quine's doctrine to store (3) in her long-term memory for later consideration, on the assumption that perhaps Quine's fame is not undeserved after all.
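For the record, the teacher's claim in (1) is a standard result (Cantor's theorem, not developed in the text): the integers can be enumerated, whereas the diagonal argument shows that no enumeration of the reals can be complete, since any proposed list omits the real whose n-th digit differs from the n-th digit of the n-th listed real. In the usual notation:

```latex
|\mathbb{Z}| = \aleph_0 \;<\; 2^{\aleph_0} = |\mathbb{R}|
```

So the child's puzzle has a genuine resolution: two infinite sets can indeed differ in size.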

            Suppose someone I trust - Mary - utters a sentence whose meaning I don't understand, for whatever reason. Still, I can identify the proposition expressed as "what Mary said" or "what Mary thinks". Now, what possession of the concept of truth can do for me is allow me to construct a full proposition of the form "what Mary said is true" or "what Mary thinks is true". Note that I can now believe that what Mary said is true even though I don't know what Mary said. I form the belief that what Mary said is true on the basis of my confidence in Mary's judgment.

            Similarly, a child may hear her school-teacher mention Pythagoras' theorem. Suppose she does not know or understand what Pythagoras' theorem is. "Pythagoras' theorem" just names or refers to a proposition. On the basis of her confidence in her school-teacher, the child may store in her memory the belief that Pythagoras' theorem is true. In order to have such a belief in her memory, let's suppose that the child must know that "Pythagoras' theorem" is a definite description referring to some provable proposition of Euclidean geometry. She may know this without knowing which true proposition that is. At first, the words "Pythagoras' theorem" might serve to tag an empty file in her memory. Later perhaps, she will add to her file a bunch of symbols expressing the content of Pythagoras' theorem. And later still, she will acquire the concepts expressed by the symbols and be able to identify the proposition which is Pythagoras' theorem. Then and only then, when she can identify which true proposition is referred to by the description "Pythagoras' theorem", will she be able to relate it to her knowledge of other propositions of Euclidean geometry and use her belief that a² + b² = c² directly in her own geometrical reasoning.
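The three stages of this story can be sketched as a small program (my own illustration, not the author's model): the name first tags a near-empty memory file held true on testimony, symbols are added later, and only at the last stage does the stored content do inferential work.

```python
# Schematic sketch of the "Pythagoras' theorem" memory file at three stages.

memory = {}

# Stage 1: "Pythagoras' theorem" tags an almost empty file; the child
# holds it true purely on the teacher's authority.
memory["Pythagoras' theorem"] = {
    "content": None,
    "held_true": True,
    "source": "school-teacher",
}

# Stage 2: symbols expressing the content are filed, not yet understood.
memory["Pythagoras' theorem"]["content"] = "a**2 + b**2 == c**2"

# Stage 3: the concepts are acquired, and the stored belief can be used
# directly in the child's own reasoning (here, checking a 3-4-5 triangle).
def usable_in_reasoning(entry, a, b, c):
    return eval(entry["content"], {"a": a, "b": b, "c": c})

holds = usable_in_reasoning(memory["Pythagoras' theorem"], 3, 4, 5)
```

The point of the sketch is only that the file can sit in memory, flagged as true, long before its content is available for use.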

 

Conclusions

            Let me conclude with two speculations. First, there are two relevant differences between those representations which can be directly stored in an individual's long-term memory and those which an individual enters into his memory in a metarepresentational format. On the one hand, the former can be (or could be) directly formed by perception, whereas the latter are intrinsically dependent on verbal communication. This applies in particular to representations which are entertained as a result of explicit learning, as in science. You could not, for example, entertain the belief that the set of real numbers is larger than the set of integers as a result of perception. Nor could you entertain Quine's doctrine that to be is to be the value of a variable as a result of perception. True, if you learn in Paris what the weather is like in Boston from a phone conversation, then your belief derives from communication. But had you been in Boston, you could have derived your belief from perception. Furthermore, this difference between the two channels - perception and verbal communication - via which the two sorts of representations can reach an individual's long-term memory system is correlated with a difference in ease of understanding. The contents of representations which an individual could entertain as a result of perception are by and large easier to grasp than the contents of representations which an individual can only entertain as a result of verbal communication.

            On the other hand, many of our belief-like representations of our environment which arise from perception and are stored directly in memory must be harnessed to our motor program to guide our actions. Since, however, we may store in memory many of our belief-like representations in a metarepresentational format without taking them to be representations of our environment, there must exist inhibitory mechanisms severing the connections between the memory system containing representations metarepresented and our motor program. So one question we might ask is whether long-term memory in human beings is not partitioned into two separate compartments one of which is devoted to the storage of representations which can be directly stored in memory and the other of which is devoted to the storage of representations which must be metarepresented in order to reach an individual's memory system.

            Second, I mentioned earlier the hypothesis that autism might consist primarily in the impairment of a metarepresentational cognitive module which Leslie calls ToMM (theory of mind mechanism). Now, there is another syndrome, called Williams' syndrome, which seems like a mirror-image of autism. In particular, as reported by Carey (cf. Medin & Ross [32]), children with Williams' syndrome seem capable of good linguistic performance and seem to have an intact theory of mind: they are extremely sociable. However, their conceptual development is severely limited. If so, this would suggest that the ability to store metarepresentations in memory should in fact be partitioned into at least two sorts of memory system: one system might be dedicated to storing psychological representations of the intentions, beliefs and desires of conspecifics, as a basis for human social interactions; another memory system would be dedicated to the storage of non-psychological representations, such as representations of the physical and biological world, numerical representations and representations of geometry.

 

 

 

References

 

1. Dretske, F. 1981. Knowledge and the Flow of Information. MIT Press, Cambridge, Mass.

2. Dretske, F. 1988. Explaining Behavior. MIT Press, Cambridge, Mass.

3. Dretske, F. 1995. Naturalizing the Mind. MIT Press, Cambridge, Mass.

 

4. Fodor, J.A. 1987. Psychosemantics. The Problem of Meaning in the Philosophy of Mind. MIT Press, Cambridge, Mass.

 

5. Fodor, J.A. 1990. A Theory of Content and Other Essays. MIT Press, Cambridge, Mass.

 

6. Millikan, R.G. 1984. Language, Thought and Other Biological Categories. MIT Press, Cambridge, Mass.

 

7. Millikan, R.G. 1993. White Queen Psychology and Other Essays for Alice. MIT Press, Cambridge, Mass.

 

8. Jacob, P. 1997. What Minds Can Do. Cambridge University Press.

 

9. Locke, J. 1690. An Essay Concerning Human Understanding (Cranston, M. ed.), MacMillan, 1965.

 

10. Baddeley, A.D. 1986. Working Memory. Oxford University Press, Oxford.

 

11. Meunier, M., Bachevalier, J. & Mishkin, M. 1994. L'anatomie de la mémoire. La Recherche. 267, 760-66.

 

12. Rips, L.J. 1995. Deduction and Cognition. In: Thinking. An Invitation to Cognitive Science (Smith, E.E. & Osherson, D.N. eds.), MIT Press, Cambridge, Mass.

 

13. Jonides, J. 1995. Working Memory and Thinking. In: Thinking. An Invitation to Cognitive Science (Smith, E.E. & Osherson, D.N. eds.), MIT Press, Cambridge, Mass.

 

14. Jeannerod, M. 1994. The Representing Brain: Neural Correlates of Motor Intention and Imagery. The Behavioral and Brain Sciences 17, 2, 187-245.

15. Pinker, S. 1994. The Language Instinct. Penguin, London.

 

16. Chomsky, N. 1980. Rules and Representations. Columbia University Press, New York.

 

17. Metcalfe, J. & Shimamura, A.P. (eds.), 1994. Metacognition: Knowing about Knowing. MIT Press, Cambridge, Mass.

 

18. Rosenthal, D. 1986. Two Concepts of Consciousness. Philosophical Studies 49, 3, 329-59.

 

19. Perner, J. 1991. Understanding the Representational Theory of Mind. MIT Press, Cambridge, Mass.

 

20. Premack, D. & Woodruff, G. 1978. Does the Chimpanzee have a Theory of Mind? The Behavioral and Brain Sciences 1, 4, 515-26.

 

21. Grice, P. 1989. Studies in the Way of Words, Harvard University Press, Cambridge, Mass.

 

22. Sperber, D. & Wilson, D. 1986. Relevance: Communication and Cognition. Harvard University Press, Cambridge, Mass.

23. Humphrey, N. 1984. Consciousness Regained. Oxford University Press, Oxford.

 

24. Baron-Cohen, S. 1995. Mindblindness. An Essay on Autism and Theory of Mind. MIT Press, Cambridge, Mass.

 

25. Astington, J.W., Harris, P.L. & Olson, D.R. (eds.), 1988. Developing Theories of Mind. Cambridge University Press, Cambridge.

 

26. Whiten, A. (ed.), 1991. Natural Theories of Mind: Evolution, Development and Simulation of Everyday Mind Reading. Blackwell, Oxford.

 

27. Carruthers, P. & Smith, P.K. (eds.), 1996. Theories of Theories of Mind. Cambridge University Press, Cambridge. 

 

28. Leslie, A. 1994. Pretending and Believing: Issues in the theory of ToMM. Cognition 50, 211-38.

 

29. Dennett, D.C. 1996. Kinds of Minds. Basic Books, New York.

 

30. Sperber, D. 1997. Intuitive and Reflective Beliefs. Mind and Language 12, 1, 67-83.

 

31. Searle, J. 1983. Intentionality. An Essay in the Philosophy of Mind. Cambridge University Press, Cambridge.

 

32. Medin, D.L. & Ross, B.H. 1996. Cognitive Psychology. Second edition, Harcourt Brace College Pub., Fort Worth.