State Consciousness Revisited[1]

 

 

Pierre Jacob

EP100, CNRS, France.

 

 

 

 

My goal in this paper is to defend the so-called "higher-order thought" theory of conscious mental states, which has been presented in various places by Rosenthal (1986, 1990, 1993, 1994), from a pair of objections recently advanced by Dretske (1993; 1995). According to the version of the "higher-order thought" (henceforth HOT) theory of conscious states which I have in mind, none of my mental states will be a conscious state unless I am conscious of it. The intuition behind this view - which I find appealing - is that a mental state of which a person is completely unaware counts as a non-conscious (or unconscious) mental state. I think that some of the intuitions underlying Dretske's views can be reconciled with an amended version of the HOT theory. In particular, I will recommend incorporating into the HOT theory the concept of a state of consciousness, intermediate between the concept of creature consciousness and the concept of state consciousness (or the notion of a conscious state).[2] Before defending the amended version of the HOT theory of conscious states against Dretske's attack, however, I want to say a word about the representationalist approach to consciousness, according to which some of the mysteries of consciousness might be unravelled by a prior account of intentionality.

 

1. The representationalist strategy

It is a commonplace in philosophy of mind that human minds seem to be inhabited by two sorts of states having two characteristic features: propositional attitudes and qualia, sensations or conscious experiences. Propositional attitudes are typically states having content or intentionality. Qualia are states supposed to have some subjective, perhaps intrinsic, quality. A creature's undergoing conscious experiences - his, her or its experiencing qualia - has been widely held - at least since Nagel (1974) - to be constitutive of what it is like to be that creature. On my understanding of consciousness, what it is like to be a certain creature depends on (or is a function of) what it is like to be in various possible experiential states or to have various possible sensory experiences. What it is like to be a given creature is, if you like, defined by a set or a spectrum of possible sensory experiences. Furthermore, I see no serious difference between what it is like to be in a state - to undergo a sensory experience - and what Block (1990; 1994) calls phenomenal consciousness.[3]

I want to start this paper by making a confession. I think I understand some of the problems raised by intentionality; I find, however, the so-called problems of consciousness much more obscure. Not only do I find the problems of intentionality somewhat more tractable than the problems of consciousness, but I also think that intentionality is the more fundamental of the two features of mental states. So my inclination is to try and derive some understanding - however feeble - of the problems raised by consciousness from a prior account of intentionality. I will call this strategy the representationalist strategy. Accepting this strategy puts me, I suppose, on the same bandwagon as other representationalists such as Dretske (1993; 1995) and Dennett (1994), who writes:

 

As the title of my first book, Content and Consciousness (1969) suggested, that is the order in which they must be addressed: first, a theory of content or INTENTIONALITY - a phenomenon more fundamental than consciousness - and then, building on that foundation, a theory of consciousness.

 

It seems to me quite uncontroversial that an individual's propositional attitudes can be unconscious. Unlike an individual's propositional attitudes, however, it is not clear whether an individual's sensory experiences may be unconscious. One reason, therefore, why I find the issues of consciousness so perplexing is that it is not obvious whether the property of a creature's quale in virtue of which there is something it is like to be this creature should be thought of as consciousness or as a sensory property or quality which a state might have independently of being a conscious state. I think I understand what it is for a conscious experience to have a sensory quality. Seeing a red rose, smelling a perfume, tasting a wine, hearing the sound of a violin are all states with distinct sensory qualities. Are these sensory properties features of so-called phenomenal consciousness? Can such states have their distinctive sensory property and not be conscious? This I find a difficult issue, to which I will try to provide a rather simple answer, based on the notion of a state of consciousness (to be distinguished both from the notion of state consciousness and from that of creature consciousness).

Although I would certainly not claim that the features of a mental state which make it an intentional state are crystal clear and perfectly well defined, there is, however, a motley of properties which can be said to constitute intentionality. To say of an individual's belief that it possesses intentionality is at least to say the following: that the individual's belief is about some state of affairs in the individual's environment - about, e.g., the fact that some object a from the environment possesses property F. The belief is a representation of the fact that a is F. Now, to say that much is at least to say three things. It is to say first that beliefs have a high degree of intensionality or referential opacity. For example, even though water is necessarily H2O, I can believe that the glass in front of me contains water without believing that it contains a liquid composed of H2O molecules. Secondly, if a exists and if it is F, then the individual's belief that a is F is true. Otherwise - if a is not F - it is false; or perhaps - if object a does not exist - the belief is neither true nor false. So mental states having intentionality, such as beliefs, have semantic properties. A state with intentionality (or semantic properties) can be true or false. Thirdly, not only are beliefs about real existing states of affairs; they can also be about possible and even impossible states of affairs. For example, I can believe that the greatest integer is a prime number; I can have the desire to ride a unicorn. Perhaps a belief about an impossible state of affairs should be said to be neither true nor false.

I am willing to take a realist standpoint on intentionality and assume that the semantic properties of an individual's propositional attitudes are genuine properties of the individual's brain. I take it that the burden of realism in this area is twofold: first, it is incumbent upon an intentional realist like me to show how the semantic properties of an individual's propositional attitudes can be derived from non-intentional properties and relations of the individual's brain (or mind). This is the task of naturalizing intentionality. Second, it is incumbent upon an intentional realist to show that the semantic properties of an individual's propositional attitudes make a causal difference or a causal contribution, i.e., that they are causally efficacious in the production of the individual's intentional behavior. I shall say no more about the causal efficacy of intentionality here. My own strategy towards naturalizing intentionality, which owes a great deal to the work of Dretske (1981; 1988), is an informationally based teleosemantic approach. The representationalist strategy I favor is therefore a two-step strategy: first, try and derive the semantic properties of an individual's mind from non-intentional properties and relations of the individual's mind. Then, try and derive features of consciousness from intentionality. A good example of what I call the representationalist strategy is provided, it seems to me, by the following quote from Evans (1982: 158):

 

... although it is true that our intuitive concept [of conscious experience] requires a subject of experience to have thoughts, it is not thoughts about the experience that matter, but thoughts about the world. In other words, we arrive at conscious perceptual experience when sensory input is not only connected to behavioral dispositions... - perhaps in some phylogenetically more ancient part of the brain - but also serves as the input to a thinking, concept-applying and reasoning system; so the subject's thoughts, plans, and deliberations are also systematically dependent on the informational properties of the input.

 

I want to interpret this passage as suggesting that only if the information which is the (non-conceptual) content of a creature's perceptual or sensory state can be fed into a conceptual kind of representation can the perceptual informational state count as a conscious experience. A radical version of the position I am attributing to Evans here would be: only if a creature has the conceptual ability to form belief states can her perceptual information-processing state be counted as a conscious experience. Only if she can form beliefs can she have conscious experiences. In Dretske's (1981) terms, unless the information analogically coded by an information-carrying state can be digitalized (recoded into digital form), the state carrying the analogically coded information cannot count as a conscious experience. Unless the information is available for a process of digital recoding (or digitalization), the information-carrying state will not qualify as a conscious experience. If this were true, and assuming - as I do - that information is a crucial ingredient of intentionality, then this would provide a rationale for the representationalist strategy according to which an understanding of consciousness ought to derive from an understanding of intentionality.

What I here call the representationalist strategy, therefore, assumes or presupposes that consciousness is not the criterion of the mental. Mental states may, as I suggested above, be either intentional states (states with propositional content) or sensory states (states with sensory properties). Some states with intentional content (or semantic properties) may be conscious; others may be unconscious. Although it is more controversial, I am going to assume that sensory states too can be unconscious. In other words, I am going to assume that the sensory property of a mental state having such a property and its property of being conscious are two distinct properties and that the state may have one independently of the other.

According to one alternative anti-representationalist strategy, which perhaps may be linked to the Cartesian legacy, consciousness is constitutive of the mental. Perhaps we might distinguish two versions of the Cartesian legacy: a strong one and a weaker one. According to the stronger one, no state can be mental - whether the state has intentional content or some sensory property - unless it is conscious. This is a strong view for it precludes beliefs and desires from being unconscious. A weaker version of the Cartesian tradition might claim that only sensory states - only states having sensory properties - must be conscious states. Propositional attitudes, on the weaker construal of the Cartesian tradition, may be unconscious. One strong anti-representationalist version of the Cartesian tradition has been recently revivified by Searle (1992) who criticizes the representationalist strategy on two major grounds.

First, Searle rejects the approach to the naturalization of intentionality based on teleological ideas - on functions - because, on his view, all function-ascriptions are relative to conscious agents having propositional attitudes. Not only does the fact that an artefact has a function depend, on Searle's view, upon the propositional attitudes of the person who designed it or uses it, but the function of a biological organ too depends upon the propositional attitudes of a conscious agent. The function of a biological organ presumably depends upon the propositional attitudes of the biologist who is investigating the biological organ. I will not argue here against Searle's thesis of the priority of intentionality over biological functions, which, if accepted, would, I think, indeed undermine the strategy of a teleosemantic approach to the task of naturalizing intentionality by making the approach circular. I will merely register my disagreement with him on this score. Consider the worn claim that the function of the heart is to pump blood. Assume that pumping blood is something a normal heart can (and ought to) do: there is a causal relation between an organism's heart and blood circulation in this organism. Of course, a heart produces many other effects - such as making a thumping noise. In a nutshell, as the etiological theory of functions suggests, the function of a heart - i.e., pumping blood - is one among its many effects which has been singled out by a process of natural selection. Searle feels no inclination to assume that the causal relation between a cause and its effect presupposes intentionality. In other words, he assumes a metaphysical realist picture of the causal relation. Unlike him however, I do not think that the process of natural selection whereby a particular causal relation gets singled out presupposes intentionality any more than the causal relation does in the first place.

Secondly, by appealing to his famous Connection Principle, Searle wants to rule out the possibility that genuine intentional mental states be non-conscious states. According to the Connection Principle, all "intrinsically" intentional mental states must be potentially conscious or available to consciousness. He, therefore, wants to use the Connection Principle to justify his further thesis of the priority of consciousness over intentionality. Although I do not want to argue this in detail here, the reason I do not accept the Connection Principle - and the reason why I therefore reject Searle's thesis of the priority of consciousness over intentionality - is that I suspect (as do Block 1990 and Chomsky 1990) that, unless the notion of potential availability of a mental state to an individual's consciousness is further specified, the Connection Principle will remain vacuous or irrefutable.

My first reason for rejecting the Connection Principle is that it might be said to be trivially satisfied by the work of such cognitive scientists as Chomsky and Marr, who posit deeply unconscious mental states, and against whose work, presumably, the Connection Principle is directed. Chomsky's and Marr's theories bring to the conscious awareness of some minds systems of rules, representations and computations which would otherwise remain unconscious. Should we then say that such unconscious rules, representations and computations, therefore, are potentially conscious in Searle's required sense? Searle would most certainly not want to count such rules, representations and computations as potentially conscious, since the Connection Principle is explicitly designed to exclude such states and computations from the realm of the mental. The general problem is this: representations and computations which are inaccessible to some creature's consciousness - e.g., our consciousness - might nonetheless turn out to be consciously accessible to the minds of other, better-endowed creatures. Would that make the representations and computations potentially conscious? If not, why not? Presumably, the reason it would not is that Searle's intended version of the Connection Principle is that for an agent's state to be a genuine intentional state (as opposed to a mere neurophysiological state), the content of the state must be potentially accessible to the agent's conscious awareness at the moment when the state is causally efficacious in interacting with other states of the agent or in contributing to the agent's own intentional behavior. It is not enough that it be accessible to the awareness of some third-person observer, let alone to the consciousness of some member of another species.

But to see why this latter constraint will not be sufficient to protect the Connection Principle from vacuity, consider now the semantic facilitation obtained by Marcel (1983), where semantic information about a word seems to be extracted unconsciously in subliminal perception by a subject. Is the information-processing state of the subject whereby he or she unconsciously extracts semantic information about word-meaning potentially conscious? Had the word been presented slowly enough to the subject, the content of his or her experience would have been available to his or her conscious awareness. Nothing in the Connection Principle rules out this answer, it seems to me. And this is why I think this principle is vacuous.

I will henceforth assume the correctness of the representationalist strategy. I will now sketch the HOT theory of conscious states.

 

2. Creature consciousness, state consciousness and the HOT theory

As Rosenthal (1986, 1990, 1993, 1994) has made clear in a number of publications, it is useful to distinguish what he calls creature-consciousness from state consciousness. Furthermore, the notion of state consciousness is the more puzzling or the more problematic of the two notions. The strategy underlying the HOT theory of conscious states is, therefore, to account for the more mysterious notion - the notion of state consciousness - in terms of the less mysterious notion - the notion of creature-consciousness.

There are two complementary ways in which a creature may be said to be conscious. First, creature-consciousness is, as Rosenthal says, a biological phenomenon consisting in the fact that the creature is awake or is not unconscious. In other words, a creature is conscious if she is normally responsive to ongoing stimuli. Creature-consciousness in this sense is, as Rosenthal calls it, intransitive. It is, furthermore, a property a creature can lose and regain periodically. A creature can lose it by falling asleep, by being knocked out in various ways, by being drugged, by being comatose, and so on. She can regain it by waking up.

Secondly, a creature can be conscious of things, properties and relations in his or her environment. Rosenthal calls the latter transitive creature-consciousness. Unlike the non-transitive notion of creature-consciousness, which is not distinctly mental, I take the notion of transitive creature-consciousness to be distinctly mental. Whether or not a person may be unconscious in the non-transitive sense and still be conscious of something (as in dreams), I will leave open. What seems to me unproblematic is that if a person is conscious of something, i.e., if he or she is transitively conscious of something, then he or she is non-transitively conscious. A person may be visually conscious of the red rose across the window; or she may be conscious of the perfume of the woman next to her; or she may be conscious of the sound of a violin; or she may be conscious of the taste of a strawberry in her mouth. In any of these cases, in which the person is transitively creature-conscious of various things and properties in her environment, the person is also non-transitively creature-conscious.

Consider now the notion of a conscious mental state. A conscious mental state may be either a propositional attitude having intentional content or a sensory experience having a sensory property. To say of a mental state that it is conscious is obviously not the same thing as saying of the creature whose state it is that she is conscious. Neither is it to say that she is non-transitively conscious, nor that she is transitively conscious of something. One may assume that a person is transitively conscious in virtue of being in some state or other: for example, I am conscious of my lap-top in front of me in virtue of perceiving it. But I am conscious of my lap-top in virtue of a great many states and processes occurring within me at a subpersonal level. There is no a priori reason why any of the states occurring within me in virtue of which I am conscious of my lap-top when I perceive it visually must be a conscious state. It might be that none of my states which are necessary for me to be conscious of my lap-top is a conscious state. That a state or process is a necessary condition for a person to be transitively conscious of something does not make the state or process conscious.

First, when one of my states is a conscious state, consciousness is - unlike my being conscious - a property of the state, not a property of the creature whose state it is. Second, state consciousness is intransitive. I take the crux of the HOT theory of conscious states to be the view that for one of my mental states to be conscious, I have to be creature conscious of it. This seems to me both simple and correct:

 

(1) If a creature is completely unaware of one of her mental states, then the state in question is unconscious.

 

Again, the mental state can be either a propositional attitude with intentionality or an experience with a sensory property. From the truth of conditional (1), it follows by contraposition that

 

(2) If a creature's mental state is conscious, then the creature must be somehow conscious of it.

 

We can readily see why (2) states a necessary condition on state consciousness without, however, stating a sufficient condition. Imagine a psychoanalytical situation, and suppose that, just like everybody else, I have the desire to kill my father and I am unaware of my desire. I then have a repressed unconscious desire to kill my father. Suppose the psychoanalyst now tells me of my unconscious desire. Suppose further that I have the greatest ideological respect for psychoanalysis and my psychoanalyst is my guru. So I believe everything she tells me. So I now believe that I have the desire to kill my father; but I still fail to feel any conscious urge to kill my father. Then, although I am now aware of my desire to kill my father, I have not been made aware of it in the way required to make my desire a conscious state. I believe that I have the desire to kill my father because my psychoanalyst says I do and I believe her. But I have come to form the belief about my desire inferentially by means of communicating with my psychoanalyst, and I assume that communication with my psychoanalyst, as with anybody else, is an inferential process. The way I must be conscious of my mental state (my desire) for it to be conscious is that I must be directly, non-inferentially conscious of it. Even though I do not know exactly how to specify to my satisfaction the appropriate notion of direct consciousness of a mental state, it seems to me clear that if I acquire inferentially the belief that I have a desire to kill my father by listening to my psychoanalyst, in conjunction with my assumption that my psychoanalyst is an authority on my mental states, and I furthermore do not experience any urge to kill my father, then I am not directly and non-inferentially conscious of my desire to kill my father.
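The logical relation between (1) and (2) can be made fully explicit. Writing C(s) for "s is a conscious state" and A(c, s) for "creature c is somehow conscious of her mental state s" (the regimentation is mine, not Rosenthal's), (2) is simply the contrapositive of (1):

\[
\lnot A(c,s) \rightarrow \lnot C(s) \quad\Longleftrightarrow\quad C(s) \rightarrow A(c,s)
\]

As the psychoanalytic example shows, the consequent of (2) must then be strengthened - to direct, non-inferential consciousness of s - before the conditional can be converted into a sufficient condition.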

Going back to the notion of transitive creature-consciousness, there are, it seems to me, two broad ways a person can be directly or non-inferentially conscious of anything at all - of objects, events, states, properties, relations. First, a person may be conscious of something by seeing it, by smelling it, by hearing it, or by touching it - in a word, by perceiving it. So I may be conscious of the color of a rose by seeing it; I may be conscious of my wife's perfume by smelling it; I may be conscious of the sound of a cello by hearing it and so on. Secondly, a person may be conscious of something by thinking about it or by having a thought about it. I am convinced by Rosenthal's criticisms of the perceptual model of how a person may be conscious of one of his or her mental states. According to the HOT theory of conscious states, as I understand it, a mental state - such as my desire to kill my father or my olfactory experience of my wife's perfume - is a conscious state of mine if I am directly and non-inferentially conscious of it in virtue of having a thought about it, not in virtue of experiencing it. When I am conscious of either my desire to kill my father or my olfactory experience in such a way that it makes sense to say that either my desire or my olfactory experience is a conscious state, then I have a higher-order thought about my state. I do not, so to speak, perceive my own state; I rather think about it. So the HOT theory of conscious states is committed to the claim that a person's state is conscious if the person entertains a higher-order thought - not a perceptual sensory state - about the first-order state in an appropriately direct, non-inferential way.

 

3. Adding the notion of state of consciousness to the HOT theory

I want presently to do four things. First, I want to register what seems to me a legitimate puzzlement about the HOT theory and start dispelling the puzzlement. Secondly, I want to relate a creature's higher-order thoughts involved in state consciousness to higher-order thoughts involved in thinking about other people's thoughts, which will lead me to relate the HOT theory to the idea that humans have general metarepresentational abilities. Thirdly, I want to distinguish the higher-order thoughts involved in state consciousness from genuine introspection. Last but not least, I want to show how the HOT theory can be amended to accommodate some of the intuitions which underlie Block's distinction between phenomenal and access consciousness. In the process, I will, I think, reach a point on which I disagree with Rosenthal's interpretation of the HOT theory.

First, then, one might find it astonishing that being (creature transitively) conscious of a state may confer intransitive consciousness onto the state. The astonishment arises when we consider the fact that our being conscious of so many things, properties and relations in our environment presumably does not make these things, properties and relations conscious. My occasionally being conscious of the moon at night - by visually perceiving it - does not make the moon conscious; it does not confer consciousness onto it. So how could my being conscious of one of my mental states turn it into a conscious state either? I am not sure I can satisfactorily dissolve the puzzle here. I want to say two things.

One source of puzzlement might derive from the underlying assumption that the kind of intransitive consciousness characteristic of mental states is a kind of "intrinsic" property. So how could my being conscious of it give it this intrinsic quality? If that is a source of puzzlement, then the appropriate response is that intransitive state consciousness is not such an intrinsic quality. The view is not that my becoming conscious of my state prompted a change in the state. The view is not that the state was non-conscious and then acquired intransitive consciousness as a result of my becoming conscious of it. Rather, intransitive state consciousness just is - consists in - the relation which holds between the state and some higher-order thought of mine. The property of a state of being a conscious state is a relational, not an intrinsic property of the state: it is a matter of the position so to speak of the state in an individual's cognitive architecture.

Correlatively, the second point is that the appropriate relation between the lower-order mental state and the higher-order thought is not a causal relation: the higher-order thought does not cause the lower-order state to become intransitively conscious or to acquire intransitive consciousness. Rather, what it is for the lower-order state to be intransitively conscious is to stand in relation to a higher-order thought. The relation between the higher-order thought and the lower-order state is, if you like, constitutive, not causal.

Secondly, I assume, on the basis of much recent psychological research (as illustrated by papers conveniently collected in Astington et al. (ed.) (1988), in Baron-Cohen et al. (ed.) (1993) and in Whiten (ed.) (1991), and to which I will return when I examine one of Dretske's criticisms of the HOT theory), that human beings have a metarepresentational ability which allows them to form representations about representations in the form of intentions about other people's intentions, beliefs about other people's beliefs, desires about other people's desires, intentions about other people's beliefs, intentions about other people's desires, beliefs about other people's desires, desires about other people's beliefs, and so on and so forth. This ability, I take it, is being studied by psychologists working in the "theory of mind" paradigm, who currently investigate its phylogenetic basis, its ontogenetic development in the human child, and some of its possible pathological alterations (as, e.g., in autism).

Of course, not any representation of a representation will count as a metarepresentation. A metarepresentation of a representation must include a reference to the content - the semantic property - of the representation which is being metarepresented. This is typically what is being accomplished by ordinary belief-ascriptions such as "John believes that Montreal is north of New York". If one assumes, as I do, that John's belief is a mental representation of a state of affairs - the relation x is to the north of y holding of two cities, Montreal and New York -, then the belief-ascription will be a linguistic higher-order representation of a mental representation of a state of affairs. A belief-ascription is therefore a linguistic higher-order representation of John's belief. If one assumes that the "that"-clause in the belief-ascription involves a reference to the content of John's belief, then one can see how the belief-ascription can be said to be a metarepresentation of John's belief. Suppose we are token physicalists. So we assume that John's belief token is nothing but a token of a brain state of John's. Suppose we could, using magnetic resonance imaging techniques, obtain a representation of some of the physical properties of John's brain state token - which, by assumption, is no other than John's belief token. Then, we would have a representation of John's belief. But we would not thereby have a metarepresentation of John's belief, since the representation of John's belief state obtained by magnetic resonance imaging techniques would not, unlike the belief-ascription, contain a reference to the content of John's belief. A metarepresentation must display the representation metarepresented as a representation.

Presumably, there is a difference between my thoughts (e.g., beliefs) about your beliefs and my higher-order thoughts about my own mental states. When I say that I want to relate our ability to have higher-order thoughts about our own mental states - which, according to the HOT theory of conscious states, is at the root of state consciousness - with our metarepresentational ability to form propositional attitudes about other people's propositional attitudes, I do not want to erase all differences between thinking about one's own mental states and thinking about others' thoughts. I do think, however, that both higher-order thoughts about one's own mental states and thoughts about others' mental states are generated by an individual's metarepresentational abilities. What does distinguish my higher-order thoughts about your thoughts from my higher-order thoughts about my own mental states is that the latter, unlike the former, are direct and non-inferential, i.e., neither inferred nor based upon observation of behavior.

Thirdly, as Rosenthal insists, it is no objection to the HOT theory of conscious states to point out that when we are in a conscious mental state - either a belief or a state with some sensory (or phenomenal) property - we are not usually aware of having, in addition, a higher-order thought. The reason it is not an objection to the HOT theory is that, although the HOT theory says that if a person's state is conscious, then the person has a higher-order thought about it, the HOT theory does not, however, require the higher-order thought to be conscious. Actually, according to the HOT theory, for some higher-order thought T1 about some lower-order mental state to be conscious, a person must form a yet higher-order thought T2 about T1. In Rosenthal's (1994: 16) words, "not having conscious HOTs does nothing to show that we do not have HOTs that are not conscious". The standard case of a person's conscious mental state is, therefore, the case in which the person has a non-inferentially formed higher-order thought of which he or she is not conscious. This is not introspection. Introspection is the case in which a person's second-order thought itself is conscious: this happens when the person is having a third-order thought about the second-order thought (about a lower-order mental state). The person is then introspectively conscious when, by a process of deliberate attention, he or she is conscious of being conscious of having some mental state or other.

Finally, I want to consider the possibility of adding to our stock of notions the notion of a state of consciousness. By a state of consciousness, I mean a state a creature is in when she is creature-conscious. Now, according to Rosenthal (and as I said above), a creature can be intransitively or transitively conscious. If so, then a creature may be either in an intransitive state of consciousness - as when she is in pain - or in a transitive state of consciousness - as when she perceives something or thinks of something. Many of the states or processes necessary for a creature to be either intransitively or transitively conscious need not be themselves conscious states. Some of them will be; others will not. The notion of a state of consciousness will allow us to distinguish internal states of creatures which we do want to count as (creature-) conscious (e.g., various non-human animals and human babies) from internal states of systems which we do not want to so count (complex physical systems which process information without being conscious systems, such as photoelectric cells, thermostats or computers). It will allow us to distinguish internal states of conscious creatures from internal states of creatures devoid of creature consciousness without making the distinction dependent on the ability of conscious creatures to form higher-order thoughts about their lower-order internal states.

Now, I want to use this notion of a creature's state of consciousness to say how the enriched HOT theory of consciousness can accommodate some of the intuitions underlying Block's distinction between two kinds of consciousness: phenomenal consciousness and access consciousness. Phenomenal consciousness is what it is like to be conscious of things and properties. Access consciousness, as I understand it, is the property a state has if it is accessible for report and can guide rational action. Of the two, access consciousness is of course the property which fits most easily with the HOT theory of conscious states. Being the target (or the object) of a HOT, a conscious state is then - given that the creature has language and reasoning capacities - available for report and can serve as a premise in reasoning, which makes it access conscious in Block's (1990; 1994) sense. To be access conscious, therefore, is for a mental state to be the target of a HOT in a creature endowed with the appropriate cognitive capacities.

What about phenomenal consciousness? I do not want to prejudge the issue of whether propositional attitudes have any phenomenal property or not. I do not know whether there is anything it is like to have beliefs. Since there might well be something it is like to have desires, I want to remain open-minded about this. What is clear, however, is that there is something it is like to be in or to have sensory states, such as smelling a perfume, tasting an apple or seeing a red rose. So when I am in such a sensory state and I am conscious of a perfume, the taste of an apple or the redness of a rose, then there is something it is like to be conscious of the smell of the perfume, the taste of the apple or the color of the rose. So I want to treat Block's notion of phenomenal consciousness as a property or feature of a state of consciousness, i.e., as a property of a state a creature is in when she is creature-conscious. Now, a creature may be intransitively conscious or she may be transitively conscious of something in her environment. If a creature is in pain, for example, then she will be in a state of intransitive consciousness of pain in virtue of which there is something it is like to be in the particular pain she is in. Notice that, on this account, she need not be conscious of her state of pain to be in pain - i.e., in a state of intransitive consciousness. If she is transitively conscious of a red rose in her environment (in virtue of visually perceiving a red rose in her environment), then she will be in a sensory or perceptual visual state of transitive consciousness in virtue of which there is something it is like to visually experience a red rose. Notice that she need not be conscious of her perceptual or sensory state to enjoy her visual experience. In other words, the visual experience need not be a conscious state in the HOT theory sense - it need not be the target of a higher-order thought - in order to count as a state of consciousness such that there is something it is like to be in that state. Phenomenal consciousness, then, is a property of a creature's states of consciousness which arises in the creature when he or she is intransitively conscious or transitively conscious of things and properties.

Now, I want to register what I think is a disagreement with Rosenthal's own view. Importantly, on my view, it is not required that the state of consciousness - with a phenomenal property - in virtue of which a creature enjoys a sensory experience be itself a conscious state. Not all states of consciousness which have a phenomenal property need be conscious states. Only some states of consciousness in virtue of which there is something it is like to be in those states are conscious states. Those will be targets of the creature's higher-order thoughts. But as the following quote illustrates, Rosenthal does require what I call a creature's state of consciousness to be itself a conscious state for there to be anything it is like to be in the state in question:

 

When a sensory state is conscious, there is plainly something it is like for one to be in that state and hence conscious of some of its qualitative properties. But when a mental state is not conscious, we are not in any way conscious of being in that state. Since we are then not conscious of the state or any of its distinguishing properties, there will be nothing it is like for one to be in the state. State consciousness of sensory states does coincide with there being something it is like to be in the state... (Rosenthal 1993: 357-58).

 

One can of course accept the claim that when a sensory state is conscious, there is something it is like to be in that state. But from the fact that when a sensory state is conscious, there is plainly something it is like for one to be in that state, it certainly does not follow that when a sensory state is not conscious - when the creature is not conscious of being in that state by virtue of having formed a HOT about it - there is nothing it is like to be in that state. The transition from the premise to the conclusion can be avoided precisely by appealing to the notion of a state of consciousness (which is not a conscious state) and by the assumption that such a state can be the bearer of phenomenal properties, or that there is something it is like to be in such a state of consciousness.

 

4. Dretske's two criticisms of the HOT theory

Dretske (1993) contains a sustained argument for the provocative view that "an experience might be conscious without anyone - including the person having it - being conscious of having it" (ibid.: 263). As Dretske concedes, this view sounds quite paradoxical. Now, if a person's mental state can be conscious while no one - not even the person whose state it is - is conscious of it, then the HOT theory is just wrong. If a person's state of which the person was unaware could be a conscious state, then intransitive state consciousness could not consist in the person's having a higher-order thought about her state. I want to do two things: on the one hand I want to reconstruct and criticize the argument which I think leads Dretske to espouse this view; on the other hand, I want to show how I think I can accommodate most of the intuitions which I share with Dretske by using my notion of a state of consciousness.

I think Dretske relies on two crucial assumptions which I will label [D1] and [D2] respectively:

 

[D1] If a person S sees (hears, etc.) x (or that p), then S is conscious of x (that p).

 

[D2] If a person S is conscious of x or that p, then S is in a conscious state of some sort.

 

I will examine how he puts them to use in the analysis of one of his interesting examples. Consider the difference between the two sets of shapes Alpha and Beta:

[Figure: the two sets of shapes, Alpha and Beta.]

There is one difference between Alpha and Beta. Alpha contains a spot not contained in Beta. Call it Spot.


Dretske assumes that if you saw Alpha and Alpha contains Spot, then you saw Spot. If, furthermore, Spot is the difference between Alpha and Beta, then you saw the difference between Alpha and Beta. As Dretske emphasizes, in such an example, although you saw Spot and, therefore, the difference between Alpha and Beta, you might nonetheless fail to believe that Spot is a constituent (or a part) of Alpha. You might then fail to believe that there is any difference between Alpha and Beta. If you fail to believe that Spot is a constituent of Alpha and not a constituent of Beta - if you fail to believe that Alpha and Beta differ -, then you will fail to believe that Spot is the difference between Alpha and Beta. But, as Dretske (1969; 1979; 1981) has famously argued in a number of publications, you may fail to believe all of this and still may be said to have seen Spot and the difference between Alpha and Beta. The sense in which you may then be said to have seen Spot and the difference between Alpha and Beta is what Dretske (1969) called the "non-epistemic" sense of "see" and what he (1979) called "simple seeing". I'll call the sense in which you saw all the things Dretske says you saw without having any of the above beliefs "see_n" (with the subscript "n" for "non-epistemic").

Dretske (1993) further distinguishes what he calls thing-awareness from what he calls fact-awareness. Thing-awareness is the counterpart of simple non-epistemic seeing. Fact-awareness is the counterpart of epistemic seeing. If you saw_n Alpha and Beta, you are thing-aware of Alpha and Beta. Given that Spot is part of Alpha, as I already said, you cannot have seen_n Alpha without seeing_n Spot. So you are thing-aware of Spot. And given that you also saw_n Beta, you are also thing-aware of the difference between Alpha and Beta. Given, however, that you failed to form the beliefs that Spot is a constituent of Alpha and that Spot is not a constituent of Beta, you therefore failed to acquire the belief that Spot is the difference between Alpha and Beta. Given that you failed to form the belief that Spot is the difference between Alpha and Beta, you are not aware of the fact that Spot is the difference between Alpha and Beta: you are not fact-aware of this fact. Let us now see how we can derive the conclusion that your experience of Spot can be a conscious experience of which you are not conscious.

By applying [D1], it follows that you are conscious of Spot. You are in effect thing-aware of Spot. You are thereby thing-aware of the difference between Alpha and Beta. Again, this does not make you fact-aware that Spot is the difference between Alpha and Beta: you are not aware that Alpha and Beta differ in that the former, unlike the latter, includes Spot. By applying [D2], you must be in a conscious state of some sort. It follows that the experiential state in virtue of which you saw_n Spot is a conscious state: your experience of Spot is a conscious state. However, since you were not fact-aware that Spot is the difference between Alpha and Beta, you were not fact-aware of any difference between Alpha and Beta. A fortiori you were not fact-aware of any difference between your experience of Alpha and your experience of Beta. This is the sense in which your experience of Spot - i.e., your experience of the difference between Alpha and Beta - is a conscious experience of which you are not conscious: you are not fact-aware that you had the experience of Spot - or of the difference between Alpha and Beta.
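The derivation can be set out schematically. Writing TA(x) for "you are thing-aware of x", FA(p) for "you are fact-aware that p", and e for the experiential state in virtue of which you saw_n Spot (the regimentation and numbering are mine, not Dretske's):

\[
\begin{array}{lll}
1. & \text{You saw}_n \text{ Alpha, and Spot is a constituent of Alpha; hence you saw}_n \text{ Spot.} & \\
2. & \mathrm{TA}(\text{Spot}) & \text{from 1, by [D1]} \\
3. & e \text{ is a conscious state} & \text{from 2, by [D2]} \\
4. & \lnot\,\mathrm{FA}(\text{Spot is the difference between Alpha and Beta}) & \text{by hypothesis} \\
5. & \text{You are not conscious of having } e & \text{from 4} \\
\therefore & e \text{ is a conscious state of which you are not conscious} & \text{from 3 and 5}
\end{array}
\]

My two objections below target, respectively, the move from 1 to 2 (the application and the truth of [D1]) and the move from 2 to 3 (the truth of [D2]).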

I have two objections against the use of [D1] in the above argument, one against the application of [D1] to the particular case of being conscious of Spot, the other against the truth of [D1]. One can grant Dretske, I think, the importance of his "non-epistemic" or "simple" notion of seeing. But I think Dretske faces a dilemma which arises from his views about simple non-epistemic seeing.

I will start with my objection to the application of [D1] to the claim about the consciousness of Spot. If the intended relevant sense of "seeing" is simple non-epistemic seeing_n, then, it seems to me, from the fact that a person sees_n Alpha and from the fact that Spot is a constituent of Alpha, it does not follow, without auxiliary assumptions, that the person has seen_n Spot. Suppose Alpha is composed of atoms or other elementary particles. From the fact that a person sees_n Alpha, it does not follow that she has seen_n all the atoms or elementary particles of which Alpha is composed. It might well be that if a person sees_n Alpha, nothing much follows about her seeing_n (or otherwise) components of Alpha. Certainly, if a person X sees_n Alpha, then whether or not X sees_n all the elementary particles of which Alpha is composed, it does not follow that X is conscious of the elementary particles of which Alpha is composed. Suppose then that we grant Dretske that if X sees_n Alpha, X thereby sees_n Spot by virtue of the fact that Spot is a constituent of Alpha; it still does not follow that X is conscious of Spot.

Next, I am going to argue that Dretske faces a dilemma which arises from his views about simple non-epistemic seeing. The first horn of Dretske's dilemma is his comparison between simple non-epistemic seeing of an object and stepping on an object. Dretske (1969; 1979) has argued that in the simple or non-epistemic seeing of an object, a person need have no belief about the object to the effect that it instantiates any particular property. This is not to say that in simple non-epistemic seeing of an object, the person must lack any belief. It is to say simply that no belief is required for a person to see_n an object. Dretske (1969, 1979) has even linked simple or non-epistemic seeing of an object to stepping on it: seeing_n x, therefore, no more requires having beliefs about x than stepping on x does. I think I can accept Dretske's arguments for this view. This drives him to the view that:

 

[D3] If S sees_n an object x, then there is no property F of x such that S must believe that x is F.

 

Unless I am mistaken, this amounts to the claim that there is no aspect of object x which S must be able to identify in order to have seen_n x. So S may see_n x and relate to x under no mode of presentation of x. But then I do object to saying of a person who sees_n x in this way that she is conscious - thing-aware - of x at all. It seems to me that, for a person to be said truly (let alone felicitously or appropriately, in the pragmatic sense) to be conscious of x, she must be able somehow to identify or recognize it. In Dretske's (1981) terms, it is not enough for a person to be conscious of object x that the person is in some state which analogically carries information about x. She must be able to digitalize this information to some extent: she must extract from the information analogically coded about x some definite piece of information to the effect that x possesses some property or other. Consider stepping on x. When we say that a person may step on x and have no belief about x, we come close to saying that the person may step on x and not be conscious of x. So Dretske cannot have his cake and eat it: he cannot both maintain the analogy between simple non-epistemic seeing_n and stepping on something and make simple seeing_n into a sufficient condition of creature-consciousness - even if it is mere thing-awareness.

In the case described, I have seen Alpha and Beta. I am aware (or creature conscious) of Alpha and Beta. If Alpha happens, unlike Beta, to contain Spot, then I am (creature) conscious of Alpha, which happens to contain Spot - without of course being conscious that (or believing that) Alpha contains Spot. My experience of Alpha (which happens to contain Spot) need not be a conscious state. If I am not conscious of being in that state - if I have not formed a HOT about my experience - then my experience of Alpha is a state of consciousness, not a conscious state.

Now, one might argue that Dretske (1969: 20-30) has also linked simple non-epistemic seeing of an object to visual differentiation. But I think this is precisely the second horn of Dretske's dilemma.

 

[D4] If S sees_n x, then x must be visually differentiated from its immediate surroundings by S.

 

Suppose we accept [D4]; then perhaps we have a principle justifying [D1] where [D1] involves simple seeing: if S has in effect differentiated x from its surroundings, then S is conscious of x. But then, I suggest, it is hard to reconcile the acceptance of [D4] as a constraint on simple non-epistemic seeing with acceptance of [D3] and the comparison between simple non-epistemic seeing of x and stepping on x. In other words, it seems to me difficult to maintain both [D4] and [D3]: how could S visually differentiate x from its surroundings and still have no belief whatsoever about x that it is F, for any property F of x? If one accepts [D4], I suggest, then it is hard to maintain any separation between non-epistemic and epistemic seeing, or between thing-awareness and fact-awareness: the former collapses onto the latter. In a nutshell, Dretske cannot have it both ways: either he gives up the claim that simple non-epistemic seeing is like stepping on something, or he gives up the distinction between thing-awareness and fact-awareness.
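The dilemma can be stated compactly. Writing D(S, x) for "S visually differentiates x from its immediate surroundings" and B(S, Fx) for "S believes of x that it is F" (the regimentation, including the bridging premise (J) that I have been pressing, is mine, not Dretske's):

\[
\begin{array}{ll}
\text{[D3]:} & \Diamond \big( S \text{ sees}_n x \;\wedge\; \forall F\, \lnot B(S, Fx) \big) \\
\text{[D4]:} & S \text{ sees}_n x \;\rightarrow\; D(S, x) \\
\text{(J):} & D(S, x) \;\rightarrow\; \exists F\, B(S, Fx)
\end{array}
\]

Given (J), [D4] entails that seeing_n x requires having some belief about x, which is incompatible with [D3]. Hence Dretske must give up either [D4] (the differentiation constraint) or [D3] (the stepping-on analogy).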

My criticism of [D2] will be shorter. In reference to a famous example of Armstrong's - the example of a truck-driver who has been driving without being aware of his own mental states - Dretske (1993: 271) says that "the only sense in which [a state] is unconscious is that the person whose state it is is not conscious of having it. But from this it does not follow that the state itself is unconscious. Not unless one accepts a higher-order theory according to which state consciousness is analyzed in terms of creature-consciousness of the state". I want to return Dretske's compliment: I think [D2] in effect begs the question against the HOT theory. As I said above, not all states and processes occurring within a person and necessary to make the person conscious of things and properties need be conscious. So I do not think that it is a necessary condition upon a person's being conscious of something that any of the states she is in be a conscious state. Again, the perceptual states which allow Armstrong's truck-driver to be creature-conscious of his environment are not conscious states in the HOT theory sense; they are states of consciousness. There was something it was like for him to experience the driving even though he was not conscious of his experience.[4]

I now turn to Dretske's (1995) second criticism of the HOT theory. Chapter IV of Dretske (1995) is a sustained critique of the HOT theory of conscious states. My goal in the rest of the present paper is to show that most of Dretske's insights can be accommodated by the amended version of the HOT theory which I proposed above. As Dretske nicely writes (ibid.: 97):

 

Some people have cancer and they are conscious of having it. Others have it, but are not conscious of having it. Are there, then, two forms of cancer: conscious and unconscious cancer?

Some people are conscious of having experiences. Others have them, but are not conscious of having them. Are there, then, two sorts of experience: conscious and unconscious experiences?

Experiences are, in this respect, like cancer. Some of them we are conscious of having, others we are not. But the difference is not a difference in the experience. It is a difference in the experiencer - a difference in what the person knows about the experience he or she is having.

 

As I will argue, I can, I think, accept everything Dretske says in this passage, and I claim that his insight is compatible with the HOT theory of conscious states suitably enriched with the notion of a state of consciousness. If, however, Dretske thinks that his insight implies a wholesale rejection of the HOT theory of conscious states, then I think he is wrong. In fact, there is a strong reading and a weaker reading of the above passage; and the strong reading has a misleading implication. On the strong reading, what the comparison between experiences and cancers suggests is that the contrast between conscious and unconscious mental states is altogether confused, or that Dretske wants to reject it. If so, then the very notion of state consciousness, unlike the notion of creature consciousness, would be confused. But this is unnecessarily strong and would be inconsistent with the rest of the chapter.[5] As I understand it, on the weaker reading of the above passage, what Dretske's comparison between cancers and experiences suggests is not that state consciousness is a confused notion or that only creature consciousness makes sense, but rather that state consciousness is not an intrinsic property of conscious mental states. If state consciousness is not an intrinsic property of a conscious state, then it is a relational property. And this view, as I said above, is part of the HOT theory of conscious states. Dretske's insight that intransitive state consciousness is a relational feature of a conscious state is, therefore, compatible with the HOT theory of conscious states.

As I already said, not all of a creature's states and processes which are necessary conditions for making the creature conscious of things, properties and relations in her environment need be conscious states. This is perfectly consistent with the HOT theory of conscious states, and obviously Dretske agrees with it. According to the HOT theory, for a creature's state to be conscious, the creature whose state it is must form a higher-order conceptual representation of that state. A creature's state will be conscious if the creature is conceptually aware of it. This implies that for a creature's state to be conscious, the creature must possess some conceptual way of metarepresenting her own states (as representations): she must have the concept of a representation - the concept of a sensory experience, the concepts of the propositional attitudes, or both. She must be able to think of herself that she is having experiences and that she is having beliefs, desires and so forth.

Dretske thinks that the results of developmental psychological studies of the human metarepresentational capacity (in the "theory of mind" paradigm to which I alluded above) constitute a "decisive" objection against the HOT theory. As, e.g., the false belief task shows, not until age 3 do human children give evidence that they possess the concept of belief: not until 3 can they attribute to somebody else a belief about a state of affairs which differs from their own belief. Not until they are 3 years old, it seems, can children metarepresent their own representations or those of other people. How, Dretske asks, could such children have the higher-order thought that they are experiencing such-and-such or believing so-and-so if they possess neither the concept of experience nor the concept of belief? It would be odd, Dretske concludes, to argue that the experiences of children who do not yet possess the concept of representation are not conscious. Let me quote Dretske's objection in full (ibid.: 110-11):

 

the question is not whether a two-year-old knows what a six-year-old knows (about its own experiences), but whether the experiences of a two-year-old and a six-year-old are, as a result of this fact, fundamentally different - the one being conscious, the other not. If that is a consequence of a HOT theory, it strikes me as very close to a reductio (it would be a reductio if we knew - instead of merely having strong intuitions - that their experience was not fundamentally different). If two-year-olds are as perceptually conscious of external events and objects as their older playmates, if they see, hear, and smell the same things (as HOT theory acknowledges to be the case), why should the child's ignorance of the fact that it sees, smells, and hears things render its experience of them unconscious? What is the point of insisting that because they know less about their thoughts and experiences, their thoughts and experiences are different? Why not just say what I just said: that two-year-olds know less about their experience of the world but, barring other deficits (poor eyesight, deafness, injury, etc.), their experiences are pretty much the same as ours? That is what we say about their diseases. Why not about their experiences? Why collapse the distinction between S's awareness of X and the X of which S is aware in this place, but nowhere else?

 

Possibly, Rosenthal might want to insist that there is nothing it is like for a creature who cannot be conceptually conscious of her own experiences, on the grounds that if one of her experiences is not a conscious state, then there will be nothing it is like for her to enjoy that experience. This is a strong HOT theory view according to which state consciousness of a sensory state simply coincides with there being something it is like for one to be in the state. This view, however, is not forced upon the HOT theorist who accepts the intermediate notion of a state of consciousness. Such a theorist may embrace Dretske's claim that a two-year-old and an adult may enjoy the same experiences of external events and objects - they may be in states having the same non-conceptual content - without conceding that the two-year-old's experiences are ipso facto conscious. He may simply accept Dretske's suggestion that two-year-olds' experiences of things are basically the same as adults' experiences in so far as experiences depend on phylogenetically based abilities. But, as Dretske says, two-year-olds know less than adults about their experiences. The HOT theorist can, I think, gladly admit that intransitive state consciousness of experiences is precisely what Dretske calls knowledge about experiences. As I said above, phenomenal consciousness in Block's sense is what arises in creature consciousness as a result of a creature's sensory experiences - it is a property of one of the creature's states of consciousness, not necessarily of a conscious state. Intransitive state consciousness arises from the relation between a creature's mental states and the creature's higher-order thoughts about those lower-order mental states. Again, on the HOT theory, it is a mistake to think of intransitive state consciousness as an intrinsic property of mental states. Rather, intransitive state consciousness - whether of experiences or of beliefs - is relational: it consists in the relation between the state and some higher-order thought. This allows the amended HOT theorist, I believe, to accommodate Dretske's claim that the difference between a conscious and an unconscious experience is "not a difference in the experiences. The difference resides in what is known about them".

On the amended HOT theory of consciousness which I recommend, a creature's state will be conscious if the creature is conscious of it. A creature will be conscious of one of her sensory experiences if she forms a higher-order conceptual representation of her lower-order sensory state. Since, however, what it is like to enjoy an experience is a property of a state of consciousness, and since a state of consciousness need not be a conscious state in the HOT theory sense, it follows that there may be nothing in what it is like to enjoy an experience which distinguishes the experience of a creature who can metarepresent her experience from the experience of a creature who cannot.

 

 

 

 

References

 

Astington, J.W., P.L. Harris & D.R. Olson (eds.) (1988) Developing Theories of Mind, Cambridge: Cambridge University Press.

 

Baron-Cohen, S., H. Tager-Flusberg & D.J. Cohen (eds.) (1993) Understanding Other Minds, Oxford: Oxford University Press.

 

Block, N. (1990) "Consciousness and Accessibility", Behavioral and Brain Sciences, 13, 4, 596-98.

 

Block, N. (1994) "Consciousness", in S. Guttenplan (ed.) A Companion to the Philosophy of Mind, Oxford: Blackwell.

 

Chomsky, N. (1990) "Accessibility 'in principle'", Behavioral and Brain Sciences, 13, 4, 600-601.

Davies, M. & G.W. Humphreys (eds.) (1993) Consciousness, Oxford: Blackwell.

 

Dennett, D.C. (1994) "Dennett, Daniel, C.", in S. Guttenplan (ed.) A Companion to the Philosophy of Mind, Oxford: Blackwell.

 

Dretske, F. (1969) Seeing and Knowing, Chicago: University of Chicago Press.

 

Dretske, F. (1979) "Simple Seeing", in D.F. Gustafson & B.L. Tapscott (eds.) Body, Mind, and Method, Dordrecht: Reidel.

 

Dretske, F. (1981) Knowledge and the Flow of Information, Cambridge, Mass.: MIT Press.

 

Dretske, F. (1993) "Conscious Experience", Mind, 102, 263-83.

 

Dretske, F. (1995) Naturalizing the Mind, Cambridge, Mass.: MIT Press.

 

Marcel, A. (1983) "Conscious and Unconscious Perception: Experiments on Visual Masking and Word Recognition", Cognitive Psychology, 15, 197-237.

 

Nagel, T. (1974) "What is it like to be a bat?", in D. Rosenthal (ed.)(1991) The Nature of Mind, Oxford: Oxford University Press.

 

Rosenthal, D. (1986) "Two Concepts of Consciousness", in D. Rosenthal (ed.)(1991) The Nature of Mind, Oxford: Oxford University Press.

 

Rosenthal, D. (1990) "A Theory of Consciousness", ZIF Report No. 40/1990.

 

Rosenthal, D. (1993a) "Thinking that One Thinks", in M. Davies and G.W. Humphreys (eds.) Consciousness, Oxford: Blackwell.

 

Rosenthal, D. (1993b) "State Consciousness and Transitive Consciousness", Consciousness and Cognition, 2, 355-63.

 

Rosenthal, D. (1994) "What Makes Mental States Conscious", mimeo.

 

Searle, J.R. (1990) "Consciousness, Explanatory Inversion, and Cognitive Science", Behavioral and Brain Sciences, 13, 4, 585-95.

 

Searle, J.R. (1992) The Rediscovery of the Mind, Cambridge, Mass.: MIT Press.

 

 

Whiten, A. (ed.) (1991) Natural Theories of Mind, Oxford: Blackwell.

 



[1] I am grateful to David Rosenthal for many informative discussions about the topic of this paper in Montreal, to Claude Panaccio for his illuminating comment on the paper as it was delivered at the Conference and to Ned Block for detailed and clarifying comments on this paper.

[2] This is precisely what Claude Panaccio, in his lucid commentary on the paper I delivered at the Conference, exhorted me to do.

[3] I don't mean to prejudge the issue of whether, as some philosophers argue, there is something it is like to entertain thoughts or propositional attitudes. I assume that sensory properties of a mental state are paradigmatically responsible for the fact that there is something it is like to be in a state. But, as will appear later in the paper, I don't preclude that there is something it is like to have desires, e.g., having an urge to do something.

[4] Again, Rosenthal would probably disagree with me on this point.

[5] In a previous unpublished but circulated version of ch. IV, Dretske did, however, hold the view that creature consciousness is the only notion of consciousness which we need and that we can do without the notion of state consciousness. In the published version, he dropped this strong view.