Those who ignore philosophy are condemned to repeat it

Those who believe themselves to be exempt from philosophical influence are usually the slaves of some defunct philosopher


(Paul Thagard's adaptation of Santayana's and Keynes's phrases)

Sunday, October 16, 2011

"Folk Psychology: Simulation or Tacit Theory?" Stich and Nichols (1992). In Lycan and Prinz (2008)

"Action mirroring and action understanding: an alternative account". Csibra (in Haggard, Rossetti, and Kawato,

"Why There Still Has to Be a Language of Thought". Fodor (1987)

There is a Spanish translation as an appendix to Psicosemántica, published by Tecnos.

"The Appeal to Tacit Knowledge in Psychological Explanation". Fodor (1968)

"Mental simulation, Tacit Theory, and the Threat of Collapse" Davies and Stone (2001)

DRAFT (to be proofread)
-Firstly, A GENERAL QUESTION: Why is the alternative between theory-theory and simulation (so) important? What is at stake?

            One possible aspect of this question might be that theory-theory involves knowledge in the sense of laws, generalizations, and/or formal rules. Simulation does not require knowledge in this sense.
The relevance of unconscious processes in our behaviour is clear, and many psychologists even point out how they underlie conscious mental states (and cause them, at least to some extent); the problem is whether accounting for those unconscious processes in terms of (tacit) knowledge is justified. It is thought that human beings have a kind of innate unconscious system to produce language; thanks to this system we “know” how to use language, although we are not aware of the system itself. I think that this system is not “knowledge”, because there are different kinds of unconscious processes or unconscious mental phenomena, and not all of them have to be knowledge. In my opinion, a key concept which I have not seen in these papers is memory: tacit knowledge should be something that you have stored in your memory and that may be accessible. However, an innate and unconscious Universal Grammar (Chomsky) is not stored in memory in this way. According to this criterion, the ability to predict/interpret/explain the behaviour of other people could depend partly on tacit knowledge, but also on unconscious capacities not stored in memory (for example, mirror neurons and their operations). As a general rule, things that are innate are not stored in memory, but in other systems (or ways); therefore they are not (tacit) knowledge.
Possibly the notion of explicit representation somehow corresponds to the criterion based on memory, although they differ in some respects (it would be interesting to analyze these differences).
Explicit representation may be another controversial issue:
Does tacit knowledge require explicit representation?
The answer given by Heal is NO. But it is more difficult to see why or how Davies and Stone connect this question to “the mirroring relation between causal structure and derivational structure” (it seems to me that this relation is, at least apparently, similar to the relation between syntax (causal) and semantics (inferential) in Fodor).
What does “explicit representation” mean? (An answer is provided in the article: language-like format, sentence-like representations.)


PERSONAL AND SUBPERSONAL LEVELS
 The final goal of the debate, as far as I know, concerns interpretations/predictions/explanations at the level of folk psychology[A1]. Since folk psychology runs at the personal level, this level turns out to be crucial. On the other hand, while it may be uncontroversial that there are specific relations between personal and subpersonal levels (for example, insofar as it is known that personal mental behaviour is connected to particular neural activity), the point is to figure out the nature of these relations and when they are relevant (autonomy, reduction, etc. -by the way, autonomy is relative, that is, it concerns a particular aspect or description, since there are actual relations-). Therefore, there are also subpersonal levels which can be considered if it is shown that they are relevant with regard to the personal level. Several problems might be found in this respect: the distinction between biological (mainly neurobiological) subpersonal levels, physical-chemical levels, and psychological subpersonal levels (information-processing mechanisms could be just a theory about these levels, but I am not very familiar with this concept) is sometimes not taken into account.

QUESTION:
Although subpersonal levels are always unconscious, can any unconscious mental state be considered personal?







 [A1]So, related to mindreading, to the theory of mind issue, and to social cognition. But also to the computational theory of mind and AI (in Fodor explicitly).

"Beliefs and Subdoxastic States" Stich (1978)

DRAFT (to be proofread)

So far I had thought that, according to functionalism, the key property of intentional mental states was, apart from intentionality itself, their causal role. However, in Stich's paper causality appears connected to subdoxastic states, while the relation between intentional states is mainly conceived in terms of inference[A1]. Moreover, Stich suggests a certain link between inference and consciousness (which implies a link between intentional states, essentially beliefs, and consciousness), whereas in the standard cognitive-scientific view the link traditionally underlined has been between intentional states and unconscious processes (for instance, Fodor 1987).
Although the link between inference and beliefs is in general clear, sometimes the position of the author is rather confusing; for instance, he admits that “inference relation includes other states in addition to beliefs” (514). It is true that he assumes the existence of “specialized and limited” (507) inferential contact between beliefs and subdoxastic states, but on other occasions he emphasizes the opposite view: the two features which distinguish beliefs from subdoxastic states are access to consciousness and inferential integration.
Why is the distinction between beliefs and subdoxastic states important? Apparently the reason, according to Stich, would be the acceptance of intuitive psychology theory (which is coherent with this distinction, unlike the theory posited by Harman). But is it a sufficient reason (or even the only one)? (Stich also suggests the importance of identifying a particular psychological state -500, also 514-515-.)
In general, the article shows that the concept of inference is problematic and possibly ambiguous (see 512 and 513 -perceptual inference, computational inference, etc.-). Stich claims that the use of the concept in this context is an empirical question (513-514), but a matter of definition, I think, is rarely just an empirical question.
Another interesting point is whether beliefs have to have the form of linguistic representations. If this were the case, then linguistic representation would be a criterion for beliefs (516-517).

Additional question: The relation between simulation and inference is not very clear: the reason for neglecting the distinction, the author suggests, could be that “like Harman, many of those concerned with cognitive simulation have been so captivated with the promise of inferential accounts of the mechanisms underlying perception and thought that they have failed to note the rather special and largely isolated nature of the inferential processes between beliefs and subdoxastic states” (517).


ANOTHER QUESTION: THE NON-INFERENTIAL RELATIONS BETWEEN BELIEFS: How are contradictory beliefs, inconsistency, etc. interpreted in this context?





 [A1]It could be worth noting that Fodor posited a kind of nondemonstrative inference: "the inference from like effects to like causes" (Fodor 1968, p. 627).

"Multiple Realizability Revisited: Linking Cognitive and Neural States" Bechtel and Mundale (1999). "Neuroscience and multiple realization: a reply to Bechtel and Mundale" Aizawa (2009). "Levels, individual variation, and massive multiple realization in neurobiology" Aizawa and Gillett (in Bickle, ed., 2009, 539-578)

Thursday, October 6, 2011

Mental representation: An introduction. Jerry Fodor (1987)

 DRAFT (to be proofread)


CONSCIOUS AND UNCONSCIOUS BELIEFS.
When Fodor distinguishes two kinds of mental states he presents “propositional attitudes”, defending the notion of unconscious mental states (1987, 106), and adds: “For example, beliefs and desires, unlike itches and after images, can and often do lead an entirely dispositional existence; and what is entirely dispositional is presumably ipso facto not conscious” (107). While the primary feature of qualitative mental states is awareness, the essential property of intentional mental states, such as beliefs and desires, is aboutness (108), that is, they have intentional content. Cognitive science deals with the latter kind of mental states (in fact, “from the cognitive scientist's point of view, consciousness is an embarrassment, a pathological condition of the mental states that he studies” -106-).

CAUSAL AND INFERENTIAL RELATIONS.
 “The intentional mental states seem to be closely implicated in the causation of behaviour; specifically, of intelligent, higher cognitive behaviour” (Fodor 1987, 108).  Cognitive science shares with commonsense or folk psychology the idea that the behaviour of people is explained by their intentional mental states, basically because they play a causal role with respect to this behaviour.
Moreover, there are also causal relations among beliefs and desires (Fodor 1987, 123).
Fodor introduces inferences when he considers the, perhaps, “hardest” question “that a theory of mind has to face” (123), namely “what it is for a mental state to have intentional content” (ibid.). Fodor suggests that the answer can lie in the connection between the intentional content and the inferential role of a mental state. The examples of this inferential role are logical: “conjunctive belief is inferable from both of its conjuncts while the disjunctive belief is inferable from either of its disjuncts” (124). This connection will turn out to be crucial in Fodor's view since it underlies his main aim: a theory which provides a bridge between causal role and intentional content. Such a bridge will be a sort of “parallelism” (124), due to the fact that inferential relations among symbols can be “mimicked” by their syntactic relations[A1].
[Another question: Is the statement that “Beliefs have their causal roles in virtue of their contents” (Fodor 1987, 111) consistent with the claim that causal powers depend upon syntax (see 125)?]
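The mimicking idea can be made concrete with a toy sketch of my own (the encoding and names are illustrative assumptions, not Fodor's formalism): a rule that inspects only the shape of a symbol, never its meaning, nevertheless reproduces the inferential relation of conjunction elimination.

```python
# Toy illustration of syntax mimicking inference (my own encoding):
# a belief is a nested tuple, and the rule below consults only the
# tuple's syntactic shape, never what the symbols are about.

def eliminate_conjunction(formula):
    """Purely syntactic conjunction elimination: if the symbol has the
    shape ('AND', p, q), return its conjuncts; otherwise return it as is."""
    if isinstance(formula, tuple) and len(formula) == 3 and formula[0] == "AND":
        return [formula[1], formula[2]]
    return [formula]

belief = ("AND", "it_is_raining", "the_streets_are_wet")
print(eliminate_conjunction(belief))
# -> ['it_is_raining', 'the_streets_are_wet']
```

The semantic fact (each conjunct is inferable from the conjunction) is tracked by a purely formal operation on the symbol's structure, which is the "parallelism" at issue.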




 [A1]Thus, a semantic relation, such as deducibility (this is the inferential element), can be mimicked by a syntactic relation, such as derivability (see 125). 

"Motor abstraction: a neuroscientific account of how action goals and intentions are mapped and understood". Vittorio Gallese (2009)

The concept of representation: it is made explicit in a note on page 487. He agrees with Fodor -and with the notion used in standard Cognitive Science- that it is a kind of content, but he distances himself from the symbolic model and comes closer to the view of the current known as "embodied cognition". The prelinguistic nature of these representations is interesting: on the one hand, the symbolic representations proposed by Fodor are also prelinguistic; however, they are conceived following a linguistic paradigm (hence his talk of a "LANGUAGE of thought").

Monday, October 3, 2011

"True Believers: The Intentional Strategy". Daniel Dennett (in Lycan and Prinz 2008)

-A (moderate -see the distinction between the two empirical claims-) defense of the computational model of the mind and of the "language of thought".
-The notions of "isomorphism" and "mirror" still seem obscure to me.
-It would be interesting to review and analyze the concept of representation.
-What does the thermostat example contribute to the debate?
-He rightly distinguishes between two empirical claims:

Sunday, October 2, 2011

“What Might Cognition Be, If Not Computation?” Tim Van Gelder


Probably the significance of this article is due to the fact that it provides a potential answer to the “what else could it be” argument, since the goal of the author is to propose a viable alternative to the computational paradigm in cognitive science (346, 359). According to this alternative, cognitive systems may be dynamical systems[AM1]. One of the most relevant features (if not the most relevant) of this view is that, in contrast with orthodox computational cognitive science, it eliminates representations -more precisely: at least certain kinds of systems are better understood without any reference to representations-.

The core of the argument deals with a device -called the governor- whose purpose is to regulate steam power. This device can be described either in computational terms, that is, in the form of an algorithm involving the manipulation of symbolic representations, or in a non-representational way, which coincides -a good rhetorical effect- with the actual solution to the problem, namely, the actual device built by James Watt.

The relation between certain parts of the device, the arm angle and the engine speed, is a key factor in the machine's operation. According to the author this relation is a particular kind of dependence, which is non-representational. In fact, in order to describe such interaction a more powerful conceptual framework is needed; this framework is the mathematical language of dynamics. The relevant suggestion with regard to our course is that “Cognitive systems may in fact be dynamical systems, and cognition the behaviour of some (non-computational) dynamical system” (358). The crucial point is how to establish “the bridge”, that is, the relevant relationship, between the Watt device and human cognition. This connection is provided by the motivational oscillatory theory (MOT) (361) [it is difficult for me to follow the mathematical explanation], mainly because “In MOT, cognition is not the manipulation of symbols, but rather state-space[A2] evolution in a dynamical system” (362). (MOT is a case within a more general dynamical framework called by some authors “decision field theory”.)
As far as I understand the argument, the key feature is expressed in this sentence: “And it would be a model in which the agent and the choice environment, like the governor and the engine, are tightly interlocked” (360).
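The contrast can be illustrated with a minimal sketch (the equations are my own toy assumptions, not Watt's actual device nor Van Gelder's model): two coupled variables, an arm angle and an engine speed, continuously influence each other and settle into equilibrium, and at no point is any symbolic representation manipulated.

```python
# Toy two-variable dynamical system loosely inspired by the Watt governor
# (illustrative dynamics, not the historical mechanism): the arm tracks
# the current speed, and the throttle opens when the arm sits below the
# target. Behaviour here is state-space evolution, not rule-following.

def step(angle, speed, dt=0.01, target=1.0):
    d_angle = speed - angle      # arm rises/falls toward the current speed
    d_speed = target - angle     # throttle responds to the arm's position
    return angle + dt * d_angle, speed + dt * d_speed

def simulate(steps=20000, angle=0.0, speed=0.0):
    # Simple Euler integration of the coupled equations.
    for _ in range(steps):
        angle, speed = step(angle, speed)
    return angle, speed

angle, speed = simulate()
print(round(angle, 3), round(speed, 3))  # both settle near the target
```

The two variables are "tightly interlocked" in the sense quoted above: each one's rate of change depends on the other's current value, which is what the dynamical description captures and an algorithmic description arguably obscures.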

CONSEQUENCES REGARDING EMBODIED COGNITION
We can call this point “the autonomy question”: To what extent is the brain autonomous or self-contained regarding cognition? This is a significant issue in the Grush reply. Van Gelder[AM3] indicates that, according to the computational view, since “the cognitive system traffics only in symbolic representation, human body and the physical environment can be dropped from consideration; it is possible to study the cognitive system as an autonomous, bodiless, and worldless system whose function is to transform input representations into output representations” (373). Conversely, for the dynamical conception “the cognitive system is not just the encapsulated brain; rather, since the nervous system, body, and the environment are all constantly changing and simultaneously influencing each other, THE TRUE COGNITIVE SYSTEM IS A SINGLE UNIFIED SYSTEM EMBRACING ALL THREE” (373, see also 379).
(This is an expression of the continuous, simultaneous and mutually determining change which characterizes a dynamical system).
Another argument, among others, for the biological plausibility of this model concerns time. The role of time is more explicit and relevant in dynamics than in computational models, and “Timing is critical to a system that operates in a real body and environment” (379).


-Concerning the Cartesian assumptions considered by Grush, Van Gelder presents his notion of cognition as embodied and embedded in contrast to the Cartesian view and in concordance with the anti-Cartesian movement. In this sense, the dynamical account of cognition may be understood as a post-Cartesian approach to the mind and to the nature of the human being (see 380-381).
-Interesting remark: connectionist models are only a particular subcategory of dynamical systems. The kind of dynamical models suggested by the author for the cognitive system are different from connectionist models (see 370, 371).

SOME QUESTIONS
-In these two articles the notion of representation plays a significant role. I find that a potentially problematic aspect of the nature of representation is the relation between representation and symbol. Van Gelder talks of “symbolic representation” (350, 353), in a sense in which it is difficult to distinguish between symbol and representation. Is every representation symbolic? On the other hand, the representational function appears to be a necessary element in the definition of symbol.

-When the author describes the computational model, he recalls that computation implies manipulating symbols, and points out that these symbols “have meaning” (350). Does this mean that the machine (the device or its parts) works with semantic properties?

-According to Van Gelder, if a system is not representational then it is not computational. Is computation possible without representation? On the other hand, the author suggests the possibility that a dynamical system incorporates representations within a non-computational framework (376, 377).

-“The computer does not realize the abstract dynamical model; rather, it simulates it” (369). What is the difference between realization and simulation?

-Finally, what is the strongest argument for the claim that human cognition is a dynamical system?



 [AM1]Dynamical systems are defined “as state-dependent systems whose states are numerical (in the abstract case, these will be numbers, vectors, etc.; in the concrete case, numerically measurable quantities) and whose rule of evolution specifies sequences of such numerical states” (368).

 [A2]The notion of state-space and the concept of a state-dependent system: “A (concrete) state-dependent system is a set of features or aspects of the world which change over time interdependently…” (363).

 [AM3]Another question could be whether the modularity described in pp. 372-373 is correct.




 FURTHER QUESTIONS ABOUT DYNAMICAL SYSTEMS:
The mathematical nature of dynamical systems (“in the concrete case, numerically measurable quantities” -368-) raises some questions:
-Is it possible to account for consciousness by means of a set of equations?
-Is this model compatible with free will?
-Can this model account for the “interface problem” (the relationship between personal and subpersonal levels)?
-Do both the brain and the mind -that is, physical and mental properties- follow (obey) the same equations?
-If the operation of the mind can be expressed in a set of equations, does this entail the existence (and the identification) of psychological laws?

-To what extent can criticism of the crucial example of dynamical research on human decision making (basically MOT) affect the whole theory with regard to cognitive science?
One important point in this sense could be: MOT is characterised in this way: “The framework thus includes variables for the current state of motivation, satiation, preference, and action (movement), and a set of differential equations describe how these variables change over time as a function of the current state of the system” (361).
 How are the variables identified and chosen? And how is a numerical value assigned to each variable? I think that this is a crucial problem, since the model does not provide an answer and it is critical to the final result. Similarly, contemplating an entire cognitive system (not just a particular task, skill, or behaviour) -for example the brain-body-environment system, or merely the mind-brain system- as a dynamical system raises the same questions.








Saturday, October 1, 2011

“In Defense of Some `Cartesian` Assumptions Concerning the Brain and Its Operation”. Rick Grush (2003)

DRAFT (to be proofread)

Important point: to look for an alternative model to the computational theory (in other “machines”)

The author argues against two main theses related to the embodied cognition conception (and to other close approaches in cognitive science), claimed by those whom Grush calls “the radicals” (Haugeland, Brooks, Clark, Van Gelder and Thelen, among others):
-The mind is not in the brain: cognition involves the interaction between brain, body and environment in such a way that the brain cannot be separated from this system [I will dub it “thesis BBE”].
-Cognition does not require representation [I will call it “thesis NR”].

The `Cartesian` assumptions will be the two opposite ideas: the mind has and manipulates representations [perhaps the claim here is too strong], and the mind is autonomous. But at the same time the author rejects standard cognitive science, in particular symbolic computationalism, so: REPRESENTATION WITHOUT SYMBOLS (or without symbolic computation). But what is the meaning of `representation` in this context? The answer is provided by the emulation theory: roughly speaking, the brain has two parallel functions, one as a controller and the other as an emulator; the representations are embodied in the emulator.
“The world is not enough” (86) (cf. Brooks)
Key argument against the view of representation in Van Gelder: p. 65
Brooks: mechanisms interact directly with the world; they don't need a kind of “in-between” construction (that is, representations) (see 68).

I am especially concerned with the role of neuroscience in the enquiry into the mind, and a significant feature of this article is the attention drawn to real biological systems, particularly to the brain, along with some suggestions concerning neuroscience (see 64).
The emulation framework is a kind of information-processing strategy that in fact is implemented in the nervous system: “The main idea is that musculoskeletal emulator can process efferent copies and provide predictions of what the peripheral signal will be before the real peripheral signal is available” (76). Evidence Grush mentions to support his claim is that certain movement errors have been explained (Wolpert, Ghahramani and Jordan, 1995) “on the assumption that during the initial phases of a movement, before proprioceptive feedback is available, the motor centers exploit feedback from an internal model of the musculo-skeletal system, while as time progresses, feedback from the periphery is incorporated into the estimate” (77). This internal model would be the emulator [related also to the simulation theory].
Another piece of empirical evidence: visual imagery (78) and visual perception (see Kosslyn).
PROBLEM: difference between on-line and off-line emulator operation  
PROBLEM: How does the argument about emulator and representation match with the argument about components?
QUESTION: Is the argument based on the plug criterion and the pilot example (80) enough to conclude that “the brain is a notionally self-contained locus of representational efficacy, not something that is essentially dependent upon the environment with which it is interacting…” (81)? How is the fact that the brain contains both the controller and the emulator crucial for the brain being a self-contained system?
The key point, I think, is that the emulator can operate without interaction with the real world.

The argument rests on the emulation framework (74), on which the emulation theory of representation is based, “according to which the brain constructs inner dynamical models, or emulators, of the body and environment which are used in parallel with the body and environment to enhance motor control and perception and to provide faster feedback during motor processes, and can be run off-line to produce imagery and evaluate sensorimotor counterfactuals” (53).
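A minimal sketch of the emulation idea, under simplifying assumptions of my own (a one-dimensional "limb", noiseless dynamics, an emulator that copies the plant's dynamics exactly; none of this is Grush's actual formalism): the emulator receives an efference copy of each motor command and yields a prediction of the peripheral signal before the real feedback is available, and the same internal loop, run without the plant, would yield imagery-like off-line predictions.

```python
# Toy sketch of the emulator/forward-model idea (illustrative only):
# the emulator is an internal copy of the body's dynamics driven by
# efference copies, so its predictions arrive before real feedback.

class Emulator:
    """Internal forward model of a 1-D limb: position += command."""
    def __init__(self):
        self.position = 0.0

    def predict(self, efference_copy):
        self.position += efference_copy
        return self.position

def plant(position, command):
    """The real musculoskeletal system (same toy dynamics)."""
    return position + command

def run(commands):
    emulator, real = Emulator(), 0.0
    predictions, feedback = [], []
    for u in commands:
        predictions.append(emulator.predict(u))  # fast internal prediction
        real = plant(real, u)                    # slower real feedback
        feedback.append(real)
    return predictions, feedback

preds, fb = run([0.1, 0.1, -0.05])
print(preds == fb)  # an accurate emulator anticipates the real feedback
```

The representational claim rides on the emulator standing in for the body: the controller can consult the internal model's states instead of (or before) the world, which is why Grush can grant representation while rejecting symbolic computation.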

The author agrees with the radicals on many points (see 55, 87-88):
-The emphasis on sensorimotor control.

The Watt governor and the Timothy van Gelder interpretation
The Watt governor is a feedback controller. Many feedback controllers work because they form a dynamical system. This leads to a “radical” idea that Grush calls “the strong coupling thesis” (59), according to which “the brain and body/environment are coupled in the way that a controller and plant are coupled in a feedback control scheme -they are in a state of mutual causal influence, and appropriate behaviors are an outcome of this interaction” (59) (closed-loop control and mutual interaction). See “the dependence thesis”.
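The controller/plant coupling can be sketched minimally (the gains and dynamics are my own illustrative assumptions): each component's output is the other's input on every pass, which is the pattern of mutual causal influence that the strong coupling thesis projects onto brain and body/environment.

```python
# Minimal closed-loop feedback control sketch (illustrative values):
# the controller senses the plant's state and the plant responds to
# the controller's command, so each is continuously driven by the other.

def closed_loop(setpoint=1.0, gain=0.5, steps=100):
    state = 0.0                   # plant output (e.g., engine speed)
    for _ in range(steps):
        error = setpoint - state  # controller input comes from the plant
        command = gain * error    # controller output (proportional rule)
        state = state + command   # plant input comes from the controller
    return state

print(round(closed_loop(), 6))  # state converges to the setpoint
```

Note that cutting the loop anywhere (removing either the sensing or the command) destroys the behaviour, which is the intuition behind treating controller and plant as one coupled system.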

ONE PROBLEM: How to identify and individuate (analyse) components? (67)
-Functions: Interface/component argument (Haugeland)
-Activities (Brook)
-Plug criterion (Grush)


I find very interesting the idea that the brain could employ different strategies for different purposes; from this perspective, different models of the mind/body system may each be right or reveal some aspect of its organization. In this sense the goal of the author would be to add the emulation framework to a set of tools consisting of theories such as classical symbol manipulation, connectionism, and dynamical systems theory (88).

Finally, it could be interesting to note the rhetorical or argumentative strategy followed by the author, namely, to accept some premises from those he is going to criticize while leading them to different conclusions: “…Showing how even on their own terms, there is a powerful model of sensorimotor function that underwrites the Cartesian assumptions” (55). Another example is that his concept of representation satisfies the conditions established by Haugeland for being a representation (see 83-84).