Mens Sana Monographs

A Monograph Series Devoted to the Understanding of Medicine, Mental Health, Mind, Man and Their Matrix

The Embodied Embedded Character of System 1 Processing

Abstract

In the last thirty years, a relatively large group of cognitive scientists has begun characterising the mind in terms of two distinct, relatively autonomous systems. Dual Process Theories were developed to account for paradoxes in the empirical results of studies, mainly on reasoning. Such theories generally agree that System 1 is rapid, automatic, parallel, and heuristic-based, while System 2 is slow, capacity-demanding, sequential, and related to consciousness. While System 2 can still be reasonably well understood from a traditional cognitivist approach, I will argue that System 1 processing must be comprehended within an Embodied Embedded approach to Cognition.

Keywords: Artificial intelligence, Cognitive science, Dual process theories, Embodied embedded cognition

Introduction

In his book, The Mind’s New Science, Howard Gardner (1985) describes five key features of traditional cognitive science: (a) representation; (b) computation; (c) de-emphasis on affect, context, culture, and history; (d) belief in interdisciplinary studies; and (e) roots in classical philosophical problems.

In order to explain human action and thought, cognitivists posit the level of representations, where entities such as symbols, rules, images, and schemas are related to one another, joined, or transformed. The computer serves as an “existence-proof”: if the machine can reason, have goals, revise behaviour, and transform information, then human beings can be characterised in the same terms. Cognitive scientists do not deny the importance of affect, context, culture, and history for human behaviour; however, for practical purposes, they tend to exclude these factors or study processes where they seldom interfere. Since the beginning, efforts in cognitive science have been interdisciplinary: linguists, psychologists, computer scientists, electrical engineers, anthropologists, and philosophers all participated in forming the science, and continue to make it grow. Gardner (1985) believes cognitive science is rooted in philosophical questions, so that even those who are not aware of it are in fact on a quest to unravel long-standing epistemological mysteries.

Ironically, during the so-called “decade of the brain,” Andy Clark (1997) proposed that cognition should be viewed from a new perspective: one that considered how cognition is embodied and embedded in its environment. This perspective (which tends to decentralise the brain’s powers) was thus named the Embodied Embedded approach to Cognition (EEC). While features ‘d’ and ‘e’ of traditional cognitive science remain in EEC, features ‘a’, ‘b’, and ‘c’ came under attack.

Although proponents of Dual Process Theories come from the traditional approach to cognition, I will consider the possibility that such theories could be divided between the two approaches so that each system is more adequately addressed.

Dual process theories

In the last thirty years, researchers in cognition have begun to characterise the mind in terms of two separate systems, starting with Fodor’s (1983) distinction between input modules and higher cognition. Much has changed since Fodor’s description; in the last eleven years, cognitive scientists working mostly on reasoning and decision-making have adopted the terms “System 1” and “System 2” processing to characterise cognition. These terms were proposed by Stanovich and West (2000) as a neutral vocabulary across Dual Process Theories. Evans (2008) argues that many different groups of scientists (e.g. Chaiken, 1980; Epstein, 1994; Evans and Over, 1996; Fodor, 1983; Gigerenzer, 2007; Kahneman, 2011; Sloman, 1996; Stanovich, 2004; Toates, 2006; Wilson, 2002) have been describing the systems through their own research, and while there is a common essence in the theories, there are also incompatible claims. In the future, it may be possible to sustain a unified Dual Process Theory; however, more empirical and conceptual investigation is needed to formalise such a theory. Because of this problem, this article will focus on the Dual Process Theories of Stanovich (2004), Evans and Over (1996) and Evans (2003), and use some updates mentioned in Evans (2008).

As described by Stanovich (2004) and Evans (2003), System 1 refers not to one system but to a set of subsystems that operate autonomously, responding to their own triggering stimuli, and are not under the control of System 2 processing. The system is composed mostly of innately programmed behaviour, but newly learned behaviour can become part of its procedures as well. It is characterised as rapid, automatic, parallel, heuristic-based, relatively independent of computational power, and mostly domain-specific. It is automatic because it does not depend on attention and its processes are not revealed to consciousness; processing in parallel means that it executes multiple operations simultaneously; and heuristic processes are those that are quick but imprecise. System 1 responds automatically to holistic, prototypical properties of stimuli and tends to be ballistic, which means that once triggered it can seldom be stopped. System 1 processing is essentially mechanical in nature: mechanisms are fired ballistically – because of a match in their input search – even in contexts where they should not be fired, and run to completion even when the situation has changed and their output is useless. However, they are efficient, reliable, and probably more easily selected in evolution. A reflex would be a classic example of System 1 processing; however, the system is not limited to reflexes; even decision-making can be largely processed by System 1. It is important to notice that the theory claims that many of our higher capacities, including reasoning and decision-making, are influenced by processes that are rather like reflexes.
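
To make this concrete, consider a minimal sketch in Python (all subsystem names, triggering cues, and responses below are invented for illustration, not taken from the cited theories) of System 1 as a set of autonomous subsystems that fire on shallow feature matches and, once triggered, run ballistically to completion:

    # Hypothetical sketch: System 1 as autonomous, trigger-driven subsystems.
    class Subsystem:
        def __init__(self, name, trigger_features, response):
            self.name = name
            self.trigger_features = trigger_features  # shallow cues it matches on
            self.response = response

        def fires_on(self, stimulus_features):
            # Heuristic match: any overlap with the triggering cues is enough.
            # Fast but imprecise -- it may fire in the wrong context.
            return bool(self.trigger_features & stimulus_features)

    SYSTEM_1 = [
        Subsystem("startle", {"loud_noise"}, "flinch"),
        Subsystem("snake_detector", {"coiled_shape", "slither"}, "freeze"),
        Subsystem("face_detector", {"two_eyes", "mouth"}, "orient_to_face"),
    ]

    def system_1_pass(stimulus_features):
        # Conceptually parallel: every subsystem checks the stimulus at once,
        # and each match runs ballistically to completion.
        return [s.response for s in SYSTEM_1 if s.fires_on(stimulus_features)]

    # A coiled garden hose shares surface features with a snake, so the
    # detector fires anyway -- useless output, but quick and cheap.
    print(system_1_pass({"coiled_shape", "green"}))  # -> ['freeze']

The failure mode is the point of the sketch: the match is on shallow features, so a mechanism fires even when the context has changed and its output is useless, exactly as described above.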

In contrast to System 1, and explaining our introspective intuitions about our mind, Stanovich (2004) and Evans (2003) posit System 2. This system is almost the opposite of the first: its processes are slow, capacity-demanding, sequential, and correlated with general intelligence and conscious awareness. Capacity-demanding means that they depend on higher processes such as working memory, especially central executive powers, which are limited to a few items but are refined. Sequential contrasts with parallel: the process works one step at a time. Despite its limited capacity, System 2 permits us to sustain context-free mechanisms of logical thought, inference, abstraction, hypothetical thinking, planning, decision-making, and cognitive control. System 2 would be necessary to construct mental models, in the sense of Johnson-Laird (1983), simulating and predicting possible futures. Furthermore, System 2 has an inhibitory function; with practice, it can override some of System 1’s responses, questioning, regulating, and reformulating them (Stanovich, 2004; Mograbi, 2011). It is also directly influenced by education and development. System 2 also composes a coherent story to explain System 1 activity that might be contradictory in nature. System 1, in turn, does not only issue automatic responses directed at behaviour; it can also deeply influence the processing of System 2.
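
Continuing the same hypothetical sketch, System 2 can be caricatured as a slow, sequential loop over a capacity-limited working memory that may inhibit System 1 responses; the capacity constant and the override rule are stand-ins for illustration, not claims about mechanism:

    # Hypothetical sketch: System 2 as a sequential, capacity-limited override.
    from collections import deque

    WM_CAPACITY = 7  # stand-in inspired by Miller's (1956) "magical number seven"

    def system_2_override(system_1_responses, context):
        working_memory = deque(maxlen=WM_CAPACITY)  # only a few items at a time
        approved = []
        for response in system_1_responses:  # sequential: one step at a time
            working_memory.append((response, context.get("label")))
            if context.get("safe") and response == "freeze":
                continue  # inhibit a response that does not fit the context
            approved.append(response)
        return approved

    print(system_2_override(["freeze"], {"label": "garden", "safe": True}))   # -> []
    print(system_2_override(["freeze"], {"label": "jungle", "safe": False}))  # -> ['freeze']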

Origins of the two systems

Generally, System 1 is thought of as shared with other animals and older in evolution, while System 2 is thought of as uniquely human and hence more recent. That System 2 is more recent in evolution is generally agreed; however, not all researchers agree that it is uniquely human. As Evans (2008) points out, perhaps it is more developed in humans but nonetheless present in other mammals as well. Evidence shows that there is a distinction between stimulus-bound and higher-order processes in many animals (Toates, 2006). Also, primates seem to exhibit higher abilities similar to ours (Mithen, 1996; Whiten, 2000; De Waal, 2006). Nonetheless, it seems clear that System 2 processing in human beings has reached remarkable levels in comparison to other animals and even other hominids. Evans (2003) discusses archaeological evidence showing how modern humans had cognitive advantages for survival over other hominids, relative to language, higher processing, and new forms of thinking.

Evans (2008) warns us that using one word (System 1) to refer to a set of subsystems has consequences. While many of System 1’s subsystems are certainly shared with many animals with primitive structures, there are those that are more recent (e.g. theory of mind) and those that are perhaps uniquely human (e.g. aspects of Chomsky’s universal grammar), so the system does not have a single evolutionary history; some subsystems are older while others are newer, and some bear primitive survival techniques while others incorporate complex ones (e.g. emotional expression recognition).

Stanovich (2004) bases his interpretation of Dual Process Theories on the selfish gene interpretation of Darwinian evolution. According to Dawkins (1976), central to the notion of evolution is the replicator; in the case of biology, the replicators are the genes. It is common to hear that our genes’ interest is the survival of the species, or that they are there to help in reproduction. Dawkins (1976) argues in the opposite direction: we (called by him ‘vehicles’) are actually designed to serve the genes’ main interest, which is self-replication. Our children are not copies of ourselves, and even less will their descendants be; however, they host copies of genes that have been replicating for billions of years. Some genes might even cause vehicle death if doing so serves the goal of gene replication. Human beings are understood as tinkered-together contraptions built by natural selection to be vehicles optimised for gene replication.

The main importance of the selfish gene interpretation for Stanovich’s (2004) Dual Process Theory is that, in its light, we can understand our behaviour not always as serving our established goals, but as attempts to fulfil the genes’ ultimate goal of self-replication (for example, a mother might give up her lifelong plans, dreams, and goals, and even her food and life, to protect her genes’ replication: her son). Stanovich (2004) speaks of three goal groups: one that serves both gene and vehicle, one that serves the gene alone, and one that serves only the vehicle. System 1 is a collection of older evolutionary structures that more directly code the goals of the genes. Their heuristic-based form of processing guarantees that goals are met exactly when needed, even at the cost of firing such processes in irrelevant contexts. Although System 2 might also carry long-leash goals of gene replication, it alone is capable (and specifically in humans) of overriding System 1’s goals, so that those that do not meet the vehicle’s purposes are not fired. A simple example of meeting vehicle goals and not gene goals is the use of condoms. Therefore, System 2, being more flexible, would be responsible for establishing our own (vehicle) goals and overriding conflicting gene goals.

It is important to make clear that this brief explanation covers one group of Dual Process Theories. Also, additions and developments of Dual Process Theories have recently been put forward, with Stanovich (2010) arguing in favour of a tripartite model, dividing System 2 into two subsets, the reflective and the algorithmic; and Carey (2009) adding an intermediate system, called ‘core cognition,’ which shares specific features with both Systems 1 and 2.

Embodied embedded cognition

The traditional approach to cognition, with the brain as a central information-processing machine, generally holds that communication with the environment happens by means of passive input and output interaction with the body and the world. According to Chiel and Beer (1997), it is often understood that the central nervous system uses environmental inputs and its internal state to plan future actions and impose motor commands to execute such plans. Van Dijk et al. (2008) say that traditional cognitive neuroscience believes cognition is accomplished by the brain given environmental inputs; cognition is then something the brain does. Developing this idea, Singh and Singh (2011) explain a contemporary perspective on how mind-brain causality might work, called the lattice of mental operations.

The Embodied Embedded view of Cognition (EEC) is also materialist, perhaps even further from Descartes’ substance dualism, for it holds that there is more to cognition than even the brain: body and world are the other two components. Following the ideas of Clark (1997), Brooks (1999) and Chiel and Beer (1997), EEC argues that the physical composition of body and world and the internal states of organisms are equally responsible for the realisation of behavioural interactions. Chiel and Beer (1997) argue that adaptive behaviour, understood as behaviour that enhances the survival and reproduction of an animal, is the result of continuous interactions between nervous system, body, and environment, each responsible for complex dynamic work. The nervous system is not viewed as a programmer of behaviour; its role is to shape and evoke appropriate patterns of dynamics from the entire system.

Some metaphors have been proposed to replace the lingering hierarchical structure of information-processing ideas in studies of cognition. Chiel and Beer (1997) propose that the nervous system is one of a group of players engaged in jazz improvisation, the final result emerging from their continuing interplay, in contrast to a nervous system that conducts the other players. Van Dijk et al. (2008) argue that the brain is more of a ‘traffic facilitator’ than a powerful boss. This traffic facilitation is achieved by the brain’s monitoring of internal states of the body and the ensuing labelling of the saliency of external objects.

This second metaphor owes its insights to reactive robots (Brooks, 1999). Van Dijk et al. (2008) explain that a reactive robot is composed of behavioural layers instantiating direct input-output coupling. There is no intermediate level between input and output responsible for world modelling, planning, and decision-making, as is common in cognitive models. Given the robot’s bodily possibilities and its history of interactions with the world, the behavioural layers compete for dominance and respond to stimuli in specific ways. Therefore, cognition, action, and world are structured together to form a temporarily stable pattern of behaviour, which is reliable for resolving a specific task. Van Dijk et al. (2008) call such structural couplings a ‘basic interaction cycle.’ Chiel and Beer (1997) argue that work in the field of autonomous robots has progressed only when intelligence arises as an emergent property of an agent continuously interacting with the environment in which it is embedded. Influenced by engineers, Chiel and Beer (1997) believe mechanical systems need to achieve a ‘mind of their own,’ governed by their physical structure and the laws of physics. The role of the nervous system would be to make suggestions to be reconciled with the physics of the system and the task.
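
A minimal sketch of such a reactive architecture, loosely in the spirit of Brooks (1999) (the layers, priorities, and sensor readings are invented for illustration), shows behaviour being selected with no intermediate modelling or planning stage:

    # Hypothetical sketch of a reactive controller: direct input-output layers.
    def avoid_obstacle(sensors):
        if sensors["obstacle_distance"] < 0.3:
            return "turn_away"
        return None

    def seek_light(sensors):
        if sensors["light_level"] > 0.5:
            return "move_toward_light"
        return None

    def wander(sensors):
        return "move_forward"  # default behaviour when nothing else applies

    # Ordered by dominance: the first layer whose condition the current
    # environment satisfies wins, so the world does the selecting.
    LAYERS = [avoid_obstacle, seek_light, wander]

    def control_step(sensors):
        for layer in LAYERS:
            action = layer(sensors)
            if action is not None:
                return action

    print(control_step({"obstacle_distance": 0.1, "light_level": 0.9}))  # turn_away
    print(control_step({"obstacle_distance": 1.0, "light_level": 0.9}))  # move_toward_light

No layer consults a world model or a plan: the coupling of current sensor readings with a fixed behavioural repertoire yields the temporarily stable pattern of behaviour described above.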

According to Van Dijk et al. (2008), the main task of the brain is to assist environment-driven selection from the behavioural repertoire that the creature carries. It facilitates the display of relevant behavioural dispositions while inhibiting others, helping the system by letting the environment and context activate the most adaptive behaviour. Its function is to help the creature ‘readjust on the spot,’ to be more effective at behaviour. In summary, the brain creates a functional cluster that presents the environment with optimal options to ‘choose’ from. This helps the basic interaction cycle act faster and more simply. Instead of replacing the basic interaction cycle, the brain functions as a “traffic facilitator.”

One of the founders of traditional cognitive science warned that behavioural complexity often arises from the complexity of the environment rather than from the complexity of the organism itself. Simon (1969/1996) exemplifies this with an ant walking on a beach. The ant exhibits complex patterns of movement just by walking on the sand. One might mistakenly believe that it is consciously choosing, or representing, all the best paths and hence exhibiting computational complexity; however, the complexity is caused by environmental conditions, with bumps, elevations, peaks, and holes ‘choosing’ the path that the ant is to follow.
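
The ant example can be reproduced in a few lines. In this toy version (the terrain and the walking rule are made up for illustration), the ant’s rule is trivial, and every twist in the resulting path is contributed by the randomly placed bumps:

    # Toy version of Simon's ant: a trivial rule plus complex terrain.
    import random

    random.seed(1)
    obstacles = {(x, random.randint(0, 9)) for x in range(1, 20)}  # bumps in the sand

    def walk(start, goal_x):
        path, (x, y) = [start], start
        while x < goal_x:
            step = (x + 1, y)  # the ant's whole "plan": keep heading east
            if step in obstacles:  # the terrain, not the ant, forces the detour
                step = (x, y + 1) if (x, y + 1) not in obstacles else (x, y - 1)
            x, y = step
            path.append(step)
        return path

    # The winding path is read off the environment, not computed by the agent.
    print(walk((0, 5), 20))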

Haselager et al. (2008) speculate that an organism’s adaptive behaviour fits so naturally with the world because of an environmental adequacy, which they call the user-friendly environment. The assumption is that the personal environment in which a creature is embedded is not formed independently of the creature’s own behavioural and evolutionary history. Organisms, in contrast to traditional robots, do not wake up in completely unfamiliar worlds. Each creature ‘makes a living’ based on its sensory and behavioural capacities. An agent’s natural conduct tends to match environmental structure in ways that turn out to be reliable for the agent. This intimate fit between an organism and the environment it evolved in ensures that actions will generally prove to be adaptive.

Van Dijk et al. (2008) distinguish two forms of processing: autopilot mode and deep thought. Having a user-friendly environment seems to enable the organism to navigate in ‘autopilot’ mode, so that behaviour flows naturally out of the organism’s interactions with the world. During autopilot, the environment selects appropriate behaviours from the repertoire without the need for internal computations on representations and action plans. In contrast, deep thought mode is more careful and slower, requires operations on representations, and is able to make plans for action. In practice, the two modes are not so rigidly separated: deep thought processing might, for example, elicit autopilot modes for action.

The embodied embedded character of System 1 processing

Van Dijk et al.’s (2008) distinction between an autopilot mode and deep thought bears a reasonable conceptual resemblance to System 1 and System 2 processing, where the first underlies mostly autopilot behaviour while the second is mostly responsible for deep thought. However, we must be careful not to assume that System 1 is engaged only in lower-level processes and that System 2 is responsible for the higher ones. There is general agreement, based on reasoning studies, that many higher processes such as thought and logical inference can be influenced by the automatic responses of System 1. Also, the view that System 1 is older in evolution is consistent with the EEC view that behavioural relations with the environment are older than the brain itself, as explained in Van Dijk et al. (2008).

Considering the first three key features of traditional cognitive science as described by Gardner (1985) (representation; computation; de-emphasis on affect, context, culture, and history), it seems System 2 can still be relatively well comprehended in terms of that science, whose efforts have made possible powerful System 2 machines that are more skilled at calculation and analytical operations than humans will ever be.

Representations are a key feature of deep thought. Based on them, we can imagine places we went to years ago; not only that, we can draw on mental maps at will, rotate represented images, and recall useful information such as number patterns. Computation is certainly linked to deep thought. Alan Turing invented his abstract machine precisely by mimicking the steps he took when thinking and reasoning. System 2 processes are slow and serial, and describing them by computational rules has made possible victories of machines over brains in chess. Although affect, context, culture, and history are inevitably essential for any human, considering these factors (except the first) is less important for System 2 than for System 1. How we process information in deep thought has little to do with who or where we are. All humans seem to face relatively the same constraints on such processes in all areas of activity. We can enhance these capacities through education, but there still are universal standards. An example is given by Miller (1956) and his studies of how human working memory can deal with, on average, seven items. Thus, context and culture do not seem to be influential variables when studying System 2. Affect is still a key element and has been shown to interfere with cognitive processes in the work of Damasio (1994). Recent proposals on how emotion might be related to cognition include Pereira Jr. and Almada (2011), Pereira Jr. and Furlan (2010), Pereira Jr., Pereira and Furlan (2011) and Vaillant (2011).

It is clear that traditional cognitivism does not understand the brain as completely isolated from the world or the body; the input and output mechanisms of its models guarantee such interaction. Although this degree of interaction is good enough for creating models of System 2 or deep thought processes, it is still too scarce for the levels of interaction involved in System 1 processing.

One major mistake is thinking that System 1 works exactly the same way as System 2, only unconsciously and automatically. On such a view, the first would simply process, unconsciously and faster, the exact same steps taken in System 2 processing. This view comes from traditional cognitivism and seems to be the cause of the failure of artificial intelligence creations to model System 1 processes and the resulting adaptive behaviour. Connectionist attempts to solve this problem were not greatly successful either; parallel processing seems more fitting, but is perhaps more complex than it needs to be. That is because such models still use the brain as the only mechanism for cognition, while the complexity could be coming from the environment. In fact, all three of the first key features mentioned by Gardner (1985) become extremely problematic when trying to model System 1 processes.

Representations are postulated as an intermediate level between perception and action. World input is analysed in terms of symbol manipulations, which represent the environment and respond with an adequate output. However, it seems that in its effort to oppose behaviourism, cognitive science got too carried away with using representations to explain behaviour, so that it resorts to them even when they are not needed or useful. The abusive use of representations led to the well-known ‘frame problem’: how is one to use exactly the relevant represented information for a given task? Do we consider all the possibilities before making a fast decision? Stanovich (2004) explains that System 1 processes are fast and do not waste central processing capacity. They fire quickly because there are few stimuli that they are built to respond to. The role they take is fixed; it is not determined at the moment of usage. They are focussed on running to completion instead of deciding whether they would be useful in a given situation. Of course, embodied embedded mechanisms need to be coded in humans; however, an abstract level of representation and computation is unnecessary: the world activates mechanisms that depend upon the responses of body and brain. Also, this is not a complete return to behaviourism, since what needs to be modelled is not behavioural responses but a dynamic system of interplay between body, brain, and world: cognition as fine-tuning, not only behaviour.

Stanovich (2004) argues that certain situations in the world demand a quick response, even at the risk of less than complete processing. Haselager (2004) believes the frame problem remains unsolved because of this abusive use of the representational method. Since we need not think about all the knowledge we have, nor take the steps characteristic of deep thought, it seems common-sense behaviour does not depend on such processes. System 1 processes seem to work differently.

Computation is another key feature of traditional cognitive science that does not work well for System 1 modelling. Cognitivists assume that central control systems, based on information from input systems, form beliefs about how the world is, and then select from the entire repertoire of actions those that should reach a certain goal. Haselager et al. (2008) argue that both stages can run into the problem of computational intractability. Efficiently updating the whole web of beliefs seems computationally impracticable for brains with finite computational resources. As Haselager et al. (2008) illustrate, exhaustively searching the web of possible propositions to find which truth assignment (true or false) is supported by the observations at hand is impossible because of the number of possible beliefs and truth assignments, and the time it would take for the search to be processed.
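
The combinatorics can be made explicit. The sketch below implements the deliberately naive exhaustive strategy (the toy beliefs and the consistency test are invented for illustration): checking every truth assignment over n beliefs costs 2^n tests, which quickly outruns any finite resource:

    # The exhaustive strategy: enumerate all 2**n truth assignments.
    from itertools import product

    def consistent_assignments(beliefs, fits_observations):
        n = len(beliefs)
        return [dict(zip(beliefs, values))
                for values in product([True, False], repeat=n)
                if fits_observations(dict(zip(beliefs, values)))]

    beliefs = ["rain", "sprinkler", "wet_grass"]

    def fits(a):
        # Toy consistency test: the grass is wet exactly when something wets it.
        return a["wet_grass"] == (a["rain"] or a["sprinkler"])

    print(len(consistent_assignments(beliefs, fits)))  # 8 assignments scanned, 4 survive

    # Scaling: even at a generous billion tests per second, the search explodes.
    for n in (10, 50, 100):
        print(f"{n} beliefs -> {2**n:.3e} assignments, ~{2**n / 1e9:.3e} s to scan")
    # 100 beliefs already needs ~4e13 years, far longer than the age of the universe.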

Gardner’s (1985) key feature ‘c’ of traditional cognitive science was the de-emphasis on affect, context, culture, and history. Perhaps the most problematic part of this feature is the de-emphasis on context. System 1 fires irrespective of which context would be the correct one, yet it is context that activates System 1 firing. This might sound contradictory, but only because System 1 is not precise. It might fire “believing” itself to be in the correct context when it really is not. It works by trial and error, so it might fire in incorrect contexts; nevertheless, it is still context that activates its processes. Since it cannot know all the characteristics of contexts, it fires on certain features of contexts. The problem is that those same features might be present in other contexts where it should not fire. Its processes are context-dependent, but that does not mean they will always succeed in identifying the correct contexts in which they should act; or, to put it in terms of the embedded perspective, contexts will not always activate the most adaptive System 1 processes.

How central context is for activating System 1 responses is made extremely clear by the so-called framing effects. Stanovich (2004) notes that an extensive line of work in reasoning studies (especially Tversky and Kahneman, 1981) shows that many people do not seem to have any such thing as a stable, well-ordered set of preferences (which is also in line with the problem of the non-computability of all beliefs). People’s choices can be altered by irrelevant changes in how the possible alternatives are presented. For example, depending on how the question is asked, they may choose to save 200 lives out of 600, yet refuse an option in which 400 die, even though that option too saves 200. This shows how context ‘chooses’ which System 1 processes should be fired. As Van Dijk et al. (2008) explain, a creature carries its set of potential behaviours with it across contexts, and when a context fits the repertoire, behaviour is fired. Choosing to save 200 lives might be done by automatic processes that tell us to do good things, while not letting 400 die would be influenced by processes that prevent us from doing bad things. When the problem is presented in a complex format, we are not aware that the result is the same, and so such automatic influences prevail.
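
A few lines suffice to verify that the two frames of the classic problem from Tversky and Kahneman (1981) described above are extensionally identical (the function names below are mine), so any reversal of preference must come from the framing itself:

    # Check that the "gain" and "loss" frames describe the same outcome.
    TOTAL = 600

    def gain_frame(saved):
        return {"saved": saved, "die": TOTAL - saved}

    def loss_frame(die):
        return {"saved": TOTAL - die, "die": die}

    assert gain_frame(saved=200) == loss_frame(die=400)
    print("'Save 200' and 'let 400 die' are one outcome:", gain_frame(200))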

Representation, computation, and de-emphasis on context are all extremely problematic when it comes to System 1 modelling. Stanovich (2004) has identified that System 1 processes are inconsistent with the traditional effort of artificial intelligence. He says the differences between System 1 and System 2 processing are consistent with a long-standing irony in the history of artificial intelligence: tasks easy for humans (common sense) are hard for computers, while tasks hard for humans (logic) are easy for computers. The differentiation of systems proposed by Dual Process Theories easily accounts for this paradox. Computers do not have System 1’s automatic mechanisms for specific tasks, shaped in humans by evolution over thousands of years; for example, they have no built-in heuristics for automatic face recognition. As for humans, System 2 processing is a recent artefact of selection, limited and slow; hence we cannot achieve the number of calculations per second that the computer can.

However, it is only in Van Dijk et al. (2008) that we find some insight into how System 1 should be studied. The workings of a basic interaction cycle may be essential for comprehending System 1 function, for it is world and body characteristics that primarily activate, at a given moment, specific System 1 subsystems. During autopilot behaviour, the environment selects appropriate System 1 processes without any internally computed behavioural plan. Brain processes underlying System 1 seem to work more as a traffic facilitator than by computation on representations. Of course, since traditional cognitive science methods are not adequate, other methods need to be considered; yet, to be considered, they need to be detailed and efficient. In the articles “Can There Be Such a Thing as Embodied Embedded Cognitive Neuroscience?” (Van Dijk et al., 2008) and “A lazy brain? Embodied embedded cognition and cognitive neuroscience” (Haselager et al., 2008), some methods are proposed, which need to be put into practice to see whether they account for System 1 processing. In theoretical terms, at any rate, EEC seems the more adequate framework for understanding the workings of such a system.

Concluding Remarks

[Figure 1: Flow chart of the paper]

Cognitive science has been modelling System 2 processes that are effective at simulating deep thought behaviour. However, System 1 processes have been difficult to model from the traditional approach. Embodied Embedded Cognitive Neuroscience might have just the methods necessary for understanding System 1 processes. In conclusion, it seems the two traditions can be complementary, and that might be the key to a complete understanding of human behaviour and brain function. For the near future, theoretical and empirical work is needed to unite Dual Process Theories into a single framework. Empirical applications of EEC to System 1 studies are also needed. For a unified theory of cognition, it would be necessary to understand how System 1 and System 2 processes relate, the first being embodied and embedded and the second working with computations over representations. A final question is how these two systems relate to consciousness; some new insights on consciousness can be found in Mens Sana Monographs’ 2011 Theme Monograph: Brain, Mind, and Consciousness: An International Interdisciplinary Perspective [Online at: http://msmonographs.org/showBackIssue.asp?issn=0973-1229;year=2011;volume=9;issue=1;month=January-December]

Take home message

Both the traditional cognitive approach and the Embodied Embedded approach to Cognition have to be taken seriously in order to achieve a full comprehension of human processing skills.

While traditional cognitive views have done well modelling System 2 processing, the new Embodied Embedded approach may be a feasible solution to the long-standing problems in modelling System 1 processing.

Questions that this Paper Raises

  1. Can the traditional cognitivist and Embodied Embedded approaches be combined to better explain cognition?

  2. Does our brain run two or even more processing systems rather than one?

  3. In what ways can Embodied Embedded Cognition be applied to better suit System 1 comprehension?

  4. How should these two different processing systems be implemented in a machine?

  5. Do Dual Process Theories have the power to combine the two apparently conflicting approaches to human cognition?

About the Author


Samuel de Castro Bellini-Leite graduated in Psychology at the Centre for Higher Education of Juiz de Fora. He attended courses in Philosophy at the Federal University of Juiz de Fora (UFJF) and in Neuropsychology at the Federal University of Minas Gerais (UFMG). Currently, he is in the Master’s Degree Programme in Philosophy at the State University of São Paulo (UNESP). He is a member of the Cognitive Studies Academic Group (GAEC), and his research project is linked to the thematic project “Systemics, Self-Organization and Information” at the University of Campinas (UNICAMP). His work focusses mainly on Philosophy of Mind, Cognitive Psychology, and the Philosophy of Cognitive Science and Artificial Intelligence.

Acknowledgments

Dr. Alfredo Pereira Jr., and funding from “Coordenação de Aperfeiçoamento de Pessoal de Nível Superior” (CAPES).

Footnotes

Conflict of interest: None Declared.

Declaration

This is my original unpublished piece, not submitted for publication elsewhere.

CITATION: Bellini-Leite SDC. The Embodied Embedded Character of System 1 Processing. Mens Sana Monogr 2013;11:239-52.

Peer reviewer for this paper: Donelson Dulany PhD

References

  1. Brooks R. Cambrian Intelligence. Cambridge: MIT Press; 1999.
  2. Carey S. The Origin of Concepts. Oxford: Oxford University Press; 2009.
  3. Chaiken S. Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J Pers Soc Psychol. 1980;39:752–66.
  4. Chiel H, Beer R. The brain has a body: Adaptive behavior emerges from interactions of nervous system, body and environment. Trends Neurosci. 1997;20:553–7. doi: 10.1016/s0166-2236(97)01149-1.
  5. Clark A. Being There: Putting brain, body and world together again. Cambridge: MIT Press; 1997.
  6. Damasio A. Descartes’ Error: Emotion, reason, and the human brain. New York: Avon Books; 1994.
  7. Dawkins R. The Selfish Gene. Oxford: Oxford University Press; 1976.
  8. De Waal F. Primates and Philosophers: How Morality Evolved. Princeton: Princeton University Press; 2006.
  9. Epstein S. Integration of the cognitive and psychodynamic unconscious. Am Psychol. 1994;49:709–24. doi: 10.1037//0003-066x.49.8.709.
  10. Evans J. In two minds: Dual-process accounts of reasoning. Trends Cogn Sci. 2003;7:454–9. doi: 10.1016/j.tics.2003.08.012.
  11. Evans J. Dual-processing accounts of reasoning, judgment, and social cognition. Annu Rev Psychol. 2008;59:255–78. doi: 10.1146/annurev.psych.59.103006.093629.
  12. Evans J, Over D. Rationality and Reasoning. East Sussex: Psychology Press; 1996.
  13. Fodor J. The Modularity of Mind. Cambridge: MIT Press; 1983.
  14. Gardner H. The Mind’s New Science: A history of the cognitive revolution. New York: Basic Books; 1985.
  15. Gigerenzer G. Gut Feelings: The intelligence of the unconscious. New York: Viking Press; 2007.
  16. Haselager WF. O mal estar do representacionismo: Sete dores de cabeça da Ciência Cognitiva (The indisposition of representationalism: Seven headaches of cognitive science). In: Ferreira A, Gonzalez ME, Coelho JG, editors. Encontros com as Ciências Cognitivas. Vol. 4. São Paulo: Coleção Estudos Cognitivos; 2004. pp. 105–20.
  17. Haselager WF, van Dijk J, van Rooij I. A lazy brain? Embodied embedded cognition and cognitive neuroscience. In: Calvo Garzon F, Gomila A, editors. Handbook of Embodied Cognitive Science. Oxford: Elsevier; 2008. pp. 273–90.
  18. Johnson-Laird P. Mental Models: Toward a Cognitive Science of Language, Inference and Consciousness. Cambridge: Cambridge University Press; 1983.
  19. Kahneman D. Thinking, Fast and Slow. New York: Farrar, Straus and Giroux; 2011.
  20. Miller G. The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychol Rev. 1956;63:81–97.
  21. Mithen S. The Prehistory of the Mind. London: Thames and Hudson; 1996.
  22. Mograbi GJ. Neural basis of decision-making and assessment: Issues on testability and philosophical relevance. Mens Sana Monogr. 2011;9:251–9. doi: 10.4103/0973-1229.77441.
  23. Pereira A Jr, Almada L. Conceptual spaces and consciousness: Integrating cognitive and affective processes. Int J Mach Conscious. 2011;3:127–43.
  24. Pereira A Jr, Furlan F. Astrocytes and human cognition. Prog Neurobiol. 2010;92:405–20. doi: 10.1016/j.pneurobio.2010.07.001.
  25. Pereira A Jr, Pereira MA, Furlan F. Recent advances in brain physiology and cognitive processing. Mens Sana Monogr. 2011;9:183–92. doi: 10.4103/0973-1229.77434.
  26. Simon H. The Sciences of the Artificial. Cambridge: MIT Press; 1969.
  27. Singh AR, Singh SA. Brain-mind dyad, human experience, the consciousness tetrad and lattice of mental operations: And further, the need to integrate knowledge from diverse disciplines. Mens Sana Monogr. 2011;9:6–41. doi: 10.4103/0973-1229.77412.
  28. Sloman S. The empirical case for two systems of reasoning. Psychol Bull. 1996;119:3–22.
  29. Stanovich K. The Robot’s Rebellion: Finding meaning in the age of Darwin. Chicago: University of Chicago Press; 2004.
  30. Stanovich K. Rationality and the Reflective Mind. Oxford: Oxford University Press; 2010.
  31. Stanovich K, West R. Individual differences in reasoning: Implications for the rationality debate? Behav Brain Sci. 2000;23:645–65. doi: 10.1017/s0140525x00003435.
  32. Toates F. A model of the hierarchy of behavior, cognition and consciousness. Conscious Cogn. 2006;15:75–118. doi: 10.1016/j.concog.2005.04.008.
  33. Tversky A, Kahneman D. The framing of decisions and the psychology of choice. Science. 1981;211:453–8. doi: 10.1126/science.7455683.
  34. Vaillant GE. The neuroendocrine system and stress, emotions, thoughts and feelings. Mens Sana Monogr. 2011;9:113–28. doi: 10.4103/0973-1229.77430.
  35. Van Dijk J, Kerkhofs R, Van Rooij I, Haselager WF. Can there be such a thing as embodied embedded cognitive neuroscience? Theory Psychol. 2008;18:297–316.
  36. Whiten A. Chimpanzee cognition and the question of mental re-representation. In: Sperber D, editor. Metarepresentations. Oxford: Oxford University Press; 2000. pp. 139–67.
  37. Wilson T. Strangers to Ourselves. Cambridge: Belknap; 2002.