Dance
▶ Dancing: A Nonverbal Language for Imagining and Learning

Dance Learning
▶ Neurophysiological Correlates of Learning to Dance

Dancing: A Nonverbal Language for Imagining and Learning

JUDITH LYNNE HANNA
University of Maryland, College Park, MD, USA

Synonyms
Dance; Kinesthetic communication; Performing art

Definition
Dance is human behavior composed of purposeful, intentionally rhythmical, and culturally influenced sequences of communicative nonverbal body movement and stillness in time, space, and with effort. Dance stylizes movements, some from everyday life, with a degree of conventionality or distinctive imaginative symbolization. Each dance genre has its own ▶ aesthetic (standards of appropriateness and competency).

Theoretical Background
Dance can engender visions of alternative possibilities in culture, politics, and the environment. It can also foster creative problem-solving and the acquisition, reinforcement, and assessment of nondance knowledge, emotional involvement, social awareness, and self and group identity (Hanna 2008). Dance is captivating nonverbal communication that involves attention networks, motivation, and reward. Nonverbal communication includes the bodily conveyance of information through gesture and locomotion, proximity, touch, gaze, facial expression, posture, physical appearance, smell, and emotion. Evolutionary biologists note that humans must attend to motion for survival: to distinguish prey and predator, to select a mate, and to anticipate another's actions and respond accordingly. Humans first learn through movement, and movement facilitates learning. Sensory-motor activities form new neural pathways and synaptic connections throughout life, and the merger of body, emotion, and cognition in dance may lead to effective communication, the medium of learning.
While humans alone among species have art experiences without obvious evolutionary payoff, dance engages innate "play" brain modules that allow us to consider hypothetical situations so that we can form plans in advance of difficulties, symbolically confront current problems, and manage stress. Speech refers to the oral/auditory medium that we use to convey the sounds associated with human languages. Language, however, is the method of conveying complex concepts and ideas, representations of information and relationships, and a set of rules for how these may be combined and manipulated with or without recourse to sound (Clegg 2004, p. 8). Multiple possible "languages of thought" play different roles in the life of the mind but nonetheless work together (pp. 1, 200). Dance is a language that bears some similarities to verbal language. For example, both have vocabulary (locomotion and gestures in dance) and grammar (rules for different dance traditions in putting the vocabulary together and justifying how one movement can follow another). And both languages have semantics (meaning). Verbal language strings together sequences of words, and dance strings together sequences of movement. However, dance more often resembles poetry, with its multiple, symbolic, and elusive meanings, than prose. Complex logical structures are more easily communicated in verbal language. Although spoken language can simply be meaningless sounds, and movements can be mere motion, listeners and viewers tend to read meaning into what they hear and see. The vocal, auditory, and static visual channels predominate in verbal language, whereas the motor, moving visual, and kinesthetic channels predominate in dance.

N. Seel (ed.), Encyclopedia of the Sciences of Learning, DOI 10.1007/978-1-4419-1428-6, © Springer Science+Business Media, LLC 2012
Verbal language exists solely in a temporal dimension, whereas dance involves the temporal plus three dimensions in space. Multisensory, dance heightens a perceptual awareness that expands access to clues to the meaning of different emotions, a significant source of human motivation. Emotion may prime some goals and processes while inhibiting others. In dance there is the sight of dancers moving in time and space; the sound of physical movement, breathing, accompanying music and talk; the smell of dancers' physical exertion; the tactile sensation of body parts touching the ground, other body parts, people or props, and the air around the dancers; the proxemic sense of distance between or among dancers, and between dancers and spectators; and the kinesthetic experience (a performer feeling the moving sensual body and a spectator empathizing with the performer's bodily movement and energy). The eyes indicate degrees of attentiveness and arousal, influence attitude change, and regulate interaction. In addition, the eyes define power and status relationships. Effective communication, of course, depends upon the knowledge shared between dancer and audience.

A dance learner may feel stressed by "not getting it" or by receiving negative feedback from teachers and others. Performance anxiety affects novice and professional alike. On the other hand, mastery of dance makes one feel satisfied, confident, and proud. Performance can give a feeling of the "runner's high." Individuals usually find strength in the self-mastery required in learning to dance and feel supported by others in cohesive group dancing. Performers feel accomplishment as they express the sense of doing something and being in control, of achieving what others want to do, try to do, but cannot do well, and the exhilaration of performance. Of course, dance is art and entertainment that diverts performers and audiences alike from stressors (Hanna 2006).
While dancers and their audiences can sense the feel and command of the human body in dance, the mind stirs the imagination, directs movement, and makes sense of feeling. Feeling a particular emotion, a performer may immediately manifest it through dance, and dancing may induce emotion through energetic physical activity or through interaction between or among dancers, or between dancers and spectators. Alternatively, during a performance, a dancer may recall an emotion from earlier personal experience and use the memory as a stimulus to express the emotion symbolically, creating an illusion of the emotion rather than feeling its actual presence (Hanna 1983). A dancer's purpose may be to provide an emotional experience, to conceptualize through movement, or to play with movement itself. In telling stories through dance, troublesome themes, like fear, can be held up to scrutiny, played with, distanced, and made less threatening, and such stories can even move people to social action.

Dance can convey meaning through devices and spheres. The most common device for encoding meaning is metaphor, the expression of one thought, experience, or phenomenon in place of another that it resembles. A common sphere in which devices operate to convey meaning is the whole pattern of performance, emphasizing structure, style, feeling, or drama.

Dancing involves declarative knowledge (including concepts, history, movement vocabulary, and grammar) that may be visualized in choreography. Procedural knowledge, knowing-is-in-the-doing, or embodied knowledge, is attained through multiple sensory perception, especially kinesthesia. This knowledge incorporates motor skills and "muscle memory" (proprioception felt in the body), as well as cognitive skills and strategies that enable the application of patterns in communicating ideas and feelings in dance. People learn by doing, action being the test of comprehension, and imagination the result of the mind blending the old and familiar to make it new in experience.
Tacit knowledge is knowledge that cannot be articulated verbally but may be expressed kinesthetically and emotionally through dance. A difference between declarative knowledge and procedural knowledge is that each likely activates different parts of the brain. Someone can know about a dance form and yet not have the skills for performance. After a cognitive stage in which a description of a procedure is learned, skill learning has an associative stage in which a method for performing the skill is worked out, and finally an autonomous stage in which the skill becomes automatic.

The creativity of making set dances and improvising within a style requires declarative and procedural knowledge (usually tacit) of relational rules for matching movements with appropriate meanings. These rules emphasize digital, analytical, and sequential processing of information. The underlying processes are hidden, rapid, multimodal, nonlinear, and nonverbal, and the dance evolves from experimentation and exploration in the medium itself. Dance-making involves composing movement phrases and eventually long sequences, evaluating, changing, reevaluating, deleting, and adding. By contrast, dance imitation, or dancing someone else's choreography, depends on learning a set pattern involving analogical and spatial abilities. Observation, inferring the mental representations that underlie dance, and storing the representations in memory are required. Imitation is not strict copying but a constructed version, an interpretation through emotional expression of what is imitated. Improvisation refers to extemporaneously creating dance out of what is known.

Understanding dance requires reasoning, an understanding of symbols, the ability to analyze images, and knowing how to organize knowledge. Given a student's familiarity with dance elements, dance may be a means of testing a student's understanding of nondance subject material.
Translating emotions and ideas from one medium to another in a different context, such as thinking metaphorically through a physical embodiment of written or spoken text, requires an understanding of subject matter. This creative process can reveal the knowledge acquired and what further instruction is necessary.

Important Scientific Research and Open Questions
Dance education in the schools has been demonstrated to be an engaging, emotional, and cognitive way of solving problems as it communicates declarative and procedural knowledge through various devices and spheres of embodying the imagination (Hanna 1999). Qualitative research in Africa found that specific dancers perform contrastive movement patterns to identify their distinct biological and social roles. Among the Ubakala Igbo in Nigeria, where women are life-giving mothers and men are life-taking warriors, women dance slowly and effortlessly in circles, whereas men dance rapidly and forcefully in angular lines (Hanna 1987). Evidence of the potency of nonverbal communication comes from psychologist Goldin-Meadow (2002) and her colleagues, who focus on hand gesture, merely one of the dancer's communicative body parts: when produced beside speech, gesture becomes image and analog; however, when called upon to carry the full burden of communication, gesture takes a language-like form using word- and sentence-level structure. Extrapolating from hand gestures to dance promises an exponential impact: dance utilizes a multichanneled system of gestures of various body parts moving in time, space, and with effort, music, and costume. Findings about learning a second or third verbal language seem applicable to learning a nonverbal language such as a dance genre, and even to learning more than one kind of dance. Youngsters who grow up multilingual have more brain plasticity and multitask more easily.
Moreover, learning and knowing a second or third language may use parts of the brain that knowing only one's mother tongue may not. Dancers who have only classical ballet training often have difficulty picking up contemporary movements. Through magnetic resonance imaging technology, neuroscientists are discovering regions of the brain dedicated to perceiving, reacting, remembering, thinking, creating, and judging (Grove et al. 2005). Neuroscientists discovered the communicative potential of dance: areas in the brain that control the hands and gestures overlap and develop together with the areas that control the mouth and speech. The Broca and Wernicke areas, located in the left hemisphere, are associated with verbal language expression and comprehension, abstract symbolic and analytic functions, sequential information processing, and complex patterns of movement. The process of dance-making engages some of the same components in the brain for conceptualization, creativity, and memory as does verbal poetry or prose, but obviously not the same procedural knowledge. Dance is also linked to the right hemisphere, which involves elementary perceptual tasks, nonverbal processing of spatial information, music, and emotional reactivity. However, rigid lateralization of brain function is precluded by the transfer of inputs to each side of the brain over the corpus callosum, the main body of nerve fibers connecting the two hemispheres. A study of the neural basis of the tango (using MRI and positron emission tomography) found an interacting network of brain areas active during the performance of motor sequencing and movement intention. Dance influences the mind, causing positive plastic changes in the brain, reorganizing neural pathways, or the way the brain functions, for young and old alike. Physical activity sparks biological changes that encourage brain cells to bind to one another, which reflects the brain's fundamental ability to adapt to challenges.
In complex motor movement, the brain fires signals along the network of cognitive functioning cells, which solidifies their connections. Extended learning in dance thus impacts how well the brain processes other tasks. Dance has the cognitive demands of remembering steps and executing them, usually in response to music and in coordination with a partner or group in space, and of creating dances. Mirror neurons in the brain are active in someone carrying out a particular dance movement as well as in someone else who watches the same movement. Greater bilateral activations occur when expert dancers view movements that they have been trained to perform compared to other movements. The simulation process of resonance between observed and embodied action could underpin sophisticated mental functions of empathy, sympathetic kinesthesia, and understanding in social interaction. Triangulating knowledge from the arts and humanities, social and behavioral sciences, and cognitive/neurological science elucidates the power of dance in imagining and learning. Findings related to the other arts may apply to dance because it often occurs in combination with music, written text, poetry, and the visual arts of set design and costume.

Cross-References
▶ Aesthetic Learning
▶ Approaches to Learning
▶ Creativity, Problem-Solving and Feeling
▶ Dance Learning: Neurological Correlates of Complex Action
▶ Joyful Learning
▶ Learning Activity
▶ Motor Learning
▶ Play and Learning
▶ Play, Exploration, and Learning
▶ Stress Management

References
Clegg, M. (2004). Evolution of language: Modern approaches to the evolution of speech and language. General Anthropology, 10(2), 1–11.
Goldin-Meadow, S. (2002). Constructing communication by hand. Cognitive Development, 17, 1385–1406.
Grove, R., Stevens, C., & McKechnie, S. (Eds.). (2005). Thinking in four dimensions: Creativity and cognition in contemporary dance. Carlton: Melbourne University Press.
Hanna, J. L. (1983).
The performer-audience connection: Emotion to metaphor in dance and society. Austin: University of Texas Press.
Hanna, J. L. (1987). To dance is human: A theory of nonverbal communication (Rev. ed. of 1979). Chicago: University of Chicago Press.
Hanna, J. L. (1999). Partnering dance and education: Intelligent moves for changing times. Champaign: Human Kinetics.
Hanna, J. L. (2006). Dancing for health: Conquering and preventing stress. Lanham: AltaMira.
Hanna, J. L. (2008). A nonverbal language for imagining and learning: Dance education in K-12 curriculum. Educational Researcher, 37(8), 491–506.

Data Mining
▶ Learning Algorithms

Dealing with Uncertainty
▶ Complex Problem Solving

Deciphering
▶ Reading and Learning

Decision
A choice between two or more alternatives. A central question in the social sciences, and in particular economics, is whether humans make decisions that are optimal (global rationality) or whether they have to aim only for decisions that are "good-enough" (bounded rationality).

Decision Learning
▶ Rapid Response Learning in Amnesia

Declarative
▶ Prospective and Retrospective Learning in Mild Alzheimer's Disease

Declarative Knowledge
Declarative knowledge is knowledge of facts, concepts, events, and objects. It can typically be directly expressed as a proposition, image, or relational structure. This information is stored in long-term memory and organized into schemas that interconnect to shape comprehension and influence semantic interpretations.
Declarative Learning
▶ Fact Learning
▶ Meaningful Learning in Economic Games

Declarative Memory
Declarative memory contains memory for facts such as 5 + 2 = 7 and water expands when it freezes.
Cross-References
▶ Explicit and Procedural-Learning-Based Systems of Perceptual Category Learning

Declarative Pointing
▶ Joint Attention in Humans and Animals

Declarative Showing
▶ Joint Attention in Humans and Animals

Decoding
▶ Reading and Learning

Deductive Learning

TRISTAN CAZENAVE
LAMSADE, Université Paris-Dauphine, Paris, Cedex 16, France

Synonyms
Chunking; Explanation-based generalization; Explanation-based learning

Definition
We can formally define three types of reasoning mechanisms: deduction, induction, and abduction. Let us define the rule "if we are in the morning then the sun rises," with the facts "we are in the morning" and "the sun rises." Deduction consists in asserting the fact "the sun rises" given the rule and the fact "we are in the morning." Induction consists in asserting the rule from the two facts. Abduction consists in guessing the fact "we are in the morning" from the fact "the sun rises" and the rule. A large number of works in the field of machine learning deal with induction. This entry deals not with inductive learning but with learning from deduction.

Deduction is an important area of Artificial Intelligence, and many Artificial Intelligence systems rely on deduction to solve problems. This is particularly true for systems written in Prolog, which lets one write systems using first-order logic: it provides a built-in deduction mechanism, and a program can be written declaratively as a set of facts and rules. Deduction is also used in expert systems, which likewise rely on facts and rules to solve problems.
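The entry's morning/sun example can be sketched in code. This is a minimal propositional illustration of the three mechanisms; the function names and the tuple encoding of a rule as (premise, conclusion) are our own assumptions, not notation from the entry:

```python
# The entry's rule: "if we are in the morning then the sun rises".
RULE = ("morning", "sun_rises")  # (premise, conclusion)

def deduce(rule, facts):
    """Deduction: from the rule and its premise, assert the conclusion."""
    premise, conclusion = rule
    return {conclusion} if premise in facts else set()

def induce(premise_fact, conclusion_fact):
    """Induction: from co-occurring facts, guess a rule linking them."""
    return (premise_fact, conclusion_fact)

def abduce(rule, facts):
    """Abduction: from the rule and its conclusion, guess the premise."""
    premise, conclusion = rule
    return {premise} if conclusion in facts else set()

print(deduce(RULE, {"morning"}))       # {'sun_rises'}
print(induce("morning", "sun_rises"))  # ('morning', 'sun_rises')
print(abduce(RULE, {"sun_rises"}))     # {'morning'}
```

Note that only deduction is truth-preserving here; induction and abduction return guesses that further evidence could overturn, which is the contrast the definition draws.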
Backward chaining starts with a goal expression that the system tries to deduce and finds the rules that conclude on this goal expression; these rules contain sets of conditions that have to be true in order to prove the goal expression, and these conditions become the new goal expressions that the system tries to deduce. Forward chaining, on the contrary, starts from a set of facts and deduces new facts by applying the rules that match the set of facts (a rule matches a set of facts if all its conditions match some facts). The main difference between Prolog and usual expert systems is that Prolog uses backward chaining while expert systems usually use forward chaining. Using first-order logic to represent rules means that the conditions of a rule can contain variables and not only facts. The example rule at the beginning of this entry about the sun rising in the morning does not contain variables. A similar rule that contains a variable could be: "if the variable Hour is greater than 9 and lower than 12 then the sun rises." The goal of deductive learning is to speed up systems that use deduction in first-order logic to solve problems. Deductive learning (Laird et al. 1986; Mitchell et al. 1986; DeJong and Mooney 1986) consists in creating rules that rapidly deduce facts that took time to be deduced by a rule-based system. If the system fires these learned rules directly, it may find solutions faster than the original system. One possible drawback of learning rules this way is that the system can be slower after learning than before learning, because of the time spent matching the learned rules (Minton 1990). To avoid this, an operationality criterion (DeJong and Mooney 1986) can be used to decide which learned rules are kept and which ones are discarded.
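The two chaining strategies described above can be sketched for propositional rules. This is a toy illustration under assumptions of our own (a hand-picked rule set, conditions as Python sets, no variables and no cycle detection, all of which a real Prolog or expert system would handle):

```python
# Rules as (set_of_conditions, conclusion) pairs.
RULES = [
    ({"morning"}, "sun_rises"),
    ({"sun_rises", "sky_clear"}, "light"),
]

def forward_chain(facts, rules):
    """Forward chaining: repeatedly fire every rule whose conditions
    all match the current facts, until no new fact is deduced."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

def backward_chain(goal, facts, rules):
    """Backward chaining: prove the goal by finding a rule that
    concludes it and recursively proving each of its conditions."""
    if goal in facts:
        return True
    for conditions, conclusion in rules:
        if conclusion == goal and all(
            backward_chain(c, facts, rules) for c in conditions
        ):
            return True
    return False

print(forward_chain({"morning", "sky_clear"}, RULES))
print(backward_chain("light", {"morning", "sky_clear"}, RULES))  # True
```

Forward chaining works data-driven from the facts outward, as in typical expert systems; backward chaining works goal-driven from the query back to the facts, as in Prolog.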
Deductive learning is particularly efficient in games that have threats (Cazenave 1998). Knowing the rules of a game, a system can find the set of facts that enables one player to win if he or she plays first in a particular position. It can then generate a rule that deduces that the game can be won in similar cases where the rule matches. Moreover, since the system knows all the conditions of a rule that enable one player to win, it can analyze these conditions and find a small subset of the possible moves for the other player that invalidate one of the conditions. This small subset contains all the relevant moves to prevent the first player from winning; it is also established that no other move can prevent the first player from winning. Using these rules in a problem solver for games therefore greatly reduces the branching factor and makes it possible to solve problems that would be unsolvable with a brute-force approach.

Theoretical Background
The correctness of deductive learning comes from the equivalence between firing rules with variables and proving theorems in first-order logic. A learned rule is a generalization of a set of facts that have been deduced. The generalization consists in replacing the bound variables in the conditions of the learned rules with true variables. The first-order learned rule is a theorem in the theory formed by the initial set of rules. Therefore, learned rules are consistent with the theory and deduce correct facts provided the initial theory is also correct.

Important Scientific Research and Open Questions
Deductive learning can be used to find rules to direct the search of a problem solver. It is also related to metaknowledge and to machine consciousness, since the system has to observe and analyze its own behavior in order to create new knowledge (Pitrat 2009).

Cross-References
▶ Deductive Reasoning and Learning
▶ Inferential Learning and Reasoning
▶ Metacognition and Learning
▶ Problem Solving

References
Cazenave, T. (1998).
Metaprogramming forced moves. In ECAI 1998, Brighton (pp. 645–649).
DeJong, G., & Mooney, R. J. (1986). Explanation-based learning: An alternative view. Machine Learning, 1(2), 145–176.
Laird, J. E., Rosenbloom, P. S., & Newell, A. (1986). Chunking in Soar: The anatomy of a general learning mechanism. Machine Learning, 1, 11–46.
Minton, S. (1990). Quantitative results concerning the utility of explanation-based learning. Artificial Intelligence, 42(2–3), 363–391.
Mitchell, T. M., Keller, R. M., & Kedar-Cabelli, S. T. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1, 47–80.
Pitrat, J. (2009). Artificial beings: The conscience of a conscious machine. London/Hoboken: ISTE/Wiley.

Deductive Reasoning and Learning

MICHAL AYALON, RUHAMA EVEN
Department of Science Teaching, The Weizmann Institute of Science, Rehovot, Israel

Synonyms
Logical thinking; Logic-based reasoning

Definition
According to commonly accepted notions, deductive reasoning is the process of inferring conclusions from known information (premises) based on formal logic rules, where conclusions are necessarily derived from the given information and there is no need to validate them by experiments. Deductive reasoning can be contrasted with inductive reasoning, in which premises provide probable, not necessary, evidence for conclusions.

Theoretical Background
There are several forms of valid deductive argument, for example, modus ponens (If p then q; p; therefore q) and modus tollens (If p then q; not q; therefore not p). Valid deductive arguments preserve truth, in the sense that if the premises are true, then the conclusion is also true. However, the truth (or falsehood) of a conclusion or premises does not imply that an argument is valid (or invalid). In addition, the premises and the conclusion of a valid argument may all be false. Deductive reasoning is used both for constructing arguments and for evaluating arguments.
Thus, for example, based on the following information: (1) if a car can be started, then the battery is fine, and (2) the car can be started, then using the modus ponens rule it is possible to conclude that (3) the battery is fine. However, if the car cannot be started, and someone claims that the problem lies with the battery, using deductive reasoning to evaluate this claim shows that it is impossible to infer this logically from the given information. Thus, based on deductive reasoning, one cannot know whether the problem lies with the battery or not.

Since the early days of Greek philosophical and scientific work, deductive reasoning has been considered a high (and even the highest) form of human reasoning. Aristotle, who laid down the foundations for this kind of thinking in the fourth century BC, already perceived a person who possesses deductive ability as being able to comprehend the Universe in more profound and comprehensive ways. Throughout scientific development, great scientists such as Descartes and Popper emphasized the importance of this kind of reasoning to science. Although the scientific process is based to a large extent on inductive reasoning (developing hypotheses based on empirical observations to describe "truths" or "facts" about our world), deduction still plays an important role in science in criticizing and refuting theories, using the modus tollens rule. Deductive reasoning is also viewed as important in technological work and in legal systems, as well as in facilitating wise decision making in fields such as politics and economics.

The discipline most identified with deductive reasoning is mathematics. Deductive reasoning is key to work in mathematics, because rigorous logical proof, which is a unique fundamental characteristic of mathematics, is constructed using deductive reasoning.
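The car/battery example above can be sketched as a pair of inference rules. The encoding below (a conditional as a (p, q) tuple and negation as ("not", p)) is an illustrative assumption of ours, not standard notation; note how neither rule licenses any conclusion from "the car cannot be started" alone:

```python
def modus_ponens(conditional, fact):
    """If p then q; p; therefore q. Returns None when the rule does not apply."""
    p, q = conditional
    return q if fact == p else None

def modus_tollens(conditional, negated_fact):
    """If p then q; not q; therefore not p. Returns None when inapplicable."""
    p, q = conditional
    return ("not", p) if negated_fact == ("not", q) else None

# "If a car can be started, then the battery is fine."
car = ("starts", "battery_ok")

print(modus_ponens(car, "starts"))                # 'battery_ok'
print(modus_tollens(car, ("not", "battery_ok")))  # ('not', 'starts')
# From "the car does not start" alone, nothing follows about the battery:
print(modus_ponens(car, ("not", "starts")))       # None
```

The last line mirrors the entry's point: inferring a bad battery from a car that will not start is the fallacy of denying the antecedent, not a valid deduction.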
Although there are some other accepted forms of mathematical validation, deductive proof is considered the preferred tool in the mathematics community for verifying mathematical statements and showing their universality.

Important Scientific Research and Open Questions
One line of research in the area of deductive reasoning focuses on studying its nature and mechanism. Several psychological theories exist nowadays, providing explanations for people's reasoning processes on diverse tasks that involve the use of deductive reasoning, including accounts of the factors that influence reasoning on such tasks (e.g., factors that affect typical errors people make). An example of such a theory is the mental models theory (Johnson-Laird and Byrne 1991), which approaches deductive reasoning as based on manipulations of mental models representing situations.

Another line of research deals with the question of the extent to which deductive reasoning is useful in daily life. Studies on argumentation from the last decades have challenged the usefulness of deductive reasoning in everyday situations (e.g., Toulmin 1969), claiming that rationality, in the sense of taking the "best" choice out of a set of options, stands at the base of reasoning and communication in everyday activities, and that rationality is not bound to formal logic. Moreover, in an attempt to convince others of the rationality of their claims and choices, people often use various kinds of argument, which mostly do not have the logical rigidity of deductions. Thus, other, less strict kinds of inferences, more of the plausible type (e.g., inductive, abductive), are commonly used by people in everyday situations.

Still another debated issue that has long been under discussion relates to the evolution of the ability to reason deductively.
Some researchers, such as Piaget (Inhelder and Piaget 1958), claimed that deductive reasoning develops naturally and formal teaching has no significant influence on it. Other researchers, such as Cheng et al. (1986), maintained that teaching is an essential condition for the development of deductive reasoning. Different opinions also exist regarding the age at which the capability of deductive reasoning can be developed or learnt. Some researchers claim that the ability to think in a logical manner develops, or may be learnt, at adolescence. Other researchers claim that children, even at preschool and elementary school age, are able to use some deductive arguments.

The literature that deals with the teaching of deductive reasoning suggests that while in the past there was a commonly accepted view that general deduction rules can be taught by formal teaching methods, in the last decades this view became controversial. Until the beginning of the twentieth century, classical education, especially Latin and mathematics, was believed to be an effective tool for teaching deductive reasoning. More than 2,000 years ago, Plato claimed that geometry was the best tool for training deductive reasoning, and the teaching of deductive reasoning became an integral part of formal mathematics learning. At the beginning of the twentieth century, the common opinion of stakeholders in mathematics education was that the study of geometry could enhance deductive reasoning, which would be transferred into reasoning capacity in other domains. An experiment conducted in geometry in the 1930s (Fawcett 1938) supported this view, suggesting an improvement of the participating students' deductive reasoning. During the twentieth century, however, studies on the transfer of learning have challenged the assumption that learning subjects such as Latin and mathematics has an impact on acquiring general skills of reasoning.
These studies stated that transfer of learning is not general but specific to the situations in which learning has occurred. Thus, the goal of teaching deductive reasoning that is transferable to other domains has been undermined, and sometimes even completely pushed aside. Nonetheless, in the practical field, the school environment is still often thought to be the place where mental capabilities relating to deductive reasoning should be developed (e.g., Ayalon and Even 2010). Still, researchers who adopt the evolutionary psychology point of view question both the natural development of deductive reasoning and the possibility of teaching it. They contend that a conflict exists between formal deductive reasoning and natural thinking. They suggest that people do not naturally think in logical terms, but instead reason using logics that are different from the formal one.

Cross-References
▶ Argumentation and Learning
▶ Deductive Learning
▶ Development and Learning
▶ Inferential Learning and Reasoning

References
Ayalon, M., & Even, R. (2010). Mathematics educators' views on mathematics learning and the development of deductive reasoning. International Journal of Science and Mathematics Education, 8, 1131–1154.
Cheng, P. W., Holyoak, K. J., Nisbett, R. E., & Oliver, L. (1986). Pragmatic versus syntactic approaches to training deductive reasoning. Cognitive Psychology, 18, 293–328.
Fawcett, H. P. (1938). The nature of proof. The thirteenth yearbook of the National Council of Teachers of Mathematics. New York: Teachers College.
Inhelder, B., & Piaget, J. (1958). The growth of logical thinking from childhood to adolescence. New York: Basic Books.
Johnson-Laird, P. N., & Byrne, R. M. J. (1991). Deduction. Hillsdale: Erlbaum.
Toulmin, S. E. (1969). The uses of argument. Cambridge, UK: Cambridge University Press.
Deductive Schemas ▶ Model-Based Reasoning ▶ Pragmatic Reasoning Schemas Deed ▶ Spatial Cognition in Action (SCA) Deep Approaches to Learning ▶ Deep Approaches to Learning in Higher Education Deep Approaches to Learning in Higher Education MICHAEL JACKSON Department of Government and International Relations, University of Sydney, Sydney, NSW, Australia Synonyms Conceptual change; Conceptual growth; Deep approaches to learning; Open-ended learning; Reproductive learning; SOLO taxonomy; Surface approaches to learning; Transformational learning Definition A deep approach to learning concentrates on the meaning of what is learned. That concentration may involve testing the material against general knowledge, everyday experience, and knowledge from other fields or courses. A student taking a deep approach seeks principles to organize information. In contrast, a student using a surface approach tries to capture material in total, rather than understand it. An example is the student who busily copies down a diagram without listening to the explanation of it. The emphasis is on the sign rather than the significance. Theoretical Background For many years the assumption in higher education was that there is a one-to-one relationship between what an instructor teaches (says) and what a student learns (hears). The only challenge to learning was to get students into the classroom and get them to listen. To meet these needs, effective teaching in higher education was understood to consist of a catalog of tricks and techniques to stimulate attendance (pop quizzes, theatrical presentations, roll calling) and to command attention (calling on silent students, assigning grades to class participation, and so on). This is a model of education often parodied as the transference of the professor’s lecture notes to the student’s notepad. 
It was assumed that while teachers were trained to teach in primary and secondary schools, university professors had no need for such training. In the 1980s this complacency was challenged by a number of researchers who empirically examined students and teachers in higher education. They found that the transference of knowledge from teacher to student was uncertain. Two students matched for ability and attending the same class might have quite different reactions and experiences. Why is that? Because these students might process the experience and the material in different ways. One student might scrupulously try to capture every word, but not grasp the themes in the material. This student would be able to recite vast slabs of material but, when asked to sum up its meaning, value, or implications, be at a loss (Schneps 1988). Another student might appear detached in the classroom, sitting back, yet think about the material in various contexts, grasp some of its topography, and evaluate it; though less able to regurgitate bodies of material, this student could still assess and apply it. These differences were found in studies in Sweden, Scotland, England, Hong Kong, and Australia. The research method at the start was simply to interview students after class sessions and ask them to describe in some detail what they did in the class. That is, what they did or thought as the class proceeded. The focus was not on what the teacher did, though that was part of the story, but what the student did in response to or in parallel with what the teacher did. When the results of these interviews were compared with similar interviews with teachers, many discrepancies appeared. Material that teachers presented to stimulate thought was taken by some students as the final word to be memorized. There was a lack of communication and a mismatch of intentions. 
Important Scientific Research and Open Questions This body of research emerged among groups of students who met the same entrance standard at the same institution, taking the same program of study. The point is that an approach to learning is not a psychological state. None of us is born to take deep approaches, nor to take surface approaches. These two approaches are learned through schooling, and for much schooling a surface approach may be adequate. Nor does taking a deep approach guarantee success. The efforts of some to take deep approaches to real number theory may not suffice. For those students, a surface approach might be the best tactic to make the best of the experience. Equally, rote learning is a tool for some purposes, like learning the Latin names of bones or the conjugations of irregular verbs. But a student will only come to a deeper understanding of material by taking deep approaches. If the purpose of higher education is to impart to students not just bodies of facts but principles that organize and assess knowledge, then deep approaches have a place. Deep approaches to learning can be fostered in a variety of ways. First and foremost is to articulate repeatedly the nature and value of such approaches, and to support and encourage them in visible and practical ways that students value. In this way institutions and instructors can authorize students to test the concepts, theories, principles, findings, evidence, and arguments they encounter in university study against their general knowledge, common sense, prior learning, Internet sources, and the like. Rather than forbidding students to access Wikipedia, fostering a deep approach would accept that it will happen, and counsel students to evaluate and test what they find there. More specifically, course objectives, assignments, teaching methods, and workload can open the door to deep approaches for students. 
Course objectives refer to the purpose of studying the material. What has the learner gained by studying this subject? In a “History of Political Theory” course, the students will learn about Plato, let us say, but the objective in so doing is to learn to evaluate arguments, to grapple with wisdom from a different world, or to test ideas against current knowledge. The objectives are what is gained when the course is studied, not when it is graded. If teachers want students to be motivated by more than grades, then what that “more” is has to be said explicitly. That grades motivate students is a truism, but teachers can focus that motivation constructively. One empirical study found that “the majority of students reported greater use of transformational [deep] activities for an open-ended assessment [assignment] than for the closed examinations; and conversely less use of reproductive [surface] activities with the open-ended assignments than with the short answer and closed examinations” (Bain and Thomas 1984). An essay or a laboratory report gives students more scope to interpret material than true-false tests do. Teaching methods underwrite objectives and assignments. Entwistle and Tait (1990) interviewed undergraduate students from more than 60 departments, and found that departments whose assignments placed a premium on factual information and gave students less freedom (and its twin, responsibility) led students to take a surface approach to those assignments. In addition, feedback on assignments is another crucial element associated with the approach to learning taken by students in these departments. If the feedback focused on compliance and facts, the surface approach remained, as it did if there was no feedback apart from the grade. Workload is another crucial determinant of student intention. If the work associated with a course seems excessive, then many students will adopt a surface approach to survive. 
We all do this when faced with more than we can handle. A professor who presents students with an 80-page syllabus guarantees that many will immediately take a surface approach to cope. The irony is that some of those professors who are most serious about teaching adopt tactics that discourage students from taking deep approaches. Assigning weekly graded assignments all but ensures that few students will do their best work on any one of them. One implication of the discussion to this point is that it might be more effective to manage students’ perception of the learning environment than to concentrate on special study skills sessions, essay writing workshops, yet more PowerPoint slides, more self-paced web material, and the like. There are few technical solutions to human problems. But students’ “perceptions of teaching and assessment methods [assignments] in academic departments are significantly associated with . . . students’ approaches to studying” (Entwistle and Ramsden 1983). To encourage and support students to take deep approaches, clarify objectives, set assignments that permit deep approaches, use teaching methods that place responsibility on students, and manage workload to challenge but not discourage students. These are among the practical implications of this body of knowledge. Cross-References ▶ Abilities to Learn ▶ Academic Motivation ▶ Design of Learning Environments ▶ Intentional Learning ▶ Learning Objectives ▶ Motivation and Learning: Modern Theories ▶ Perceptions of the Learning Context and Learning Outcomes ▶ Problem-Based Learning ▶ Rote Memorization ▶ Styles of Learning and Thinking Deep Learning Approaches They encompass approaches to learning that involve considerable rumination on the part of the student. Deep learning will often include the application of critical thinking skills to devise a solution to a posed problem. 
Deep approaches are contrasted with techniques that encourage more superficial and often less durable learning, such as the rote memorization of facts. References Bain, J., & Thomas, P. (1984). Contextual dependence of learning approaches. Human Learning, 3(4), 230–242. Biggs, J. B. (2003). Teaching for quality learning at university: What the student does (2nd ed.). Philadelphia: Society for Research into Higher Education. Entwistle, N., & Tait, H. (1990). Approaches to learning, evaluations of teaching, and preferences for contrasting academic environments. Higher Education, 19(3), 169–199. Harvard-Smithsonian Center for Astrophysics. (M. Schneps, producer). (1994). A private universe: Misconceptions that block learning [DVD]. Further information at http://www.learner.org/resources/series28.html Marton, F. (Ed.). (1984). The experience of learning. Edinburgh: Scottish Academic Press. Prosser, M., & Trigwell, K. (1999). Understanding learning and teaching: The experience in higher education. Buckingham: Society for Research into Higher Education. Ramsden, P. (1992). Learning to teach in higher education. London: Routledge. Deep Hierarchical Networks ▶ Hierarchical Network Models for Memory and Learning Deep Learning ▶ Learning Hierarchies of Sparse Features Default Reasoning GERHARD BREWKA Computer Science Department, Institut für Informatik, University of Leipzig, Leipzig, Germany Synonyms Reasoning with exceptions Definition Default reasoning is a form of nonmonotonic reasoning where plausible conclusions are inferred based on general rules which may have exceptions (defaults). It is nonmonotonic in the sense that additional information may force us to withdraw earlier conclusions, namely whenever the additional information shows that the case at hand is exceptional. Theoretical Background In classical logic, adding information in the form of additional premises never invalidates any conclusions. Commonsense reasoning is different. 
We often draw plausible conclusions based on general rules expressing what normally is the case, together with the assumption that the world about which we reason is normal and as expected. This is the best we can do in situations in which we have only incomplete information. However, it can happen that our normality assumptions turn out to be wrong. New information can show that the situation actually is abnormal in some respect. In this case, we may have to give up some of our former conclusions. For this reason, the need for logical models of default reasoning was recognized early in Artificial Intelligence. Interest in default reasoning was also fueled by the frame problem: how to represent adequately what does not change when an action occurs. One would like to have action formalisms where it is sufficient to describe the changes caused by an action. Ideally, the persistence of all other properties would be left implicit. This can be achieved by using a default rule like: what holds in a situation normally holds in the situation after performing an action. The frame problem has provided a major impetus to research in default reasoning. Important Scientific Research and Open Questions One of the most important formalizations of default reasoning is Ray Reiter’s default logic (Reiter 1980). In default logic, one has to specify a set W of classical (propositional or first-order) formulas which represent what is known to be the case. In addition, default rules are represented in the form P : Q1, ..., Qn / R. Here P is a premise which must hold for the rule to be applicable, and R is the conclusion. In addition, Q1, ..., Qn are formulas which must be consistent with what is derived (or, equivalently, their negations must not be derived). 
For instance, the default “birds normally fly” can be represented as Bird(x) : Flies(x) / Flies(x). Thus, in contrast to standard inference rules, Reiter’s default rules allow us to refer in their preconditions not only to what is derived, but also to what is not derived. This has drastic consequences. In particular, there is no longer a single set of theorems as in classical logic. Rather, conflicting default rules may give rise to multiple acceptable sets of formulas, called extensions by Reiter. To qualify as an extension, a set of formulas E must (1) be closed under classical inference, (2) have all applicable defaults applied, and (3) contain only formulas that have a noncircular derivation from W together with the applicable default rules. Based on the extensions one can then define a skeptical inference relation (F is inferred if it is contained in all extensions) or a credulous inference relation (F is inferred if it is contained in some extension). Important alternative formalizations of default reasoning are circumscription (McCarthy 1980) and autoepistemic logic (Moore 1985). The former is based on the observation that some logical models represent more normal situations than others. McCarthy defines a logical consequence relation which takes into account the most normal models only. Defaults are represented as implications with an explicit abnormality predicate, for example Bird(x) ∧ ¬Ab(x) → Flies(x). The most normal models then are those which minimize the abnormal objects. Autoepistemic logic represents consistency of a formula through a modal operator. As in default logic, conflicting default rules may give rise to different acceptable sets of formulas. For a detailed overview of these and other approaches, see Brewka et al. (2007). One of the key problems with formalizations of default reasoning is their computational complexity. Consistency checking is notoriously expensive. 
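The notion of an extension can be made concrete with a small generate-and-test sketch. This is a toy illustration rather than full default logic: it restricts formulas to propositional literals, background knowledge to simple strict rules, and prerequisites to single literals, and all names in the code (closure, extensions, the Tweety encoding) are the author's own illustrative choices, not part of Reiter's formalism.

```python
from itertools import chain, combinations

def closure(facts, rules):
    """Deductive closure of a set of literals under simple strict rules."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if all(b in derived for b in body) and head not in derived:
                derived.add(head)
                changed = True
    return derived

def neg(lit):
    """Complement of a literal; negation is marked by a leading '-'."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def extensions(W, rules, defaults):
    """Enumerate the extensions of a toy default theory by generate-and-test.

    defaults: list of (prerequisite, justifications, conclusion) triples,
    i.e., prerequisite : justifications / conclusion in Reiter's notation."""
    exts = []
    idx = range(len(defaults))
    for subset in chain.from_iterable(
            combinations(idx, r) for r in range(len(defaults) + 1)):
        E = closure(W | {defaults[i][2] for i in subset}, rules)
        # the chosen defaults must be exactly those applicable w.r.t. E:
        # prerequisite derived, no justification contradicted
        applicable = {i for i, (pre, justs, _) in enumerate(defaults)
                      if pre in E and all(neg(q) not in E for q in justs)}
        if applicable != set(subset):
            continue
        # groundedness: the chosen defaults must fire in some order from W
        stage, fired = closure(W, rules), set()
        while True:
            newly = [i for i in subset
                     if i not in fired and defaults[i][0] in stage]
            if not newly:
                break
            fired.update(newly)
            stage = closure(stage | {defaults[i][2] for i in newly}, rules)
        if fired == set(subset) and E not in exts:
            exts.append(E)
    return exts

# Tweety: birds normally fly, but penguins strictly do not.
W = {"bird", "penguin"}
rules = [(["penguin"], "-flies")]          # penguin -> not flies (strict)
defaults = [("bird", ["flies"], "flies")]  # bird : flies / flies
print(extensions(W, rules, defaults))      # one extension, without "flies"
```

Run on the Tweety theory, the default is blocked because -flies is strictly derivable, so the single extension contains -flies. Encoding the Nixon diamond instead (a Quaker default for pacifist and a Republican default for -pacifist, with no strict rules) yields two extensions, which is where the skeptical and credulous inference relations described above come apart.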
It cannot be done locally and involves the whole knowledge base. The first-order variant of default logic is not even semi-decidable. For this reason, it is important to identify special cases which are expressive enough to be useful in practice, yet restricted enough to allow for efficient computation. With respect to implementations of default reasoning, there now exist highly efficient systems for logic programming under answer set semantics. Logic programming can be viewed as a restricted version of default logic where the formulas in the rules are propositional atoms. Answer set solvers for logic programming, like the clasp system of Potsdam University (see http://www.cs.uni-potsdam.de/clasp/), can handle restricted forms of default reasoning in a highly efficient manner. Cross-References ▶ Abductive Reasoning ▶ Analogical Reasoning ▶ Cognitive Robotics ▶ Deductive Learning ▶ Deductive Reasoning and Learning ▶ Knowledge Representation ▶ Model-Based Reasoning ▶ Schema-Based Reasoning References Brewka, G., Niemelä, I., & Truszczynski, M. (2007). Nonmonotonic reasoning. In V. Lifschitz, B. Porter, & F. van Harmelen (Eds.), Handbook of knowledge representation (pp. 239–284). Burlington: Elsevier. McCarthy, J. (1980). Circumscription – a form of non-monotonic reasoning. Artificial Intelligence, 13(1–2), 27–39. Moore, R. C. (1985). Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25(1), 75–94. Reiter, R. (1980). A logic for default reasoning. Artificial Intelligence, 13, 81–132. Deliberate Practice Framework proposing that the efficient acquisition of domain-related knowledge and skills, and thus exceptional performance, is primarily due to goal-directed, effortful, and inherently not enjoyable practice. 
Cross-References ▶ Volition for Learning Defaults Defaults are specifications of schemas and parts of a schema that are assumed unless there is specific information to the contrary. Defaults define the prototypical instantiation of a schema. Definition Schema In computer programming, a “definition schema” is the organization or structure for a database. More specifically, the activity of data modeling leads to a schema. The term is used in discussing both relational databases and object-oriented databases. Two common types of database schemas are the star schema and the snowflake schema. Deformation ▶ Memory Dynamics Degradation ▶ Memory Dynamics Deixis Something where the meaning has to be understood from context. Example: A pointing gesture changes meaning based on what is being pointed at. Deliberate Practice and Its Role in Expertise Development FERNAND GOBET Department of Psychology, Brunel University, Uxbridge, Middlesex, UK Synonyms Practice; Training Definition According to the deliberate-practice framework, the way to reach high levels of expertise is to carry out practice that is consciously intended to improve one’s skills. ▶ Deliberate practice involves goal-directed activities, which tend to be repetitive and to enable rapid feedback. Preferably performed individually, these activities tend to be effortful and not enjoyable. They can be carried out for only a few hours a day (but not so much that they become inefficient or even harmful). This framework limits the role of inherited factors to motivation, general activity levels, and height in sports. No role is given to talent with respect to cognitive abilities. The deliberate-practice framework has been influential in the field of expertise research, and a large number of studies have been conducted to understand the role of practice in areas such as sports, games, the arts, and the professions. There have also been controversies surrounding this framework. 
Theoretical Background The importance of practice has been recognized for decades, first by proponents of behaviorism and then by psychologists more interested in cognitive mechanisms (e.g., De Groot 1946/1978). Practice was given particular emphasis in Simon and Chase’s (1973) classic study of expertise in chess, which concluded that grandmaster level can be attained only after about 10 years of dedicated practice and study. This corresponds to between 10,000 and 100,000 h of hard work. The deliberate-practice framework (Ericsson et al. 1993) has taken this position to its extreme by proposing that innate individual differences do not limit the top levels of performance, and that these levels can always be further increased by dedicated practice. The deliberate-practice framework rejects innate talent as an explanation for cognitive abilities, arguing that the evidence for it is flimsy at best. Rather, it proposes that expert performance is a monotonic function of the amount of practice. Thus, this framework takes the clear and extreme position that deliberate practice is not only a necessary, but also a sufficient, condition for expertise. Deliberate practice consists of training activities. The goal is to improve performance by optimizing feedback and thus the correction of errors. These activities are typically effortful and not enjoyable. Thus, it is not sufficient to play the piano just for enjoyment, even if one devotes a considerable number of hours to doing so. It is crucial to use training techniques whose deliberate goal is to improve one’s performance. Moreover, these training activities can be carried out for only a few hours a day; excessive practice increases the risk of injuries and burnout (especially in sports). Another important prerequisite is the presence of a favorable environment, and in particular strong family support. 
The proponents of deliberate practice do acknowledge the involvement of inherited factors, but these are limited to motivation, general activity levels, and, in some sports, height. Importantly, the involvement of genetic factors is explicitly excluded as an explanation for individual differences in high levels of cognitive abilities. Finally, emphasis is given to individual practice rather than group practice, as the former increases the efficiency of the activities characteristic of deliberate practice. Some of the clearest evidence for the role of deliberate practice (and, concomitantly, the lack of involvement of talent) was provided by longitudinal experiments in which college students with average memory spans were trained in the digit-span task. When they devoted sufficient effort and practice, these students could perform better than individuals who were thought in the literature to enjoy special, inherited talent. The role of deliberate practice has also received support from many domains, including games, music, science, and medicine (Ericsson et al. 2006). A large amount of data has also been collected in sports such as karate, soccer, hockey, skating, and wrestling. In these studies, participants are typically asked to estimate retrospectively how many hours they have spent in diverse types of activities, and the results are correlated with their skill level. The results typically show that higher-skilled individuals engage more in deliberate practice. However, some studies, while partly supporting the role of deliberate practice, also suggest the importance of other factors. A good example is the study carried out by Gobet and Campitelli (2007) with chess players. (The game of chess has the great advantage that there is an official, reliable, and quantitative measure of skill: the Elo rating.) 
They asked players to estimate the number of hours devoted in their career to a number of activities, including studying chess alone and practicing with others (the latter comprised playing competitive games). As predicted by the deliberate-practice framework, a strong correlation was found between chess skill and the number of hours of individual practice. However, contrary to prediction, an even stronger correlation was found between chess skill and the number of hours of group practice. Another result inconsistent with the deliberate-practice framework was the high level of variability. Some players became masters with relatively few hours of deliberate practice (as low as about 3,000 h), while others needed considerably more time (up to about 24,000 h), a ratio of 1:8. Finally, some players devoted more than 25,000 h to chess study and practice, but failed to become masters. This study also uncovered two results that suggest that factors other than deliberate practice are at play. First, there was a correlation between final skill level and starting age: players who started younger were more likely to become masters. This correlation held even after the contribution of deliberate practice was controlled for. Second, the proportion of mixed-handedness was higher among chess players than in the general population. These results suggest that practice is a necessary, but not sufficient, condition for reaching high levels of expertise. Important Scientific Research and Open Questions The deliberate-practice framework has generated much research but also much controversy (e.g., Sternberg 1996). For example, there is considerable evidence from the fields of personality and intelligence that large individual differences exist, and it is plausible that at least some of these differences might affect the acquisition of high levels of expertise. 
Similarly, individual differences exist with respect to learning, attention, and working memory. In addition, the research on deliberate practice is mostly correlational and rarely uses control groups (i.e., individuals who tried but failed to become experts), and it is thus difficult to draw conclusions about the causal roles of talent and (deliberate) practice. For example, it could be the case that, following self-selection, more gifted individuals remain in the domain and thus log large numbers of hours of practice. Challenges for the deliberate-practice framework include the development of better methods for estimating the respective contributions of practice and talent, and a more differentiated theoretical account of the large interindividual variability and of other factors, such as starting age, that affect the development of expertise. Cross-References ▶ Chunking Mechanisms and Learning ▶ Development of Expertise ▶ Individual Differences in Learning ▶ Learning in Practice ▶ Learning in the CHREST Cognitive Architecture References De Groot, A. D. (1978). Thought and choice in chess (A. De Groot, Trans.). The Hague: Mouton. (Original work Het denken van den schaker published 1946). Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–406. Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (Eds.). (2006). The Cambridge handbook of expertise and expert performance. New York: Cambridge University Press. Gobet, F., & Campitelli, G. (2007). The role of domain-specific practice, handedness and starting age in chess. Developmental Psychology, 43, 159–172. Simon, H. A., & Chase, W. G. (1973). Skill in chess. American Scientist, 61, 393–403. Sternberg, R. J. (1996). Costs of expertise. In K. A. Ericsson (Ed.), The road to excellence (pp. 347–354). Mahwah: Erlbaum. 
Delinquency and Learning Disabilities ANNEMAREE CARROLL1, STEPHEN HOUGHTON2 1 School of Education, The University of Queensland, Brisbane, QLD, Australia 2 Graduate School of Education, The University of Western Australia, Nedlands, WA, Australia Synonyms Academic difficulties; Crime; Illegal activity; Offending Definition Juvenile delinquency is not a special education category; rather, it is a legal term. Delinquency is defined as participation in illegal behavior by a minor who falls under a statutory age limit. According to Lorion et al. (1987), delinquent behavior refers to a continuum of behaviors that deviate from mainstream social standards in ways that have resulted, or could result, in serious disciplinary or adjudicatory consequences. These behaviors can be socially unacceptable to school authorities (e.g., classroom disruption), illegal and problematic by virtue of the age of the offender (e.g., status offenses such as running away, substance use), or illegal, criminal acts independent of the offender’s age (e.g., assault, arson, robbery, rape). Broadly defined, ▶ learning disabilities (LDs) refer to problems in many academic areas such as reading, comprehension, and mathematical abilities. Recent movement away from the discrepancy model posited by DSM-IV-TR has resulted in LD being defined as “lower than expected scores on tests of achievement (typically significantly below average) given an individual’s age and educational opportunities” (see Rucklidge et al. 2009). Theoretical Background Throughout history, LD has been perceived as a contributing factor to criminality. As early as 1911, Terman proposed that “No investigator denies the fearful role of mental deficiency in the production of vice, crime, and delinquency. . . 
Not all criminals are feeble-minded but all feeble-minded are at least potential criminals.” This perception persisted into the twentieth century even though there was an absence of (or at best limited) quantitative data linking criminality and LD. At that time, evidence was based predominantly on observations and clinical reports. The development and introduction of quantitative methods has since demonstrated a strong relationship between delinquency and LD (Grigorenko 2006). For example, prevalence rates of LD among delinquent youths have been reported to range from 26% to 75% (Rucklidge et al. 2009). Specifically, the types of LD identified have included reading, writing, spelling, and mathematics difficulties, dyslexia, and comprehension problems. Other studies have identified hearing impairment/deafness and speech and language impairment. The effects of LD are considerable following the young person’s release from incarceration. Individuals with LD are only half as likely to engage in schooling or work during the initial 12 months following release and are approximately 2.3 times more likely to be reincarcerated compared to delinquents without LD (Bullis et al. 2004). So what reasons are put forward for this connection? Research has posited three main hypotheses to explain the LD-delinquency relationship: the susceptibility hypothesis, the school failure hypothesis, and the differential treatment hypothesis. All three point to LD as the single cause of delinquency. Specifically, the susceptibility hypothesis suggested that neurological and intellectual difficulties, including language deficits, directly contribute to the likelihood that an individual who has LD may become delinquent. 
In the school failure hypothesis, the individual has a strong desire to achieve, but his/her delinquency-prone temperament interacts with feelings of frustration, rejection, and self-criticism, leading to school failure and dropping out of school, and thereby elevating the risk of delinquency. The differential treatment hypothesis proposed that individuals with LD are detected more easily by the police and that their social-skills deficiencies contribute to their differential treatment. Some researchers refer to this as the differential arrest hypothesis and add a further two hypotheses (i.e., adjudication and disposition). In the case of the differential adjudication hypothesis, poor self-control, irritability, abrasiveness, and an inability to respond effectively to questions increase the risk of being adjudicated. For the differential disposition hypothesis, the totality of the reasons in all of the hypotheses creates a greater probability of delinquents with LD being committed to a detention facility. Contemporary research has confirmed that one characteristic endemic to adolescents involved in delinquency is a low level of academic achievement. Moreover, the type of delinquent activity committed can be differentiated according to level of academic achievement (e.g., those who commit more serious aggressive crimes have lower academic achievement scores than those committing property crimes; Grigorenko 2006). Among those who underachieve academically, externalizing problems (e.g., aggression, antisocial behavior, defiance, impulsivity, hyperactivity, and attention modulation) are evident (Willcutt and Pennington 2000). Lower levels of IQ, particularly verbal intelligence, along with deficits in executive functioning have also been demonstrated among those with LD who commit delinquent acts. 
The interaction between a myriad of factors, including personal factors (e.g., neurological dysfunction, social relationship difficulties, lack of empathy), family risk factors (e.g., born to a teenage mother, parent or guardian criminality/alcoholism, parent psychiatric disorder, father absence, harsh or inconsistent discipline style, child abuse), school risk factors (e.g., poor student–teacher relationships, lack of engagement, school failure, negative school climate), peer risk factors (e.g., association with deviant peers), and community risk factors (e.g., low socioeconomic status, single-parent family), multiplies the probability that those with LD will become delinquent (Carroll et al. 2009). Although failure can occur in many domains, school failure is often the first to be experienced.

Important Scientific Research and Open Questions
Historically, the rates of LD among juvenile offenders have been high, although recent studies suggest prevalence rates may in fact be two to three times lower. There are a number of explanations for this. First, the definitions of LD have varied across studies, ranging from broad (e.g., problems in many academic areas) to specific (e.g., reading comprehension, writing ability). Similarly, contentious issues have surrounded the definition of juvenile delinquency (JD). For example, questions regarding whether JD should be defined as a specific behavior (detected/undetected) or as a court-defined legal status have given rise to concerns regarding the concept of JD and who actually counts as delinquent. Furthermore, these definitions have been applied to young people in different forensic and non-forensic settings, including mainstream schools, special schools, and detention facilities (detained individuals versus convicted individuals), suggesting inconsistent use of the term across different samples.
Those in school settings attract the label delinquent as a result of their academic underperformance, while those in detention facilities have a delinquent status as a result of their suspected or proven more serious offending. Second, different methods and instruments have been applied for diagnosing LD. For example, DSM-IV-TR defined LD according to a discrepancy between IQ and scores on standardized tests of achievement, whereas other sources use the low achievement model, whereby LD is identified by lower-than-expected scores on tests of achievement (typically significantly below average) given an individual’s age and educational opportunities (Rucklidge et al. 2009). In cases where categorical thresholds have been imposed on a continuous measure (e.g., estimating rates of reading difficulties), the prevalence rates will vary according to where the threshold was established. Third, and inextricably linked to the first two points, is the failure by some early researchers to control for the many confounding effects (e.g., socioeconomic status, ethnicity, and family criminal history). When detained and non-detained individuals are matched on confounding variables that artificially inflate rates of LD (e.g., age, sex, SES, and IQ), differences on multiple measures of achievement tend to equalize or, in some cases, disappear. Fourth, the overlap with emotional and disruptive behavior disorders (E/DBDs) must be considered because research has shown that E/DBDs tend to be the most prevalent disability among young offenders and that these have been associated with more serious outcomes in young adulthood (Carroll et al. 2009). School graduation incentives are the most cost-effective strategies for preventing delinquent behavior (Grigorenko 2006). Therefore, of particular importance is understanding what preventive and intervention strategies are effective for juvenile delinquents with LD.
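The threshold issue noted above can be illustrated with a small simulation (a purely hypothetical sketch with simulated scores, not data from any of the cited studies): the same continuous distribution of achievement scores yields markedly different "prevalence rates" depending on where the categorical cutoff is placed.

```python
import random

random.seed(1)

# Hypothetical standardized reading scores (mean 100, SD 15)
# for a simulated sample of 1,000 young people.
scores = [random.gauss(100, 15) for _ in range(1000)]

def prevalence(scores, cutoff):
    """Share of the sample classified as having a reading difficulty
    once a categorical threshold is imposed on the continuous measure."""
    return sum(s < cutoff for s in scores) / len(scores)

# The same sample produces very different "rates of LD" depending on
# whether the cutoff sits 1 SD or 1.5 SD below the mean.
for cutoff in (85, 80, 77.5):
    print(cutoff, round(prevalence(scores, cutoff), 3))
```

Nothing about the sample changes between the three lines of output; only the threshold moves, which is exactly why studies using different cutoffs report such divergent prevalence figures.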
Addressing the frustration and failure these young people encounter from the outset of their schooling may reduce their involvement in early disruptive behavior that escalates into more serious delinquent activities. This may involve everything from the way in which teachers and principals exercise authority through to educational interventions that take into account the wide range of LDs presenting among juvenile delinquents. While there is evidence that basic education during detention can reduce reoffending, educators must realize that greater benefits can be achieved through more specialized programs (e.g., self-regulation, problem solving, interpersonal skills) that address the difficulties associated with the everyday performance of young people with LD. In summary, the evidence points to: LD causing delinquency; delinquency causing LD; delinquency and LD being caused by a combination of factors that may include a neurological condition and/or social and/or environmental factors; LD and delinquency developing independently, with one being perceived as more severe by others; and LD and delinquency being independent, with chance associations in study samples merely suggesting a relationship. There is no doubt about either the high rates of LD among young people involved in the juvenile justice system or the established link between delinquency, schooling experience, and academic achievement. While recent studies have suggested that rates of LD may be lower than first observed, the huge financial implications, along with the associated family and personal human costs, are compelling reasons for the continuing study of the contribution of LD to delinquency.

Cross-References
▶ At-Risk Learners
▶ Development and Learning
▶ Individual Differences
▶ Styles of Engagement in Learning
▶ Vulnerability for Learning Disorders

References
Bullis, M., Yovanoff, P., & Havel, E. (2004).
The importance of getting started right: Further examination of the facility-to-community transition of formerly incarcerated youth. Journal of Special Education, 38, 80–94.
Carroll, A., Houghton, S., Durkin, K., & Hattie, J. (2009). Adolescent reputations and risk: Developmental trajectories to delinquency. New York: Springer.
Grigorenko, E. L. (2006). Learning disabilities in juvenile offenders. Child and Adolescent Psychiatric Clinics of North America, 15, 353–371.
Lorion, R. P., Tolan, P. H., & Wahler, R. G. (1987). Prevention. In H. C. Quay (Ed.), Handbook of juvenile delinquency (pp. 383–416). Oxford, England: Wiley.
Rucklidge, J. J., McLean, A. P., & Bateup, P. (2009). Criminal offending and learning disabilities in New Zealand youth. Does reading comprehension predict recidivism? Crime and Delinquency. doi:10.1177/0011128708330852.
Willcutt, E. G., & Pennington, B. F. (2000). Comorbidity of reading disability and attention deficit hyperactivity disorder: Differences by gender and subtype. Journal of Learning Disabilities, 33, 179–191.

Delivery Systems
▶ Media and Learning

Democratization of Education
▶ Compulsory Education and Learning

Deontic Reasoning
▶ Pragmatic Reasoning Schemas

Dependence
▶ Contingency in Learning

Depression: Clinical Depression
▶ Goals and Goalsetting: Prevention and Treatment of Depression

Depth of Processing
The level to which information is processed mentally, with processing depth being determined by associations with existing memory or cognitive effort.

Derivational Analogy
▶ Analytic Learning

Descriptive and Interpretative Learning Analysis
▶ Qualitative Learning Research

Descriptive Theory of Knowledge Development
▶ Naturalistic Epistemology

Desensitization
▶ Habituation and Sensitization

Desiderius Erasmus (1466/69–1536)
DAMIAN GRACE
Department of Government and International Relations, The University of Sydney, Sydney, Australia
Life Dates
Erasmus was born in Rotterdam, most probably in 1467 (1466 and 1469 are also possible), to unmarried parents: Roger, a priest, and Margaret, daughter of a physician. Educated at a school that was progressive for the time, Erasmus became an Augustinian canon. He was later dispensed from his duties as a priest to devote himself to philology, scriptural exegesis, and reform in Church and society. His enthusiasm for reform did not extend to Luther’s Reformation. While there were many points of agreement between him and the Reformers, the harshness of Luther’s approach was foreign to Erasmus’ moderate style of thought and conduct. He died of dysentery in Basel in 1536.

Contribution(s) to the Field of Learning
Erasmus was the most considerable scholar of northern Renaissance Europe, writing many influential books and leaving an unrivaled collection of correspondence with early sixteenth-century humanists such as Thomas More and Guillaume Budé. Humanists were devotees of classical literature and the moral values embodied in it. In the fifteenth century, Italian humanists effectively broke from the worldview and methods of scholasticism, which revered Aristotle as “the Philosopher” and placed a premium on logical analysis and argument in education. The humanists, by contrast, advocated the studia humanitatis, a curriculum based on classical literature and the study of grammar, rhetoric, history, and moral philosophy. Humanism in the age of Erasmus flowered just as the printing press was beginning to make its impact on education and learning. The standard method of instruction when Erasmus was at school was rote memorization of texts and relentless drills in styles of argument. The printing press contributed to the reform of instruction by making books more accessible, and even Erasmus’ early unpolished productions were greeted enthusiastically.
All of Erasmus’ works – even satires such as his most widely read work, The Praise of Folly – were written to enlighten, to educate in some way, but he wrote a string of specifically pedagogical texts on liberal education, composition, and teaching, starting with De ratione studii (On the Method of Study), addressed to the education of young adolescents. This was eventually published in 1512 with De copia rerum ac verborum (Foundations of the Abundant Style), written at the request of John Colet for his St. Paul’s School. The two works belong together, for the latter builds on the foundations of the former. Assuming that the basics of language have already been acquired, the Copia addresses the fluency, propriety, and precision of the writer or speaker and the finding of appropriate language to persuade an audience. It goes on to offer an armory of resources, such as examples, comparisons, and contrasts for the amplification of arguments. The Copia became a cornerstone of sixteenth-century education (Callahan 1978, p. 100). De ratione studii was also widely influential, especially in the pedagogical works of others which now came from the presses. One such is a remarkable work – “Lily’s Grammar” – written by John Colet’s high master at St. Paul’s School, William Lily. This grammar became the standard text for Latin instruction in sixteenth-century England and remained in use into the eighteenth century. Apart from his own works, such collaborations with other humanists ensured that Erasmus’ views became part of the fabric of education in the sixteenth century. His Colloquies, a collection of dialogs on a range of contemporary issues intended to introduce students to colloquial Latin, continued to be used for this purpose even when it had been transformed in later editions into a mature work of literature. More conventional was The Education of a Christian Prince, a work in the tradition of mirror-of-princes writing.
The aim of this work was to restrain the customary excesses of princes – taxes and war were prominent among them – by education. This optimism about the moral power of education sprang from a real concern about the alternative: if a prince is not educated well, then the evil tendencies of human nature will warp the prince’s character, with disastrous consequences for his kingdom. Erasmus was not a radical educational reformer, and he has even been called conservative because of his defense of a traditional curriculum in the classics (Thompson 1978, p. xx; Sowards 1988, p. 124). This is true up to a point, but it is important to appreciate the central role of Erasmus in the defense of the classical curriculum against scholastic teachers indifferent alike to good Latin and good letters – especially the pagan authors. Greek works, in particular, excited opposition from those (mainly in the religious orders) who wished to keep Christian studies free of their taint. The humanists tended to overstate this hostility, but there is no doubt that Erasmus brought a deep detestation for scholastic methods and Aristotelian philosophy to his defense of good letters. This did not make him less of a Christian. His humanism was Christianized, and authors like Plato were viewed through Christian assumptions. Moreover, Erasmus did not believe that pagan texts inculcated virtue directly, but that they set the Christian student “going in that direction” (Thompson 1978, p. xxx). If Erasmus was more a defender than a reformer of the curriculum, he was conspicuous as an advocate of teaching reform, especially in his younger years, when he was optimistic that education could drive reform in religion and society. At a time when teaching tended to be regarded as an occupation fit only for those with no other prospects (Tracy 1972, p. 64), Erasmus promoted the idea of the good teacher as the maker of good citizens.
In 1516, he encouraged a despondent schoolmaster, Johannes Sapidus, by characterizing teaching as a singular service to society. There is no greater contribution that a learned and honorable man can make to his country than shaping “its unformed young people,” he declared (Sowards 1988, p. 124).

Important Scientific Research and Open Questions
Innovation for Erasmus lay in teaching the classics with an understanding that they cultivated character, morality, and civility. It was important for teachers to strengthen, and not harm with punishments, those naturally possessed of a humane or gentle disposition. As for pupils of a more passionate nature, he doubted that they could be taught much at all, so punishments would probably fail with them too (Tracy 1972, p. 11). These precepts were not entirely novel in Erasmus’ day, for they mirror those of Quintilian in the Institutes (Book 1, Chap. 3, 14), but he had to defend them against those with a scholastic view of education and a penchant for invective and the rod. It might also seem unremarkable that, in De pueris instituendis (On the Education of Children), he advocated education from a very early age and the use of pictures to instruct young children in matters – such as exotic animals – that they have not experienced. Erasmus recommends using various playful devices, such as toy letters of the alphabet, to encourage learning, and the use of similes and analogies so that young children may work from their knowledge of familiar objects to an understanding of unfamiliar ones. Thus should they acquire a knowledge of language and a knowledge of things simultaneously, but also some intimation that words are not univocally related to things. Older students should, according to De ratione studii, be instructed in the practice of discourse, but this practice is to be rooted in usage, not in rules of grammar learned by heart.
Even though the stress for older students has moved to a knowledge of language as a path to the knowledge of things, Erasmus insists that that knowledge comes from best usage, not from definitions (Margolin 1978, p. 226 f.). The best usage was to be found in Cicero, but he was not to be followed slavishly, as Erasmus makes clear. Erasmus, himself an admirer of Cicero, was also the author of the most famous satire of Cicero-worship, the Ciceronianus of 1528. The title of Erasmus’ late work, De civilitate morum puerilium libellus (A Handbook on Good Manners for Children), first published in 1530, describes its contents. It is a manual of good manners ranging from personal hygiene to conduct at play and etiquette in company. It aims to educate the child in civility and the acquisition of the self-discipline necessary to become an adult. It is not so much a handbook of etiquette as a way for the child to develop an appropriate disposition. Erasmus believed that good manners had moral as well as civic value and sowed in children the seeds of piety, the love of learning, and duty. As the Reformation gathered pace and his hopes for moderate reform became remote, Erasmus became more pessimistic about the power of education to effect change (Tracy 1972, p. 232). Certainly the humanitas he valued receded in a climate of dispute, but it did not disappear. Erasmus continued to believe in a conversational rather than a dogmatic style of instruction; in the power of an appealing argument to be more convincing than a logically correct but unfeeling one; and in the futility of placing too much trust in the powers of reason to clarify the human condition. Erasmus had learnt from Cicero the wisdom of examining both sides of an argument and then, perhaps, suspending judgment.
He did not waver from his view that bombastic language of any kind – scholastic, Lutheran, pseudo-Ciceronian – was less likely to engage an opponent than the conversational style he recommended across the range of his works and exemplified as well as he could in his life.

Cross-References
▶ Epistemology and Learning in Medieval Philosophy
▶ History of the Science of Learning

References
Callahan, V. W. (1978). The De Copia: The bounteous horn. In R. L. DeMolen (Ed.), Essays on the works of Erasmus. New Haven/London: Yale University Press.
Margolin, J. C. (1978). The method of “words and things” in Erasmus’s De Pueris Instituendis (1529) and Comenius’s Orbis sensualium pictus (1658). In R. L. DeMolen (Ed.), Essays on the works of Erasmus. New Haven/London: Yale University Press.
Sowards, J. K. (1988). Erasmus as a practical educational reformer. In J. S. Weiland & W. Frijhoff (Eds.), Erasmus of Rotterdam: The man and the scholar: Proceedings of the symposium held at the Erasmus University, Rotterdam, November 9–11, 1986. Leiden: E.J. Brill.
Thompson, C. R. (1978). Introduction. In Literary and educational writings. Collected works of Erasmus (Vol. 23). Toronto: University of Toronto Press.
Tracy, J. D. (1972). Erasmus: The growth of a mind. Genève: Librairie Droz.

Design Experiments
NORBERT M. SEEL
Department of Education, University of Freiburg, Freiburg, Germany

Synonyms
Design-based research

Definition
Design experiments can be considered as a special case of field experiments for improving external validity. Design experiments aim at particular forms of educational interventions that create novel conditions for learning and instruction. They can be used for replication studies, and thus contribute to causal inferences and a gradual increase in the ecological validity of experimental designs in the area of instruction.
Theoretical Background
The idea of design experiments has its origins in criticism of instructional research and its applicability in educational practice. According to Reeves (2006), the quality of published research in the field of instructional research is generally poor, and Stokes (1997) argues that instructional research contributes little to the basic theories that underpin teaching and therefore has little value for solving practical problems. Both authors reiterate a comment made by Cronbach in 1975 about instructional research in general. To bridge the gap between instructional research and practice, Brown (1992) introduced the concept of design experiments into the methodological discussion. Design experiments have their roots in traditional experimental and quasi-experimental research on learning and instruction, but go beyond the laboratory. Design experiments are developed explicitly as a means of formative research for testing and refining educational problems, solutions, and methods. Referring explicitly to experimental research in education, Brown (1992) focused on the central question of how to transfer the results of this research into the classroom. She envisioned a dynamic relationship between the classroom and laboratory research capable of bridging the gap between theory and practice. In terms of traditional (quasi-)experimental research, this question corresponds to the issue of external validity of experimentation. Brown’s central idea is developing theories of learning and instruction that work in practice. In order to create effective conditions for learning in the classroom, learning experiences should be engineered with an eye to the application of innovative techniques of learning and instruction. Therefore, engineering a working (or learning) environment can be seen as the heart of design experiments.
Engineering a working environment presupposes the realization of a particular learning theory and in turn contributes to the maintenance of this theory. Other constituents of design experiments have been illustrated by Brown (1992), as shown in Fig. 1.

Constituent 1: Contributions to Learning Theory
The fundamental basis of design experiments is a strong mutual relationship between the engineered working (learning) environment and its “contribution to learning theory.” On the one hand, a learning theory is the fundamental basis for the design of a learning environment, which may be considered as a realization of the central assumptions of the learning theory. On the other hand, the designed environment can make a strong contribution to the validation of the learning theory. In order to illustrate this constituent of design experiments, we can refer to examples from instructional research in the past 25 years, such as “anchored instruction” (Pellegrino and Brophy 2008), “goal-based scenarios” (Schank et al. 1993/94), and “model-based learning and instruction” (Seel 2003).

Constituent 2: Practical Feasibility and Dissemination
A central argument concerning the usability of design experiments is their feasibility and dissemination.

[Design Experiments. Fig. 1: The constituents of design experiments as described by Brown (1992, p. 142) – “engineering a working environment” at the center, linked to contributions to learning theory; input (classroom ethos, teacher/student as researcher, curriculum, technology, etc.); output (assessment of the right things, accountability); and practical feasibility (dissemination).]

Again, we can refer to the aforementioned examples of anchored instruction and goal-based scenarios. Indeed, the Cognition and Technology Group at Vanderbilt demonstrated over a period of 15 years with both the Jasper Woodbury project and STAR-Legacy that it is possible to transform instructional theory into educational practice (Pellegrino and Brophy 2008).
These examples demonstrate that good theories and their instructional realizations have a real chance of being situated in the classroom as well.

Constituent 3: Input Variables
Input variables of design experiments correspond to a large extent to the independent variables of instructional experimentation. Among other things, Brown (1992) focused on class ethos, curriculum, and technology as important input variables. Beyond these variables, a comprehensive meta-analysis by Wang et al. (1993) identified learner characteristics (i.e., personality traits and learning styles) as the most influential factor in instructional effectiveness. The second major factor influencing effective instruction has been identified as the quality of instructional practice. However, Brown (1992) does not mention individual differences in personality traits and learning styles as input variables. Rather, she emphasizes the idea that teachers and students could act as “researchers.” This idea corresponds to the approach of model-based learning and instruction as well as to ▶ Learning by Design.

Constituent 4: Output Variables
Brown (1992) argued for measuring the “right things,” such as critical thinking, explorative learning, and problem solving. Although this may be a complicated task, it is necessary to invest more time and effort in the development and validation of appropriate instruments for the assessment of learning outcomes, because the quality of measurement and assessment determines the quality of research and its contribution to theory and practice (Pellegrino et al. 2001). In this context, the phenomenon known as the testing effect should be mentioned. It states, for instance, that taking a memory test enhances later retention. Several studies lead to the conclusion that “testing is a powerful means of improving learning, not just assessing it” (Roediger and Karpicke 2006, p. 249).
Important Scientific Research and Open Questions
Developing and disseminating a new research methodology is a slow process that more often proceeds step-by-step than by sudden decisive changes of previous paradigms. This general observation also holds true with regard to design experiments, which compete against the paradigms of both quasi-experimental and action research on learning and instruction. In addition, because Brown did not describe design experiments in a systematic manner but rather in a personal narrative style, the follow-up approaches of design-based research focus on divergent aspects of design experiments. Thus, we can observe that the aspect of systematic experimentation is often ignored and action research is considered as the most relevant component of educational research. However, Brown’s statements indicate that she did not contrast design experiments and quasi-experimental studies but rather viewed them as two sides of the same coin: Experimental and quasi-experimental research aims at optimizing the internal validity of research, whereas design experiments aim at attaining a greater ecological validity of educational interventions. Proponents of design experiments and design-based research see their origins in applied research designed to solve practical problems. Actually, design experiments aim explicitly at testing theories about how changes in knowledge and skills take place as a result of teaching (in the classroom). At first glance, it seems that design experiments test hypotheses in a manner that is similar but not identical to traditional experimental research. However, traditional experimental research follows the falsification principle, whereas design experiments aim at “conjecture-driven tests” (Cobb et al. 2003) that correspond to the realization principle of constructivism.
This principle states that reality is not mapped onto theories, but rather particular realities are intentionally selected and created by experiments in order to confirm the theory (Seel 2009). With regard to the intended systematic experimentation in educational fields of interest, design experiments correspond to the methodology of field experiments aimed at investigating the effectiveness of educational interventions. Brown clearly defines design experiments as “intervention research designed to inform practice . . . that should be able to migrate from the experimental classroom to average classrooms operated by and for average students and teachers, supported by realistic technological and personal support” (p. 143). Whereas experimental studies in the laboratory are often quite artificial and their results frequently lack applicability to real-world problems, experimental field studies take place in the real world, so generalization is far less of a problem. Therefore, field experiments are said to have high external or ecological validity (Cook and Campbell 1979). Like laboratory experiments, field experiments generally randomize the sampling of subjects and their assignment into treatment and control groups and compare outcomes between these groups. However, full randomization is often unavailable in field settings because the researcher is not able to manipulate treatment conditions at the level of the individual participant. In addition, it is difficult to control the relevant variables in a field experiment, whereas these variables can be controlled rigorously in a laboratory experiment. Indeed, the systematic control of variables is a hallmark of the laboratory experiment, another being the manipulation of the independent variable(s). In fact, from a methodological perspective, the lack of control over certain elements is the greatest problem with field studies and design experiments.
For example, in free learning environments, the researcher has little or no control over the information sources the participants may be using, and even though participants may be advised to avoid these, there is no guarantee that they will comply. Another problem with field studies and design experiments is that they tend to be more time consuming and therefore more expensive and demanding than laboratory studies. Design experiments, and especially the related approaches of design-based research, also refer to action research as a reflective process of progressive problem solving conducted by individuals working with others in teams or as part of a “community of practice” to improve the way they address issues and solve problems. Although Brown (1992) and other proponents of design experiments do not refer to the work of Lewin, who coined the term “action research” in the 1940s, there are obvious parallels in the major lines of argumentation. In 1946, Lewin described action research as “a spiral of steps, each of which is composed of a circle of planning, action, and fact-finding about the result of the action.” Its practice can best be characterized as research for social management or social engineering. Nevertheless, inspired by several concrete examples provided by Brown, practitioners often associate design experiments with qualitative research methods (see, for example, Botha et al. 2005). However, Brown emphasized the application of both methodologies: A “traditional use of static pretests and posttests, combined with appropriate control data, provides us with clear evidence of the effectiveness of the intervention and is easy to share with school personnel as well as fellow scientists” (p. 157). In sum, Brown’s argumentation amounts to an appeal for combining qualitative and quantitative methodologies.
Design experiments correspond, to some extent, to the methodology of matched sampling for deriving causal effects. Matched sampling is often used to help assess the causal effect of some intervention or exposure, typically when randomized experiments are not available or cannot be conducted. Matched samples can arise in a situation in which the same variable or attribute is measured twice on each subject under different circumstances. In accordance with this methodology, Seel (2009) suggested using design experiments as a heuristic for both designing synthetic learning environments and conducting systematic research on them. More specifically, he argues that design experiments can be used for replication studies and thus contribute to causal inferences and ecological validity. The methodology of design experiments may provide an excellent framework for replication studies. Although the traditional view of replication entails the collection of new data – including data on additional cases or additional measures – statisticians and social scientists have suggested alternative replication strategies. One is to build replication into a study from the start. For example, a researcher can draw a sample large enough to allow random partitioning into two subsamples. Data from one subsample can then be used to check conclusions drawn on the basis of analyses of data from the other. Another approach requiring the intensive use of computing resources is to draw multiple random subsamples from already collected data and then use them to cross-validate results. Still another elaboration of the basic idea of replication is the general approach called meta-analysis. 
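The split-sample replication strategy described above can be sketched in a few lines (a purely hypothetical illustration with simulated data, not drawn from any study cited here): the sample is randomly partitioned into two subsamples, a conclusion is drawn from one, and the other serves as a built-in replication check.

```python
import random

random.seed(42)

# Hypothetical data: one outcome measurement per participant
# in a sample drawn large enough to allow partitioning.
sample = [random.gauss(50, 10) for _ in range(400)]

# Build replication into the study from the start: randomly partition
# the sample into two subsamples of equal size.
random.shuffle(sample)
half = len(sample) // 2
subsample_a, subsample_b = sample[:half], sample[half:]

def mean(xs):
    return sum(xs) / len(xs)

estimate_a = mean(subsample_a)  # conclusion drawn from subsample A
estimate_b = mean(subsample_b)  # independent check from subsample B

# If the effect estimated in A replicates, B should yield a close value.
print(round(estimate_a, 2), round(estimate_b, 2))
```

The same partitioning logic extends to the other strategy mentioned above: drawing multiple random subsamples from already collected data and cross-validating results across them.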
Cross-References ▶ Action Research on Learning ▶ Effects of Testing on Learning ▶ Experimental and Quasi-Experimental Designs for Research on Learning ▶ Field Experiments in Learning Research ▶ Field Research on Learning ▶ Learning by Design ▶ Longitudinal Learning Research ▶ Methodologies of Learning Research: Overview ▶ Teaching Experiments and Professional Learning References Botha, J., van der Westhuizen, D., & De Swardt, E. (2005). Towards appropriate methodologies to research interactive learning: Using a design experiment to assess a learning programme for complex thinking. International Journal of Education and Development using Information and Communication Technology, 1(2), 105–117. Brown, A. L. (1992). Design experiments: Theoretical and methodological challenges in creating complex interventions in classroom settings. The Journal of the Learning Sciences, 2(2), 141–178. Cobb, P., Confrey, J., DiSessa, A., Lehrer, R., & Schauble, L. (2003). Design experiments in educational research. Educational Researcher, 32, 9–13. Cook, T. D., & Campbell, D. T. (1979). Quasi-experimentation: Design and analysis issues for field settings. Chicago: Rand McNally. Pellegrino, J. W., & Brophy, S. (2008). From cognitive theory to instructional practice: Technology and the evolution of anchored instruction. In D. Ifenthaler, P. Pirnay-Dummer, & J. M. Spector (Eds.), Understanding models for learning and instruction: Essays in honor of Norbert M. Seel (pp. 277–303). New York: Springer. Pellegrino, J., Chudowsky, N., & Glaser, R. (Eds.). (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy Press. Reeves, T. C. (2006). Design research from the technology perspective. In J. V. Akker, K. Gravemeijer, S. McKenney, & N. Nieveen (Eds.), Educational design research (pp. 86–109). London: Routledge. Roediger, H. L., III, & Karpicke, J. D. (2006). Test-enhanced learning: Taking memory tests improves long-term retention.
Psychological Science, 17(3), 249–255. Schank, R. C., Fano, A., Bell, B., & Jona, M. (1993/94). The design of goal-based scenarios. The Journal of the Learning Sciences, 3(4), 305–345. Seel, N. M. (2003). Model-centered learning and instruction. Technology, Instruction, Cognition, and Learning, 1(1), 59–85. Seel, N. M. (2009). Bonjour Tristesse: Why don’t we research as we have been taught? Methodological considerations on simulation-based modelling. Technology, Instruction, Cognition and Learning, 6(3), 151–176. Stokes, D. E. (1997). Pasteur’s quadrant: Basic science and technological innovation. Washington, DC: Brookings Institution Press. Wang, M. C., Haertel, G. D., & Walberg, H. J. (1993). Toward a knowledge base for school learning. Review of Educational Research, 63(3), 249–294. Design of Computer Games for Education ▶ Designing Educational Computer Games Design of Learning Environments DIRK IFENTHALER Institut für Erziehungswissenschaft, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany Synonyms Instructional design; Instructional systems design; Instructional technology Definition The design of learning environments is the systematic analysis, planning, development, implementation, and evaluation of physical or virtual settings in which learning takes place. Theoretical Background Learning environments are physical or virtual settings in which learning takes place. Learning theory provides the foundation for the design of learning environments. However, there is no simple recipe for designing learning environments (Bransford et al. 2000). Additionally, the design of learning environments will always change in alignment with changing educational goals. Hence, the design of learning environments in the 1800s or 1900s was extremely different from twenty-first-century design of learning environments. Generally, the design of learning environments addresses three simple questions: What is taught? How is it taught?
How is it assessed? Yet, the design of learning environments is not simply a matter of asking the three questions stated above. Rather, it includes systematic analysis, planning, development, implementation, and evaluation phases (Gagné 1965; Merrill 2007). The analysis phase includes needs analysis, subject matter content analysis, and job or task analysis. The design phase includes the planning of the arrangement of the content of the instruction. The development phase results in the tasks and materials that are ready for instruction. The implementation phase includes the scheduling of instruction, training of instructors, preparing timetables, and preparing evaluation parts. The evaluation phase includes various forms of formative and summative assessment. The model for the design of learning environments described above presents a general heuristic. However, it is also often criticized as being too narrow and inflexible (Dijkstra and Leemkuil 2008). Bransford and colleagues (2000) differentiate four perspectives on the design of learning environments: learner-centered, knowledge-centered, assessment-centered, and community-centered learning environments. The design of learner-centered learning environments needs to take notice of learners’ knowledge, skills, attitudes, beliefs, and cultural practices, as well as including instructors and/or virtual tutors who are aware of the learners’ characteristics (see Bransford et al. 2000). The design of knowledge-centered learning environments highlights the prior knowledge of learners. Accordingly, learners’ preconceptions about a specific phenomenon in question are vitally important. Additionally, the design of knowledge-centered learning environments includes authentic problem situations for learners. The design of assessment-centered learning environments aims at combining assessment of content knowledge and of the skills necessary for specific tasks or problems.
Providing feedback is the main objective in designing assessment-centered learning environments. Feedback can be any type of information provided to learners. Moreover, feedback is considered a fundamental component for supporting and regulating learning processes. The nature of feedback plays a critical role in learning and instruction, especially in technology-based and self-regulated learning environments. The design of community-centered learning environments combines several aspects of community, including classrooms, schools, universities, workplaces, homes, cities, states, countries, and the virtual world. Hence, a sense of community is involved in the design of community-centered learning environments, where instructors and learners share their understanding of norms and values. Currently, the design of learning environments involves the idea of using computers for logical reasoning and intelligent agents – an old dream of artificial intelligence: Such applications will be designed to execute operations of logical thinking using a multitude of rules which express logical relationships between terms and data in the Web. In view of the countless unfulfilled promises of artificial intelligence in the 1980s and 1990s, however, one would be well advised to remain skeptical on this point. Nevertheless, the integration of artificial intelligence into the Internet is a goal of the development of Web 3.0. Should this vision actually, as has been prophesied, be realized within the next 10 years, it would bring about another Internet revolution and open up new horizons for the design of learning environments. The first step in this process might be to enable users of the Web to modify learning environments and information resources and create their own structures.
In this way, Web 3.0 could provide the basis for free or personal learning environments, which have been regarded by educational theorists as the quintessential form of learning environment for decades (Morris 2011). Accordingly, the definition of the design of learning environments needs to be adapted again and again to the current results of instructional research and to technological progress. Important Scientific Research and Open Questions A wide range of research has been conducted in the field of learning and instruction, which has motivated instructional designers to redefine the principles of teaching and learning (Bransford et al. 2000). However, the days of preprogrammed learning environments, in which the learner – as in the classical paradigm of programmed instruction – is viewed more as an audience than as an active constructor, are numbered. In the near future, learners will be the constructors of their own environments and create the structures of the content units on their own (Morris 2011). Web-based systems designed to optimize or supplement learning environments are cropping up everywhere. The rapid pace of these technological developments makes it nearly impossible to integrate them into comprehensive systems. Therefore, so-called personal learning systems (PLS) are being designed to enable learners to select various Web applications individually to meet specific learning goals (Ifenthaler 2010; Seel and Ifenthaler 2009). The requirements and features for designing PLS are: Portal: Rather than an isolated island, a PLS is an open portal to the Internet which is connected with various applications and collects and structures information from other sources. The content can be created by both learners and teachers using simple authoring tools. Potential for integration: Information is offered in standard formats which learners can subscribe to and synchronize with their desktop applications.
In this way, the learning environment is integrated into the user’s daily working environment and connected to it. Neutrality of tools: Tasks in the learning environment are designed in such a way that the learners themselves can choose which application they wish to use to work on them. The portal can make recommendations and provide support. The media competence acquired in this manner can also be useful in everyday life. Symbiosis: Instead of creating new spaces, a PLS uses existing resources. The portal works with existing free social networks, wikis, blogs, etc. All in all, personal learning systems require increased personal responsibility from both the learner and the instructor. At the same time, however, they offer more freedom for individual learning. Yet no empirical studies are available that account for the efficiency of PLS. Hence, much research is needed in the near future to investigate the strengths and weaknesses of these newly designed learning environments. It is of course difficult to predict new developments or trends in the domain of the design of learning environments with any kind of precision, but one thing is certain: They will continue to be dictated to a great extent by the general development of information and communication technology. Cross-References ▶ Blended Learning ▶ Computer-Based Learning ▶ e-Learning ▶ Interactive Learning Environments ▶ Learning Environment ▶ Online Learning References Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds.). (2000). How people learn: Brain, mind, experience, and school. Washington, DC: National Academy Press. Dijkstra, S., & Leemkuil, H. (2008). Developments in the design of instruction: From simple models to complex electronic learning environments. In D. Ifenthaler, P. Pirnay-Dummer, & J. M. Spector (Eds.), Understanding models for learning and instruction: Essays in honor of Norbert M. Seel (pp. 189–210). New York: Springer. Gagné, R. M. (1965).
The conditions of learning. New York: Holt, Rinehart, and Winston. Ifenthaler, D. (2010). Learning and instruction in the digital age. In J. M. Spector, D. Ifenthaler, P. Isaías, Kinshuk, & D. G. Sampson (Eds.), Learning and instruction in the digital age: Making a difference through cognitive approaches, technology-facilitated collaboration and assessment, and personalized communications (pp. 3–10). New York: Springer. Merrill, M. D. (2007). The future of instructional design: The proper study of instructional design. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology (pp. 336–341). Upper Saddle River: Pearson Education, Inc. Morris, R. D. (2011). Web 3.0: Implications for online learning. TechTrends, 55(1), 42–46. Seel, N. M., & Ifenthaler, D. (2009). Online lernen und lehren. München: Ernst Reinhardt. Design-Based Research ▶ Design Experiments Designing Educational Computer Games CYRIL BROM1, VIT SISLER2 1 Department of Software and Computer Science Education, Charles University in Prague, Prague 1, Czech Republic 2 Institute of Information Studies and Librarianship, Charles University in Prague, Prague 5, Czech Republic Synonyms Design of computer games for education Definition The design of any product is one part of the product’s development process. The design of an educational computer game (ECG) is a specific extension of ▶ instructional design in that it employs a computer game as a means for accomplishing part of the educational objective. Conventional instructional design is a process of (1) identifying learners’ needs; (2) identifying pertinent useful knowledge or skills; (3) analyzing possible means of achieving this knowledge or these skills, considering the environmental context; (4) designing an appropriate educational process based on the results of this analysis; and (5) specifying assessments of the target knowledge or skills.
In ECG design, possible games or game genres suitable for achieving the defined learning objectives are analyzed in Step 3, a particular game is designed in Step 4, and it is possibly also considered as a means in Step 5. Furthermore, the design of the particular educational game has to follow general game design principles. Therefore – on a conceptual level – the rules, game play, and content of the game are specified; on a technological level, the software architecture and the game’s main algorithms are described. On the conceptual level, the goal of successful game design is the creation of engaging and meaningful ▶ play; on the technological level, it is the creation of functioning and smoothly operating software. The exact ways of verifying the game (i.e., checking whether the software works) and validating it (i.e., determining whether the game achieves its intended purpose, which is to help fulfill the educational objective) should also be specified in the design phase. Depending on the educational objectives, supplementary non-gaming teaching activities must also be designed, usually in Steps 3 and 4. Theoretical Background The field of educational computer games is very young, and most research questions have not yet been answered, including the general question of how to design effective ECGs. However, a synthesis of the literature on general instructional design (e.g., Reif 2008), game design (Salen and Zimmerman 2005), recent findings on ECGs (e.g., Sisler and Brom 2008; Klopfer 2008), and reviews of past research on ▶ educational games and ▶ simulations, including non-computer ones (e.g., Hays 2005; Gredler 2004), begins to reveal the following design framework: 1. A target group of learners and teachers should be identified in advance, and the educational context (e.g., requirements posed by the duration of fixed lessons in secondary education) and accepted educational practice (e.g., national curricula) should be known. 2.
Learning objectives must be identified, clearly formulated, and set out in accordance with accepted educational practice before development starts. Learning objectives should be formulated in operational terms, i.e., in terms of concrete, observable abilities a student will acquire after playing the game and completing supplementary non-game activities. In other words, it should be known what the student will actually be able to do after finishing the game. For instance, with respect to second language acquisition, it should be specified that students should be able “to translate texts of a particular complexity,” as opposed to stating that they will “understand 500 particular words.” 3. After learning objectives are formulated, a detailed analysis of the trade-offs among alternative approaches should be conducted before the ECG approach is possibly chosen. The game genre must be chosen to meet the educational goals and to fit the subject. Will the game be a multiplayer one? Will it be an adventure-style game or a puzzle? An important design choice is whether the learning experience will be delivered by the game directly, or whether the learning materials and the game will be two separate systems and the game will be played merely as a reward. The latter model is an example of the behavioristic “▶ drill and practice” approach. While it is often criticized in the ECG literature, it may actually work well for the acquisition of facts, e.g., foreign-word rehearsal. Yet it seems that, for generating a deeper understanding of certain key principles of given topics and for the acquisition of high-level skills, complex games and simulations which allow open-ended exploration and integrate learning materials with the play are more useful. Nevertheless, it has to be emphasized that the empirical research on the effectiveness of both types of games is far from conclusive. 4. The development of any computer game is an interdisciplinary task.
Game designers, graphic artists, programmers, testers, etc., must be involved. Additionally, when developing an ECG, educationalists, content developers, teachers, writers of supplementary educational texts, etc., must be on the team. The involvement of teachers in the early design phases is of critical importance. 5. On the conceptual level, the goal of successful game design is the creation of meaningful play. Meaningful play in a game emerges from a clear relationship between player action and the game’s system outcome. The result of a game action has to be communicated to the player in a perceivable way. Simultaneously, the relationship between action and outcome should be integrated into the game’s larger context: i.e., the players should know what outcomes their actions will result in as well as which actions to take in order to win the game. 6. Tentative game rules and game play must be designed before the software development starts, and a prototype of the game should be created. This prototype should be evaluated on target users, and the game specification should be further refined accordingly. Similarly, game content (e.g., graphics, supplementary texts, etc.) should be evaluated as soon as possible in the development phase and adjusted accordingly. Continuous reevaluations are vital. 7. The specifics of the target group must be considered; e.g., in schools, an ECG will usually have to be appealing to both males and females. 8. The game should be challenging enough but not too hard or complex to play, since the latter might result in students’ frustration. The dynamics of the game should be both understandable to the target group of players and engaging. Winning should be based on knowledge or skills, not random factors. Sanctioning students for making wrong choices must be balanced. In multiplayer games, the “rich get richer while the poor get poorer” principle should be avoided.
9. The complexity of the game’s user interface must be adjusted to non-players (who may be teachers as well as learners). Ideally, the players should become gradually familiar with the game, its rules, and its interface directly through playing the game, rather than being forced to resort to manuals or tutorials. In other words, the game has to be integrated with instructional objectives and other instructions. Technological and personal limitations should be considered. For instance, will the game work on the school hardware? Would fixed lesson durations pose barriers to its usage? If any of these questions poses a problem, alternative solutions should be considered. For example, it is possible to design a game as a supplementary activity for home play or to consider PDA technology. Client-server applications may help to remove the need for end-user installations. It might be necessary to simplify the graphic content to fit the graphic cards and internet connections available, etc. 10. No educational game can be viewed as a stand-alone activity. Supplementary collaborative activities, educational materials, and teaching methodologies need to be prepared in advance but in cooperation with end users. Further improvements in the course of development by means of evaluation studies are vital. 11. Teachers, and often also students, should know the educational objectives in advance. Their experience with the game should be visibly relevant to these objectives. Research has shown that learners should often be provided with debriefings and feedback that explain how their experiences with the ECG help them to fulfill these objectives. Simultaneously, instructional courses and long-term technical and methodological support for teachers are essential. Practical case studies have highlighted the necessity of long-term programs facilitating “teaching of the teachers” for the successful implementation of ECGs on a broader scale. 12.
The exact ways of verifying and validating the game should be specified early in the design phase. The evaluation of the game’s learning effect is crucial. Yet the preparation of such an evaluation and its methodology poses some difficulties, given the relative novelty of the ECG field. Typically, such ▶ evaluation should employ qualitative as well as quantitative measures. In the early phases of evaluation, it is necessary to answer the questions: has the game been accepted by the target audience at all, and have technical problems been eliminated? Ultimately, it is often an advantage to run a controlled study in which the experimental group playing the game is compared to one or more control groups receiving other forms of instruction. Such a controlled study must be designed based on hypotheses the development team aims to support or refute (e.g., will the knowledge acquired be retained longer by the experimental group?). A study often requires various instruments, including educational materials used by the control group instead of the ECG; pretests, posttests, and delayed posttests; and in-depth interviews with teachers and learners. Important Scientific Research and Open Questions The idea of using computer games in education is a topic that has been discussed at length over the past three decades. A number of theoretical treatises and anecdotal findings praising ECGs exist in the field of game-based learning, but the number of sound empirical studies is relatively limited, and many of them are plagued by methodological problems (see, e.g., Hays 2005; Gredler 2004; Klopfer 2008; Sisler and Brom 2008). The empirical findings can perhaps best be summarized as follows: the games are usually at least as effective as other instructional methods in terms of cognitive learning outcomes, and they are sometimes more motivating, but they can also be detrimental to learning if they are used in the wrong way.
Because the development of an ECG is a costly process, it is of paramount importance to identify which features, if any, can make ECGs more effective than cheaper instructional methods. To a large degree, potential problems with ECGs can be tackled by careful ECG design, the preparation of additional learning materials and supplementary activities, and meaningful support for teachers. Some unresolved questions follow: 1. What kind of supplementary activities should take place in relation to the game? How should these activities be interconnected with the game? 2. How can the ▶ transfer of skills learned via the game to the real-world context be promoted? 3. The integration of an ECG into the formal schooling system is sometimes problematic. What technologies and methods make games most easily usable in schools? 4. The development of computer graphics is expensive. Is there a trade-off between the level of graphic expressiveness and learning outcomes? What is the “minimal amount of graphics” ensuring acceptable learning outcomes? 5. In which contexts are competitive principles an advantage, and in which contexts are they detrimental? 6. Do the answers to the abovementioned questions differ for different age groups and by gender? These are not new questions: some of them had already been asked three decades ago (e.g., Malone 1981). However, the answers to most of the questions are still limited. Yet some points are commonly agreed upon, most notably: 1. The game’s story should be meaningful to the target audience. Elements enhancing the motivation of the students (e.g., an appropriate level of challenge and control) should be included in the game. 2. The game’s rules should be easily understandable for the target audience. Oftentimes, the cognitive load induced by the game has to be reduced in comparison to mainstream computer games. 3.
The goals set out in the game should be known to the teachers and often also to the students (compare this with ▶ experiential learning) and should be visibly relevant to the educational objectives. 4. The game’s tasks should be gradually challenging enough (to avoid boredom) but not overly hard (to avoid frustration). 5. The game should not be a stand-alone activity. Guidance has to be provided to both students and teachers, and the game has to be integrated with other educational activities. Cross-References ▶ Computer-Based Learning Tools ▶ Design of Learning Environments ▶ Drill and Practice in Learning (and Beyond) ▶ Evaluation of Student Progress in Learning ▶ Experiential Learning Theory ▶ Games-Based Learning ▶ Play and its Role in Learning ▶ Play, Exploration, and Learning ▶ Simulation-Based Learning in Microworlds ▶ Transfer of Learning ▶ Virtual Reality Learning Environments References Gredler, M. E. (2004). Games and simulations and their relationships to learning. In D. H. Jonassen (Ed.), Handbook of research on educational communications and technology (pp. 571–581). Mahwah: Lawrence Erlbaum. Hays, R. T. (2005). The effectiveness of instructional games: A literature review and discussion. Technical Report 2005-004. Naval Air Warfare Center Training Systems Division. Klopfer, E. (2008). Augmented learning: Research and design of mobile educational games. Cambridge, MA: MIT Press. Malone, T. W. (1981). Toward a theory of intrinsically motivating instruction. Cognitive Science, 4, 333–369. Reif, F. (2008). Applying cognitive science to education: Thinking and learning in scientific and other complex domains. Cambridge, MA: MIT Press. Salen, K., & Zimmerman, E. (2005). Game design and meaningful play. In J. Raessens & J. Goldstein (Eds.), Handbook of computer game studies (pp. 59–79). Cambridge, MA: MIT Press. Sisler, V., & Brom, C. (2008). Designing an educational game: Case study of “Europe 2045”. In Z. Pan et al. (Eds.), Transactions on edutainment (pp. 1–16). Berlin: Springer. Designing Lessons ▶ Didactics, Didactic Models and Learning Desire ▶ Goals and Goalsetting: Prevention and Treatment of Depression ▶ Motivational Variables in Learning Desuggestopedia KAZ HAGIWARA School of Languages and Linguistics, Griffith University, Nathan, QLD, Australia Synonyms Desuggestopedia; Reserves capacity communicative; Reservopedia Definition Suggestopedia is a teaching method that was developed in Bulgaria from the 1960s to the 1990s. The method was derived from Suggestology, a medical study of suggestion in human communications and of its role in the development of personality. Suggestopedia was developed with the aim of establishing creative and highly efficient learning in accordance with the natural learning style of the brain. In doing so, it attempts to liberate the learners from limiting social norms that have cumulatively been created in their personalities by experiencing negative suggestions in their social life. A very careful approach is taken to eliminate potential causes of harmful hypnosis. Also, mental hygiene and brain health are well considered in the method. The method requires a teacher and a learning group. It does not work in a self-learning environment. Theoretical Background Theoretical development of the method took place at the National Institute of Suggestology led by Georgi Lozanov (1926–), medical doctor and professor of psychiatry, psychotherapy, and brain physiology at the University of Sofia. Evelina Gateva (1939–1997), Ph.D. in education, also contributed significantly to the establishment of the current form of the method. The method was declared final in 1994. At an early stage of development, the method was known to the world as an efficient memorization technique with which one can recognize the meanings of 1,000 foreign words in one session.
Later, it was developed into more practical forms that are applicable not only to language teaching but also to the general curriculum in schools. The effectiveness of the method was tested and reported to UNESCO in 1978, and UNESCO also formed its own working party in 1979 to examine the method; it produced a final report in 1980 that recommended that the method be widely applied to areas such as the elimination of illiteracy. The method’s success inspired other countries to develop their own “accelerated learning” methods. “Suggestopedia” is a combination of the Latin-derived word “suggestion” and the Greek-derived word “pedagogy.” Lozanov has called this method by three other names to distinguish it from other similar methods. “Desuggestopedia” has been used since 1994 to indicate that this method places importance on “desuggestive suggestion.” He has also called it “Reserves Capacity Communicative (Re-Ca-Co)” since 1998 to indicate that the method is conscious of the potential of the “reserves of mind” and accesses their capacity through communication. Later, in 2006, Lozanov started to call it “Reservopedia” to imply that his method is part of the yet-to-be-established science of “Reservology,” the study of the “reserves of mind.” Suggestology In developing Suggestopedia, Lozanov attempted to apply the knowledge of Suggestology to the method of education. Suggestology consists of (1) research to discover the nature of suggestion and (2) the construction of a model of brain function as a receptor/processor of suggestion. Lozanov’s aim was to create a method with which the brain can work most healthily and efficiently with the help of the properties of suggestion. As a by-product of psychotherapy, mental hygiene was always an issue for Suggestopedia. In Suggestopedia, the promotion of mental hygiene did not contradict the promotion of brain function efficiency (Lozanov 1978, p. 222).
Nature of Suggestion Following is a short summary of part of the outcomes of Suggestology concerning the nature of suggestion as applied in Suggestopedia. 1. A personality, consciously or unconsciously, constantly and holistically influences, and is influenced by, general suggestions. Such suggestions can create a belief in social norms through the experience of following social orders and “common sense.” Such beliefs in social norms can limit or inhibit the person’s ability. 2. Many phenomena that used to be thought of as unique to people who are in hypnosis, masters of ascetic training, people with psychic disorders, or natural geniuses can appear in ordinary people exposed to general suggestions. Such phenomena include hypermnesia and control over the autonomic functions of the body. 3. Some phenomena that never appear in a hypnotized person can appear in an ordinary person through a particular type of general suggestion. Such phenomena include the promotion of creativity, the inhibition of suggestibility, a lowered risk of being hypnotized, and the minimization or removal of the influences of social suggestive norms. Suggestology calls such suggestions “desuggestive suggestions.” 4. The mental state of a person is always changing. Also, different types of personality can coexist in one person. The forms of reaction to suggestive stimuli are different and unique to each person. 5. Every person has a mental protection system to prevent unwanted influences of suggestions. The protection system consists of three “suggestive barriers”: (1) a logical (or reasoning) barrier that eliminates unreasonable suggestions to maintain logical consistency, (2) an ethical barrier that eliminates immorality by checking against personal morals, and (3) an affective barrier that intuitively stops information to maintain emotional stability. The states and heights of these barriers are diverse and unique to each person.
They are also constantly changing their forms over time. 6. Nevertheless, there are general tendencies in suggestibility: (1) suggestions from more prestigious sources are more acceptable, (2) suggestions given in more trusting human relationships are more acceptable, and (3) suggestions given in less aggressive and less defensive communication are more acceptable. Brain Model in Suggestology The following are some of the characteristics of the brain/mental model in Suggestology: 1. The brain inherently desires to learn and feels happiness when it learns. 2. It is natural for the brain to memorize all information at once, regardless of whether the information is given to the central or the peripheral area of consciousness. It is also natural for the acquired information to be stored in the brain for a long time in near-perfect form. In this respect, the general term “good memory” simply means a strong ability to “recall.” 3. Mental activities in the brain occur on two conceptual planes: (1) the conscious plane and (2) the paraconscious plane. Both planes coexist in parallel and constantly exchange information with each other. Conscious mental activity requires support from a large mass of information stored in the paraconscious area. Brain activities on the paraconscious level are more or less automatic, emotional, and unlimited. When the brain is required to perform intensive conscious mental activities without sufficient reserves in the paraconscious area, the brain becomes frustrated. 4. Information given to one part of the brain is immediately shared by other parts of the brain. It is impossible to stop the information from spreading around the brain. Therefore, for example, the brain is not good at separating logic from emotion; it is good at association. 5. In general, to some extent, the brain likes changes and surprises. It does not like mechanical repetition or very predictable consequences. At the same time, the brain likes a safe and consistent environment. 6.
The brain tends to create multiple personalities. Many personalities appear from time to time in many aspects of the life of a normal, healthy person. 7. The brain as an information processor has an integrated structure of holographic and hierarchic functions. In such a structure, while each element of the brain can represent the whole brain system, it processes certain types of information in certain ways on demand of the integrated core personality. In accordance with the above-mentioned brain model, Suggestopedia established a unique teaching-learning style. First it creates a great reserve of information in the learner’s “paraconscious” area, and then it activates the acquired information in a cheerful environment where the learners enjoy games, songs, music, arts, changes, surprises, and their different selves. The teacher provides the learners with a safe, protective, and encouraging environment that allows them to make mistakes, express themselves, and even hide behind others. An integrated dynamic balance of emotion, logic, the part, and the whole is well maintained throughout the course. Elimination of Hypnosis Suggestopedia is often believed to be a method that uses the state of hypnosis, with techniques and activities designed to induce it. However, these are misunderstandings. Rather, one central mission of Suggestopedia from the beginning of its development has been to eliminate all possible elements that induce hypnosis from all educational activities (Lozanov 1978, p. 269). Lozanov, a hypnotist himself, strongly opposes using hypnosis in nonclinical practice. He shares the apprehension of other hypnosis researchers such as Weitzenhoffer (1989) that hypnosis not only prevents a person from being spontaneous, but can also cause serious mental and physiological disorders even after awakening. According to Lozanov, the state of hypnosis is harmful for the personality because: 1. A hypnotized person cannot make a decision. 2.
A hypnotized person cannot think critically. 3. A hypnotized person cannot show creativity. 4. Once hypnotized, the same person can be more easily hypnotized again. 5. Nonclinical hypnosis can include contaminated suggestions that can later cause serious psychological and physiological disorders in the person. For the reasons mentioned above, a genuine Suggestopedia course NEVER includes such activities (Lozanov 2009) as:
● Instruction or training for learners to reach a state of relaxation
● Visualization or meditation
● Breathing exercises
● Use of biofeedback and similar techniques that aim to obtain a specific brain wave
● Use of commanding and ordering expressions
● Simple repetition in the task
● Slow and monotonous music
● Repetitive use of recorded voices
Lozanov believes that such activities can induce the learners into a harmful state of hypnosis. Lozanov also points out the possibility that teachers in conventional education can, without knowing it, induce their students into hypnosis. Course Structure A Suggestopedia course has a basic activity cycle of (1) “introduction,” (2) “concert sessions,” (3) “elaboration,” and (4) “summary.” The cycle is repeated from one chapter to another, but care is taken not just to copy and repeat the cycle, so that the whole course forms a dynamic, artistic, up-streaming spiral. All elements in the cycle are carefully allocated to follow the theory and the core concept of the method. Approximate time durations of each part in the course structure are as follows:
First day only – Introduction: 30–45 min; Active concert session: 60 min; Passive concert session: 30 min
Rest of the course – Elaboration, summary, introduction: 630 min (14 × 45 min); Active concert session: 60 min; Passive concert session: 30 min
A typical day of an intensive course consists of two 90-min sessions with a 30-min break inserted in between. One chapter of the course book usually takes 4–5 days. The duration of the course, the cycle, and the activity parts can be flexibly decided according to the nature of the learning group. An orientation is held prior to the course. In the orientation, learners are advised not to worry if they do not understand the teacher’s talk in the target language, but just to enjoy what is happening. Learners are also advised that no study at home is necessary. Introduction The first day introduction is a prelude to the whole course. It can be considered a kind of stage performance in which the teacher, as a performing artist, involves all learners in his/her communication in order to quickly immerse them in the target language world (Hagiwara 1993). In this session, the course is given the key atmosphere in which the learners do not have to worry about making mistakes, and their creativity is always welcomed. The introduction also gives learners the direction, the goal, and the reason to learn the language in implicit, nonverbal, or “desuggestive” ways. Taking the opportunity to make a strong impression during the first encounter of the course, some important elements of the target language are introduced to the learners. Introductions to other chapters are given either within the “elaboration” or as an independent session, depending on the structure of each course book. Concert Sessions In the “concert reading session” (or “concert session”), the teacher reads the textbook with selected background music (Lozanov and Gateva 1988) in order to expose the learners to a mass volume of target language information. The session consists of two readings, the “active session” and the “passive session,” and the same part is read through in each reading. Whereas music pieces with relatively high dynamism are selected for the “active” readings, cheerful and lively but less dynamic pieces are selected for the “passive” readings.
In the “active reading,” the teacher reads the passage dynamically and slowly, so that the intonation harmonizes with the rhythm and melody of the music pieces, which are selected from symphonies and concertos of the Classical and Romantic periods. In the “passive reading,” the teacher reads the passage at the normal speed and intonation used in daily life, with music selected from the Baroque period. In this way, the “active reading” can associate words with their meaning, and the “passive reading” can associate words with the target language’s phonetic system. The background music is used to keep learners from drowsiness and to stimulate their creativity. The music is also expected to have an off-focusing effect that turns the learner’s attention away from the amount of vocabulary, so that the teacher can send a large volume of language information to the learner’s paraconscious area without creating anxiety, a mental block of the “suggestive barrier” in the learner’s mind. The main purpose of the “concert session” is to create a mass reserve of information that becomes a firm foundation for the learner to later process the new language. Therefore, not all the linguistic information given in the “concert session” is a teaching target. However, in Suggestopedia, the reserves given in the “concert session” play a significant role in promoting healthy brain function. Usually learners are exposed to more than 800 unique words (lexicon) in the first-day “concert session.” Elaboration The “elaboration” starts on the day following the concert sessions. “Elaboration” is a series of sessions in which learners read through the chapter that was read in the prior concert sessions. The learners are taught the “four macro language skills” through the various activities of “elaboration.” Superficially, it looks similar to an ordinary communicative language class that gives learners such tasks as oral practice, reading comprehension, grammar introduction, grammar tasks, games, songs, role play, and storytelling. However, in Suggestopedia, the concept employed by the teacher when preparing classroom tasks is very different from conventional language instruction. The Suggestopedia teacher prepares the classroom so that plentiful and diverse information can reach the learner’s “paraconscious area” through “peripheral perceptions.” The teacher considers the use of focusing/off-focusing techniques so that the central learning target does not unnecessarily catch the learner’s attention. Elaboration often includes the introduction to the next chapter. Summary At the end of the chapter, learners are encouraged to try a task in which they spontaneously use the chapter’s learning content. This activity is often called “summary.” The summary is often incorporated into the elaboration part. The teacher asks learners to do the summary task; however, the task is never enforced, and the teacher can wait until the learner is ready. Important Scientific Research and Open Questions Most basic research on Suggestology was conducted in the 1960s and 1970s. In particular, intensive experimental research stopped after 1980, when the National Research Institute for Suggestology was closed as a result of political changes in communist Bulgaria. Suggestology now needs reinterpretation in the latest terms of such fields of science as modern neurology, brain science, cognitive science, and the study of semiosis. For example, key concepts such as “reserves of mind,” “paraconscious,” and “desuggestive suggestion” may need to be explained within the framework of modern science with respect to Lozanov’s definitions. Suggestopedia as a philosophy of education can be applied to other teaching/learning methods. Suggestopedia is a complete system, however, and its parts and elements are not separately applicable to other methods. Nevertheless, it is possible for other methods to apply the principles of Suggestopedia, for example, the use of “desuggestive suggestions” to maintain learners’ motivation toward the subject, or maintaining learners’ mental hygiene by eliminating hypnotic elements from the classroom. Such applications of the Suggestopedic principles can be explained in the context of the modern terminology of education, such as “authenticity,” “learned optimism,” “collaborative learning,” and “student-centered learning.” As recommended in the 1980 UNESCO report, Suggestopedia needs the establishment of a comprehensive system to train its teachers and maintain teaching quality. A concrete testing system is most desirable. The testing system should at least be able to measure proficiency attainment and learning speed. For example, language proficiency rating systems such as the OPI (US ACTFL), ISLPR (Australia), and CEFR (EU) could be explored in association with the principles of Suggestopedia. Cross-References ▶ Authenticity in Learning Activities and Settings ▶ Cognitive and Affective Learning Strategies ▶ Cognitive Learning ▶ Collaborative Learning ▶ Developmental Cognitive Neuroscience and Learning ▶ Music Therapy ▶ Neuropsychology of Learning ▶ Semiotics and Learning ▶ Student-Centered Learning ▶ Superlearning References Hagiwara, K. (1993). An invitation to suggestopedia. The Language Teacher, 17, 7–12. Lozanov, G. (1978). Suggestology and outlines of suggestopedy. New York: Gordon and Breach Science Publishers. Lozanov, G. (2009). Suggestopedia/Reservopedia: Theory and practice of the liberating-stimulating pedagogy on the level of the hidden reserves of the human mind. Sofia: Sofia University Publishing House. Lozanov, G., & Gateva, E. (1988). The foreign language teacher’s suggestopedic manual.
New York: Gordon and Breach Science Publishers. Weitzenhoffer, A. M. (1989). The practice of hypnotism (Vols. I–II). New York: Wiley. Further Reading UNESCO (1980). Suggestopedia: Expert Working Group on Suggestology and Suggestopedy. Rapport final. UNESCO. http://unesdoc.unesco.org/ulis/cgi-bin/ulis.pl?database=unesbib&text_p=phrase+words&text=Suggestopedia&ti_p=inc&ti=&au=& ca_p=inc&ca=&kw_p=inc&kw=&la=&me_p=inc&me=&da_p= %3D&da=&se_p=inc&se_p=inc&se=&ib=&no=&dt=&tie=AND. Accessed 6 May 2011. Detour Problems Problems in which the subject’s goal is to travel from point A to point B and a direct route cannot be used. Deutero-learning WOLFRAM LUTTERER Department of Education, University of Freiburg, Freiburg, Germany Synonyms Context of learning; Learning II; Learning to learn; Set learning; Transfer of learning Definition The word deutero comes from the Greek word deuteros, which means second, next, or farther from. The term deutero-learning was coined in 1942 by the Anglo-American anthropologist Gregory Bateson (1904–1980). Bateson distinguishes between two levels of learning, proto- and deutero-learning. These levels of learning are simultaneous. The term deutero-learning describes the context in which (proto-)learning processes occur. You “learn” not only what you are supposed to learn in the common-sense understanding; riding a bike, learning a language, or repairing a car are all examples of proto-learning. At the same time that you are learning these things, you are also learning something about the world and about how things occur. You develop habits. This is, at least partly, a result of deutero-learning. In a later and more sophisticated version of his learning theory, Bateson distinguishes five levels of learning, Learning 0 to Learning IV. Deutero-learning here is regarded as Learning II, and proto-learning as Learning I.
Learning II is formally defined as “change in the process of Learning I, e.g., a corrective change in the set of alternatives from which choice is made, or it is a change in how the sequence of experience is punctuated” (Bateson, Steps to an Ecology of Mind, 1972/2000, p. 293). Bateson abstains from offering a conclusive list of all possible aspects of Learning II, or deutero-learning. He is content with describing several aspects of this level. Adjectives describing character are an example of Learning II: behaviors such as anxious, passive, bold, or careful are viewed as, at least partly, acquired by learning. Learning II also describes the punctuation of human interaction: nobody is passive or anything else in a vacuum. Behavior is part of social interaction, and it is an acquired way of punctuating the “stream of events,” i.e., the interpretation of events as causes or actions or whatever else. Finally, much of this Learning II dates from early infancy, and it is unconscious. There are two common misinterpretations of this theory: 1. Sometimes deutero-learning is viewed as somehow higher, in the sense of better, than proto-learning. But this is a misunderstanding. Deutero-learning stands in context to proto-learning; both are the complementary parts of the human process of learning. 2. Sometimes it is assumed that deutero-learning chronologically follows proto-learning, as if consecutive. But this is also not correct. Both aspects of learning are synchronous. These misinterpretations may be the result of the fact that this theory is not strictly elaborated in all its aspects. Theoretical Background Bateson’s learning theory was successively developed over a period of almost three decades and a series of several articles. The term deutero-learning was coined in 1942, but it was abandoned in the further development of the theory because of the later differentiation of five levels of learning instead of the initial two.
However, deutero-learning, or Learning II, remains the core of the concept in the later development of the theory (Visser 2003). The explicit theoretical background of the original article consists in an application of behaviorist psychological theory, mixed with Gestalt theory and the anthropological thinking of Margaret Mead. Bateson builds his argument especially on Clark Hull’s Mathematico-Deductive Theory of Rote Learning (1940). Several learning curves show that the rate of learning increases over successive experiments. This brings Bateson to a first step in his theory, learning to learn: in a series of similar learning experiments, learning gets faster. Explicitly discussed are classical Pavlovian contexts, contexts of instrumental reward or escape, of instrumental avoidance, and of serial and rote learning. Bateson uses all this to explore the question of what is learned about the world in a context created by repetitive experiences – for example, in the Pavlovian way. Thus a “pure” Pavlovian would probably live with a quite fatalistic worldview. All events would be as if preordained, and he would probably assume he was unable to influence the course of events (cp. Bateson 2000, p. 173). This is the context into which he places his concept of deutero-learning. Further evidence is shown by the discussion of the apparently different typical character of Balinese people. In short, “habit,” “behavior,” the individual “worldview,” and the expectation of how things are done are all, at least partly, results of deutero-learning. The first draft of the theory was strongly motivated by wartime and questions of nation-building. Bateson’s question is whether merely manipulating people to behave as they are supposed to behave will really lead to the desired result of supporting democratic ideals, instead of strengthening democratic ideals in the learning contexts themselves.
In short, you cannot condition a person to be a wholehearted supporter of democratic (or other) ideals. The context of learning should be governed by the ideas structuring society. In contrast to the behaviorism of his time, Bateson clarifies the significance of insight. Bateson furthermore plays a certain role in early constructivism and in cybernetics (Lipset 1982). The later stage of his theory is largely influenced by the type theory of Bertrand Russell (1903). With type theory, Bateson creates the rigorous and formally elaborated differentiation of the learning levels 0–IV. Important Scientific Research and Open Questions The differentiation of proto- and deutero-learning is a key concept for several other theoretical approaches. In Bateson’s work itself, it is essential for his communication theory (1951) and the double-bind theory (1956), but also for his later work on ecological thinking. Bateson’s theories are at the core of the Pragmatics of Human Communication by Paul Watzlawick (1967), and also of Neurolinguistic Programming, coined by John Grinder and Richard Bandler (The Structure of Magic, 1975). Deutero-learning is also the key to the frame analysis of Erving Goffman (1974) and to the concept of habitus of Pierre Bourdieu (La distinction, 1979). It is also used in the organizational theory of Chris Argyris in his idea of double-loop learning (Argyris/Schön, Theory in Practice, 1974), and in psychotherapy (Gron 1993). There are several open questions. Bateson’s learning theory remains somewhat isolated in modern research. This may be a result of the fact that the experimental examination of deutero-learning effects is difficult. But, on the other hand, it is possible that this lack of reception inhibits modern learning theory from further insight into the structure, development, and complexity of human worldviews, leaving them at a formal, still quite simple, level of exploration.
Furthermore, it remains a problem that deutero-learning, or Learning II, seems to be just a collection of several aspects or contexts of learning. This is theoretically not satisfying. It is unclear if and how further research can give more insight, or if this is a fundamental problem that puts the whole theoretical approach in question. Finally, Bateson abstains from an analysis of the possible relations of his levels of learning to each other. So it remains unclear how Learning 0 and Learning II relate to each other. Further development of some aspects of Bateson’s learning theory is given by Lutterer (2011), who reformulates several aspects of this theory, especially the relations of the several learning aspects to each other and the crucial significance of Learning 0 for the whole learning process. But all things considered, there remain open questions and a task for further research on Bateson’s learning theory and its significance for a better understanding of the development and the structure of human thinking. Cross-References ▶ Anthropology of Learning and Cognition ▶ Bateson, Gregory (1904–1980): Anthropology of Learning ▶ Learning Set Formation and Conceptualization ▶ Social Construction of Learning References Bateson, G. (2000). Steps to an ecology of mind. Chicago: University of Chicago Press (therein esp. “Social Planning and the Concept of Deutero-Learning” and “The Logical Categories of Learning and Communication”). Gron, P. (1993). Freedom and determinism in Gregory Bateson’s theory of logical levels of learning: An application to psychotherapy. Boston: Boston University Press. Lipset, D. (1982). Gregory Bateson: The legacy of a scientist. Boston: Beacon. Lutterer, W. (2011). Der Prozess des Lernens: Eine Synthese der Lerntheorien von Jean Piaget und Gregory Bateson. Weilerswist: Velbrück Wissenschaft. Visser, M. (2003). Gregory Bateson on deutero-learning and double bind: A brief conceptual history.
Journal of the History of the Behavioral Sciences, 39(3), 269–278. Developing Cross-cultural Competence TATIANA STEFANENKO1, ALEKSANDRA KUPAVSKAYA2 1 Department of Social Psychology, Moscow State University, Moscow, Russia 2 LITE College, London, UK Synonyms Cultural mentoring; Intercultural sensitivity; Multicultural education Definition Developing cross-cultural competence is a process involving methods and procedures dedicated to evolving a cross-cultural competence that leads to the adoption and understanding of the features of one’s own culture, develops a positive attitude toward other cultural groups and their members, and increases the ability to understand and interact with them. To a large extent, all theories and models of cross-cultural competence rely extensively on four core components: motivation (emotional attitude toward another culture, the needs of the participants of intercultural communication, social norms, self-images, openness to new information, the ability to control emotions, etc.), knowledge (cultural self-awareness, deep cultural knowledge, sociolinguistic awareness, thoughtful expectations, the perception of more than one point of view that could occur during cross-cultural contact, knowledge of alternative interpretations and of cultural similarities and differences, understanding of situations and behavior caused by specific rules, etc.), personality traits (cultural empathy, emotional intelligence, etc.), and skills (tolerance of ambiguity and uncertainty, adaptability in communication, flexibility in dealing with new cultural situations and creating new categories, etc.) (Spitzberg and Changnon 2009). A broader look at the phenomena and at practical research shows that cross-cultural effectiveness is also strongly influenced by some external conditions (job/academic motivation, family support and level of adjustment, cultural distance).
Theoretical Background The high interactivity and technological advancement of the modern world stimulate rapid and efficient interaction between cultures. Moreover, multicultural society is based on the idea that many people in today’s world do not belong strictly to one ethnic group; they are members of two or more communities, holders of several cultures that are “overlapping” in various combinations, a situation which is fundamentally mobile. Successful socialization in such a world must focus on the child’s discovery of its complexity and involves the capability of independent choice (up to a change of identity), increasing the variability of behavior in different cultural environments, and raising the level of tolerance for the “different” and “unlike” through the development of cross-cultural competence. Cross-cultural competence is the sum of knowledge about one’s own and other cultures, which can be seen through attitudes and behavior, in order to ensure effective and appropriate interaction in a variety of cultural settings. Cross-cultural competence begins to develop between the ages of 2 and 4 years, with the first flashes of identification with the child’s ethnic group and differentiation from other groups, i.e., from the moment when the child begins to speak. The upper age limit of the development of cross-cultural competence is hardly definable, because it can be described as a self-editing process which reflects an awareness of one’s own culture, the capability to assess the cultural position of others, and the ability to interact efficiently in different cultures. The first and most famous attempt at conceptualizing cross-cultural competence as a developmental phenomenon was made by M. Bennett (Bennett 1993). According to his concept, cross-cultural competence is directly determined and might be measured by ▶ intercultural sensitivity, which demands attention to the subjective experience of the learner.
The key to such sensitivity and related skills in intercultural communication is the way in which learners construe cultural differences. There are six stages. The earlier stages of the continuum define the denial of difference, the evaluative defense against difference, and the universalistic position of minimization of difference. The later stages define the acceptance of difference, adaptation to difference, and the integration of difference into one’s worldview. This model is highly influential in training and research. The other significant developmental model adapts the concept of culture shock (Ward et al. 2001) to a stage model of cultural adjustment. The model suggests that people living abroad or spending substantial time in a different culture go through the following stages over time: honeymoon stage, hostility stage, humorous stage, in-sync stage, ambivalence stage, reentry culture shock, and resocialization stages. The model was initially proposed as a U-curve hypothesis by S. Lysgaard and was later expanded by J. R. Gullahorn and J. E. Gullahorn into a W-curve model, including three final stages of repatriation. Even though the model is still popular due to its simple visualization, the results of studies have proved rather ambivalent.
In modeling programs for developing cross-cultural competence, certain factors need to be taken into consideration; these were accumulated in the model of intensity factors in intercultural experiences by Paige (Paige and Goode 2009): cultural differences (the degree of difference between a person’s own and another culture is directly proportional to that person’s psychological stress), ethnocentrism (the level of it in both the person and the new culture itself), cultural immersion (the more immersed the person is in another culture, the greater the amount of stress), cultural isolation (an opportunity to have contact with one’s own culture decreases stress), language (the more essential language ability is for functioning in the particular culture, the greater the stress of the experience), prior intercultural experience (which gives an opportunity to form an idea of intercultural communication and adaptation), expectations (awareness is essential, as positive and unrealistic expectations about the new culture and high expectations of individuals lead to psychological stress), visibility and invisibility (regarding physical or any other identity), status (the ability to identify markers and gain a desirable level in the new culture), and power and control (placed in a different culture, an individual feels a loss of power and control over events in comparison with what they possessed at home). Important Scientific Research and Open Questions Recent research shows that intercultural learning is significantly enhanced when it is facilitated. A second set of studies, however, indicates that this type of cultural mentoring is uneven at best and sometimes even nonexistent (Paige and Goode 2009).
The lack of a unique theory of cross-cultural competence and of a coherent methodological framework, and the eclecticism of the models and practical concepts of successful intercultural communication, show the importance of and need for conceptual development. There is certainly significant room for further study of methodological issues in researching, and of methods of developing, cross-cultural competence. Among the questions of components, structure, and relations, one of the key questions in the conceptualization of cross-cultural competence development is the relationship between communication competence and cross-cultural competence itself. There are different opinions on whether cultural differences and their influence on competence need to be studied first, or whether communication competence within a specific culture is the core issue. According to B. Spitzberg, it is necessary to establish a culture-independent model of competence, which could then be adapted to different cultures. However, according to Y. Kim, cross-cultural competence is not simply communicative competence in a cultural context; it is above all a comprehensive personal ability to cope with the effects of individuals belonging to different cultural groups and the stress associated with it, so intercultural competence cannot be defined as competence adapted to different cultures (Stefanenko and Kupavskaya 2010). The lack of experimentally proven answers regarding the correlation between communication and cross-cultural competence is well balanced by the intensive development of the practical approach, which started in the mid-1970s and continues up to the present day.
Most approaches to developing cross-cultural competence can be classified on the basis of three major dimensions: (a) the degree to which the method is experiential versus didactic (the didactic model is based on the assumption that cultural understanding comes with knowledge of history, traditions, and customs; the experiential model is based on the assumption that most people derive knowledge from personal – direct or simulated – experience); (b) the extent to which it is culture-general (etic) or culture-specific (emic) (the culture-specific approach allows people to understand how to interact with representatives of a particular culture; the culture-general approach leads to the realization of the basic psychological phenomena (negative stereotypes, prejudice, etc.) that interfere with harmonious intercultural communication); and (c) the field in which key results need to be achieved – cognitive, emotional, or behavioral (the cognitive approach focuses on getting information about cultures and cultural differences; the emotional approach focuses on the transformation of attitudes related to intercultural interaction and feelings toward the “other”; the behavioral approach is designed to generate skills that will enhance the efficiency of communication). The main didactic types of programs for developing cross-cultural competence are education (the acquisition of knowledge about a new culture mainly by means of lectures and discussions, reading appropriate literature, and watching movies about general or culture-specific aspects of the culture), orientation (providing a broad look at potential problems or a focus on particular aspects of adapting to a new environment), and briefing (a quick introduction to new cultural environments and the basic norms, values, and beliefs of the other group).
Despite their popularity and common usage, didactic methods of developing cross-cultural competence have strong disadvantages because they (a) assume a passive position on the part of the student; (b) involve working with predefined critical incidents and their solutions, whereas one of the main skills of cross-cultural competence is the ability to identify the problem in the first place and flexibility in solving it; and (c) are oriented toward impartial investigation and information analysis, although in practice the skills for emotionally charged interactions with people are far more important (Bhawuk and Brislin 2000). A simple transfer of knowledge about the cultural and ethnic diversity of the world is not enough; it is important for the person to discover empirically his or her similarities to and differences from others, which means that experiential methods – such as cross-cultural training – have higher potential for developing cross-cultural competence. Four main stages can be identified in the process of developing cross-cultural competence by means of cross-cultural training. First, awareness of the cultural specificity of human behavior in general (cultural awareness). Second, awareness of the specific features characteristic of one’s native culture (self-awareness), as only then can the gap between personal values, beliefs, and behavior and those of others be explored. The main abstract categories that describe the variety of different cultures are verbal and nonverbal behavior, communication, cognitive and learning styles, values, interaction rituals, conflict styles, and identity development. Third, awareness of the importance of cultural factors in the process of intercultural interaction (cross-cultural awareness). Fourth, in accordance with cross-cultural training practice, the development of superordinate goals (the term and concept were suggested and described in the realistic conflict theory of M. Sherif) and personality development in accordance with them.

Cross-References
▶ Competence-Based Learning
▶ Cross-cultural Factors in Learning and Motivation
▶ Cross-cultural Learning Styles
▶ Cross-cultural Studies on Learning and Motivation
▶ Cross-cultural Training
▶ Learning and Training: Activity Approach

References
Bennett, M. J. (1993). Towards ethnorelativism: A developmental model of intercultural sensitivity. In R. M. Paige (Ed.), Education for the intercultural experience (pp. 21–71). Yarmouth: Intercultural.
Bhawuk, D. P. S., & Brislin, R. W. (2000). Cross-cultural training: A review. Applied Psychology: An International Review, 49(1), 162–191.
Paige, R. M., & Goode, M. L. (2009). Intercultural competence in international education administration. In D. K. Deardorff (Ed.), The SAGE handbook of intercultural competence (pp. 333–349). Thousand Oaks: Sage.
Spitzberg, B. H., & Changnon, G. (2009). Conceptualizing intercultural competence. In D. K. Deardorff (Ed.), The SAGE handbook of intercultural competence (pp. 2–52). Thousand Oaks: Sage.
Stefanenko, T. G., & Kupavskaya, A. S. (2010). Ethno-cultural competence as a component of competence in communication. In Yu. P. Zinchenko & V. F. Petrenko (Eds.), Psychology in Russia: State of the art. Scientific yearbook (pp. 550–564). Moscow: Lomonosov Moscow State University/Russian Psychological Society.
Ward, C., Bochner, S., & Furnham, A. (2001). The psychology of culture shock. London: Routledge.

Developing Team Schemas
▶ Development of Team Schemas

Development and Learning (Overview Article)
ANDREY I. PODOLSKIY
Department of Developmental Psychology, Moscow State University, Moscow, Russia

Synonyms
Growth; Maturation and learning; Ontogenesis

Definition
Development and learning are considered formally, historically, and conceptually different terms which, however, have much in common with regard to their psychological content.
Development describes the growth of humans and other animals throughout the lifespan. It encompasses all aspects of human growth, including physical, emotional, intellectual, social, perceptual, and personality development. The scientific study of human development seeks to understand and explain how and why people change throughout life. Learning, in contrast, is not the result of biological maturation and growth or of temporary effects of internal and external factors. Often the terms development and learning are used as if they mutually excluded each other.

Theoretical Background
One of the interesting tendencies in contemporary psychology at the end of the twentieth century was a reconsideration of the nature of the interaction between learning and developmental processes during childhood and across the broader life span. On the one hand, it is not difficult to note the serious differences between learning and developmental studies: They differ in their experimental and theoretical goals, their methods and techniques, their preferred interpretations, their lists of “key persons,” etc. On the other hand, more and more scholars have been drawing the attention of the scientific community to the need to go into depth when comparing the conceptual content of the terms under consideration, as well as the general methodology and methodological practices of the study of learning and development. One of the central questions under discussion is the following: What do we mean when we speak about development and learning? Are we really speaking about different psychological realities, or are we only paying our dues to an old prejudgment? Perret-Clermont expressed this dilemma in the following way: “Is there a difference between learning and development? What is that which is learned?
And what is that which cannot be learned but which, nevertheless, structures (in an internal or external way) the organization of subjects’ responses at different ages? Can intellectual development be considered as essentially the sum of acquired knowledge, or is it, on the contrary, partially (or even totally) independent of that acquired knowledge?” (Perret-Clermont 1993, p. 198). According to Siegler (2000), the background of these claims lies in researchers’ natural reaction to the long-standing dominance of Piaget’s assumption that development and learning are fundamentally different processes. “Piaget frequently distinguished between development, by which he meant the active construction of knowledge, and learning, by which he meant the passive formation of associations. Active developmental processes were of interest; passive learning processes were not. This distinction was valuable in focusing attention on children’s efforts to make sense of the world and in exposing hidden assumptions that had shaped previous research on children’s learning” (Siegler 2000, p. 26). It is also important that the process of transition from stage to stage, the crucial point of Piaget’s theory of child development, is necessarily a slow one because it depends so strongly on the child’s own activity. Piaget also assumed that this process cannot easily be modified by external interventions such as instruction. According to Perret-Clermont, “psychologists have often used the term development to refer to endogenous transformations that reveal biological maturation or a supposed comparable psychological maturation . . . Development has been understood to mean qualitative changes in thinking that would not result directly from accumulation of information but would signal the emergence of new stages or new structures – that is, more powerful equilibration and self-regulation processes.
In contrast, learning in a strict sense or acquisition of knowledge usually means those processes that pertain to the reception of information” (Perret-Clermont 1993, p. 198). As a result of such an understanding, although learning is a central part of children’s lives, the study of learning has remained a rather peripheral part of the field of cognitive development (Siegler 2000). Since the last third of the twentieth century, a brand-new trend has appeared in Western European and North American developmental and learning psychology. Developmental psychologists “began to articulate a view of conceptual development that was less monolithic than Piaget had proposed. Theorists began to assert that children’s conceptual development was less dependent on the emergence of general logical structures than Piaget had suggested and more dependent on the acquisition of insights or skills that are domain, task, and context specific. They also began to assert that children’s thought is more responsive to external influence than Piaget had suggested and more dependent on the sort of social interaction that has been described by Vygotsky” (Case 1996, p. 2). These psychologists cited the following findings in support of this new view of conceptual development: (a) insignificant correlations between developmental tests that Piaget had claimed tapped the same underlying general structure, (b) substantial asynchrony in the rate of development of concepts that Piaget had claimed were dependent on the same underlying structure, (c) the significant effect of short-term training on logical tasks such as conservation, and (d) the transfer of such training to other tasks with the same conceptual content but not to other tasks whose conceptual content was different but which were supposed to depend on the same underlying operational structure.
In turn, researchers involved in experimental learning studies began to pay more attention to the acquisition of concepts and skills that are important in children’s lives. Despite differences in these researchers’ theoretical orientations (neo-Piagetian, cultural-contextualist, dynamic systems, and information processing), modern investigations of children’s learning have yielded strikingly similar results. According to Siegler, such similarities are especially encouraging because they suggest that the regularities in children’s learning are so strong that they remain evident despite differences in investigators’ preconceptions and in the specifics of tasks, content domains, and populations. One should not forget that the strict demands of concrete practice have spawned a much more enriched understanding of the nature of human learning in applied areas of the science of learning such as instructional design and instructional technology (see the entries in this volume). Modern cognitively based instructional-design models and theories utilize concepts such as “self-directed learning,” “student self-regulation,” “meaningful content,” “cognitive and metacognitive strategies,” “mental models,” etc. It is not difficult to see how far removed a model of learning conceptualized in this way is from the simplistic, nonrepresentational conception of learning which has little relevance today but which once formed the contrast to a conception of development. “Modern research has made it clear that learning processes share all of the complexity, organization, structure, and internal dynamics once attributed exclusively to development. If the distinction has become blurred, it is not because development has been reduced to ‘nothing but’ learning, but rather because we now recognize learning to be more like development in many fundamental respects” (Kuhn 1995, p. 138).
According to Siegler, “the terms ‘learning’ and ‘development’ are used differently, with development referring to changes that are more universal within the species, that occur over longer time periods, and that occur in response to a broader variety of experiences. At the level of process, however, the two have a great deal in common” (Siegler 2000, p. 32). One more issue that highlights the interrelation between development and learning has to do with the methodological paradigms researchers use to explore these processes. Perret-Clermont believes that the difference between learning and development might then appear to depend not so much on the subject as on the relative position of the observer and the observed. “When a researcher wishes to study learning, his or her subject is an individual who is placed in a situation that induces learning. Conversely, when he or she wishes to observe supposed endogenous development, the investigator usually adopts a less interventionist position, where the subject can be considered without being explicitly provided with particular cognitive objectives to be achieved. The subject, therefore, does not feel induced to learn but feels observed without having the observer’s intention clearly defined for him or her” (Perret-Clermont 1993, pp. 199–200). However, in the reality of both learning and development, “we are concerned with processes having an omnipresent interindividual dimension” (Perret-Clermont 1993, p. 199). Perret-Clermont believes that the differences in the paradigms used to collect data create the differences between data collected as learning and data pertaining to development. “But, in fact, from a sociocognitive point of view, in both cases the phenomena under study are the same: How does a subject behave when facing tasks and expectations?
Of course, the tasks and the social agents differ, depending on the context (e.g., school, laboratory, family, street, playing field), but in each case the subject has to solve problems, maintain relationships, save face, send messages, understand questions, try to provide answers, achieve goals, and manage emotive reactions and desires” (Perret-Clermont 1993, p. 200). One may distinguish two main sources of findings and insights on this issue: first, the many thought-provoking publications by Western European and North American neo-Piagetians and post-Piagetians which appeared from the 1980s to the 2000s, several of which were reviewed above; and second, theoretical and empirical studies devoted explicitly to the issue of the connection between learning and development based on Vygotsky’s cultural-historical approach. In formulating his own position, Vygotsky rejected three established theoretical positions on the relationship between learning and development: (1) the maturational view, which holds that a child’s individual level of mental development constitutes a tightly restrictive prerequisite or precondition for efficient learning (e.g., Binet’s and Piaget’s advocacy of a learning approach that feared “premature instruction”); (2) the view that learning and mental development are synonymous (as in James’ account of education as the mere acquisition of habits of conduct or behavioral tendencies); and (3) the view that considered mind a network of generalized rather than specific capabilities (as in Koffka’s Gestalt psychology or the “Classics” training tradition, in which the mind was assumed to be a homogeneous muscle that, when exercised in a given domain of knowledge, would produce eminently transferable learning elsewhere) (Vygotsky 1978). It follows that neither the full independence nor the partitioning of these constructs brings clarification of their interrelation.
The crucial step was the introduction of the concept of the ▶ zone of proximal development (see also the entry in this volume). Vygotsky argued that it was necessary to clarify two issues in order to reach a more adequate view of the relation between learning and development: first, the general relation between learning and development, and second, the specific features of this relationship when children reach school age. The concept of the zone of proximal development – that is, the distance between a child’s actual developmental level as determined by independent problem solving and the level of his or her potential development as determined by problem solving under adult guidance or in collaboration with more capable peers – is considered by Vygotsky crucial for the consideration of both issues.

Important Scientific Research and Open Questions
Many studies conducted by both Russian and Western scholars in the Vygotskian tradition consider interaction between child and adult, as well as between children, in a collaborative learning activity that leads to development. However, they do not touch upon the specific mechanisms by means of which learning affects cognitive development, nor do they answer the question of how such mechanisms might be taken into account in designing teaching and instruction. “The exact ways in which instruction participates in development remain to a large extent unclear. For example, by what means learning within the zone of proximal development propels children’s development and what mechanisms underlie the child’s transition to a qualitatively advanced cognitive functioning still remain open questions” (Arievitch and Stetsenko 2000, p. 71). As Russian psychologist P.
Galperin has shown repeatedly in many publications, the main reason why it was not possible to operationalize Vygotsky’s highly heuristic statements was a lack of knowledge about the complete system of psychological conditions that underlie the process of learning and enable mental actions, images, and representations to form with the desired and prescribed outcomes. According to Galperin, a mental action is a functional structure that is formed continually throughout an individual’s lifetime. Using mental actions and mental images and representations, a human being plans, regulates, and controls his or her performance by means of socially established patterns, standards, and evaluations. Mental action can and should be considered the result of a complex, multimodal transformation of initially external processes performed by means of certain tools (Galperin 1989). Galperin therefore defined a system of conditions that guarantees the achievement of the prescribed, desired properties of action and image. This system is termed the “system of planned, stage-by-stage formation of mental actions,” or the PSFMA system for short. The PSFMA system includes four subsystems: (1) the conditions that ensure adequate motivation for the subject to master the action; (2) the conditions that provide for the formation of the necessary orientation base of the action; (3) the conditions that support the consecutive transformations of the intermediate forms of the action (materialized, verbal) and its final transformation into the mental plan; and (4) the conditions for cultivating, or “refining through practice,” the desired properties of an action. Each subsystem contains a detailed description of the related psychological conditions, which include the motivational and operational areas of human activity (see also the ▶ Internalization and ▶ Mental activities of learning entries in this volume).
Since the late 1950s, a significant number of authors (both researchers and practitioners) have tried to use Galperin’s theory to improve schooling processes and results. The studies concerned very different types of schools (primary, secondary, vocational, and special schools). The subjects (learners) were ordinary, disabled, and gifted children of different ages (from 5 to 18). The specific domains were also very different: writing and arithmetic, native and foreign languages, mathematics, the sciences and the humanities, drawing, music, and physical education. Finally, psychologically heterogeneous structures were the objects of planned stage-by-stage formation: separate domain-specific mental actions with the connected concepts and representations, groups and systems of actions and concepts, and actions that underlie cognitive as well as metacognitive strategies and heuristics. As hundreds of experimental and applied studies have convincingly demonstrated, it was possible to fulfill the whole set of main objectives at which any schooling is aimed: (a) acquisition of the educational course by practically all of the learners is ensured (provided, of course, that they have the necessary level of preliminary knowledge and skills) without prolonging (or sometimes even while reducing) the time allocated to it, and practically without any additional costs; (b) the division between the acquisition of knowledge and its application is minimized or even disappears; (c) the learners acquire the ability to transfer to a new situation not only knowledge and skills themselves but also the manner of acquiring them; and (d) the learners become more and more interested in the very process of acquiring knowledge and in knowledge itself.
Certainly, Galperin’s approach to development has raised, and in fact continues to raise, many problems of both theoretical and practical origin; however, this entry is not the place to analyze them. In addition to the important results achieved in his empirical research, Galperin distinguished three basic types of learning, which are characterized by different levels of developmental potential. The main characteristic of the first type of learning consists in the subject’s lack of orientation in the essential characteristics and conditions of the problem situation to be solved. Galperin believed that despite many differences in past and contemporary teaching methods used worldwide, they all share one basic similarity: the incompleteness of the child’s orientation. Arievitch and Stetsenko present Galperin’s position as follows: “Galperin maintained that the striking similarity of Vygotsky’s and Piaget’s conclusion (despite all the differences in their views on cognitive development) that children come to be able to form and use genuine concepts only at the age of 10–12 years can be explained not by some inherent regularities of the child’s mind and its development. Rather, in Galperin’s view, the similar developmental trajectory in the child’s conceptual development described by both Piaget and Vygotsky can be attributed to the profound similarity of instruction that both scientists indirectly dealt with when studying children’s development in preschool and school settings and modeled in their experimental work . . . What makes the great many various methods of teaching-learning just different versions of basically the same type of instruction? Galperin argued that such a unifying feature is that most instructional methods fail to provide the child with all the necessary tools and conditions for correct orientation in the task and therefore, for correct performance” (Arievitch and Stetsenko 2000, p. 75).
In the second type of learning, the child is provided with all of the necessary conditions (see above) to acquire, internalize, and appropriate an action. When the child is provided with the complete system of psychological conditions, a newly formed action undergoes a set of transformations and finally becomes internalized, thus becoming a part of the child’s mind (Galperin 1982). The area of application of the newly formed action, however, is limited to a defined subject domain. In the third type of learning, students acquire a general method of constructing a concrete orientation base for solving any specific problem in a given subject domain. A general method of this kind involves a theoretical analysis of objects, phenomena, or events in various subject domains. In such analyses, students learn to distinguish the essential characteristics of different objects and phenomena, to form theoretical concepts on this basis, and to use these concepts in subsequent problem solving. More specifically, such analysis includes (a) discriminating between different properties of the object or phenomenon, (b) establishing the basic unit of analysis of a particular property, and (c) revealing to the child the general rules (common to all objects in the studied area) of how those units are combined to form concrete phenomena. Learning of this type always takes the form of active exploration of the subject under the guidance of a teacher. Instruction based on the third type of learning was applied by Galperin and his collaborators in a number of experimental programs aimed at teaching children a variety of subjects, such as mathematics, physics, native and foreign languages, history, etc. One of the most effective of these programs for childhood cognitive development was the one designed for teaching 5- to 6-year-old children elementary mathematics, specifically basic mathematical concepts (Galperin 1982).
Within this program children were taught fundamental concepts such as those of number and unit, as well as other related concepts and arithmetic operations. The general result of the program (as presented by Arievitch and Stetsenko) was the formation of genuine mathematical concepts in children a whole age period earlier than usual – in 6-year-olds rather than in 10–12-year-olds. Even more important, however, was that the children’s entire view of things changed: The children came to understand that things cannot be judged by their visual properties alone. Immediate judgment by visual characteristics is the feature of the preschooler’s thinking that underlies the child’s spectacular displays of nonconservation in Piagetian tasks. In the case described, it was replaced by an analytical procedure in which children learned to discriminate between different properties of objects and to transform a given property into quantities by using certain measures. Consequently, the children gained insight into the implicit structure of objects, in which each basic property of an object constitutes a separate quantity and the object itself (as a whole) is represented as a constellation of different quantities. Thus, the children advanced from immediate (naive-egocentric) thinking to thinking mediated by measure and measurement and thereby set themselves free from the domination of perceptual impression. It came as no surprise, then, that in follow-up experiments the children who had initially been identified as consistent nonconservers in Piagetian tasks (they saw no problem in immediately concluding that the whole object had changed, e.g., become “smaller,” when just one of its properties, e.g., height, was transformed) refused to give an immediate answer to a conservation task after having mastered the idea of measurement and the concept of number according to the principles of systemic-theoretical instruction.
Instead, they would say: “First let’s measure!” (Galperin 1982). As a result, the Piagetian phenomena of nonconservation completely disappeared in those children, and the concept of conservation emerged, although this concept had not been taught to the children by direct means. Arievitch and Stetsenko comment that it follows from Galperin’s analysis that it is unproductive to discuss the role of learning and instruction in childhood cognitive development without referring to the specific type of instruction actually applied in teaching children: Depending on the type of learning, its role may be substantially different. Thus, within the first type of learning and the corresponding type of instruction, the evidence of positive effects of learning on development remains ambiguous and disputable (Arievitch and Stetsenko 2000). Learning in traditional instruction occurs, as a rule, through gradual and mostly unsystematic (trial-and-error) selection of successful versions of problem solving, with little transfer or generalization of knowledge and with a heavy emphasis on rote memorization. The results of learning are then largely a matter of individual effort and luck. All this indeed leaves researchers and educators with little choice but to conclude that the role of instruction in cognitive development is very limited. However, this appears to be an overgeneralization based on the specifics of just one (traditional) type of learning and instruction. In the third type of learning, the character of the knowledge itself (genuinely theoretical) and the way it is presented to the child (in conceptually based analysis) differ radically from the two other types of learning distinguished by Galperin. This method provides children with qualitatively new tools (means of mathematical, linguistic, or other kinds of analysis) to deal conceptually with a wide range of objects and phenomena extending far beyond the area under immediate study.
The central property that defines the developmental potential of certain types of instruction (and therefore their specific role in development) is the quality of the cognitive tools provided to the child in the course of instruction to help him or her orient to the conditions of the task and perform it adequately. When the set of such tools is insufficient for successful performance and is based on empirical concepts, the developmental potential of instruction is severely limited, and cognitive development is then contingent on the vicissitudes of the child’s individual experience within and outside of a given instructional context. In contrast, when the set of cognitive tools provided to the child is complete and based on theoretical concepts, instruction results in profound developmental progress. In this latter case instruction generates cognitive development directly (Galperin 1982; Arievitch and Stetsenko 2000).

Conclusion
As the overview in this entry shows, contemporary scholars understand development and learning as closely connected and interrelated processes. It is always necessary to bear in mind the social origin of both fundamental processes and, accordingly, to take into account the student’s attitudes and orientations toward learning environments, learning tasks, etc., in all their complexity. The relationships between development and learning are not uniform and can be adequately understood only when the particular character of learning and instruction, with its definite developmental potential, is taken into account. When speaking about development in the current context, we mean “cognitive development,” which should be quite understandable given the obvious priority of cognitive development among the various areas of the developing mind. However, the interrelations between learning and development can and should also be explored in such areas as moral, social, and emotional development.
Another high-priority task is to continue the search for a field in which the interactions between developmental and learning processes are represented in their most explicit form, as well as the search for a “common language” that might enable scholars to synthesize the facts, evidence, and regularities reached in each of these two branches of study separately. This task may be characterized as necessary but exceedingly challenging, as it requires entirely new conceptual and explanatory principles to be created. What could serve as adequate content for such principles, and what do we need to do to formulate them? These questions remain unanswered.

Cross-References
▶ Cultural-Historical Theory of Development
▶ History of the Sciences of Learning
▶ Internalization
▶ Learning Activity
▶ Mental Activities of Learning
▶ Zone of Proximal Development

References
Arievitch, I., & Stetsenko, A. (2000). The quality of cultural tools and cognitive development: Gal’perin’s perspective and its implications. Human Development, 43, 69–92.
Case, R. (1996). Introduction: Reconceptualizing the nature of children’s conceptual structures and their development in middle childhood. In R. Case & Y. Okamoto (Eds.), Monographs of the Society for Research in Child Development: Vol. 61. The role of central conceptual structures in the development of children’s thought (Serial No. 246, 1–2, pp. 1–26). Chicago: University of Chicago Press.
Galperin, P. (1982). Intellectual capabilities among older preschool children: On the problem of training and mental development. In W. W. Hartup (Ed.), Review of child development research (Vol. 6, pp. 526–546). Chicago: University of Chicago Press.
Galperin, P. (1989). Organization of mental activity and effectiveness of learning. Journal of Soviet Psychology, 27(3), 65–82.
Kuhn, D. (1995). Microgenetic study of change: What has it told us? Psychological Science, 6, 133–139.
Perret-Clermont, A.-N. (1993).
What is it that develops? Cognition and Instruction, 11(3/4), 197–205. Siegler, R. S. (2000). The rebirth of children's learning. Child Development, 71(1), 26–35. Vygotsky, L. (1978). Mind in society. Cambridge, MA: Harvard University Press.

Development of Expertise
FERNAND GOBET
Department of Psychology, Brunel University, Uxbridge, Middlesex, UK

Synonyms
Acquisition of expertise

Definition
An ▶ expert is a person whose performance in a given domain is superior to that of the large majority of the population. Recursively, a super-expert is an expert whose performance is superior to that of the large majority of the expert population. The study of the development of ▶ expertise has been the province of psychology, but has also attracted interest in other fields such as biology, education, sociology, and artificial intelligence. While theories based on ▶ talent have emphasized fixed, innate traits, theories based on learning and practice have shed light on the path that developing experts have to travel. They have examined the types of knowledge that must be acquired, and the form that their acquisition takes over time (e.g., Didierjean and Gobet 2008).

Theoretical Background
In 1905, Albert Einstein wrote five articles that revolutionized the world of physics. Arthur Rimbaud stopped composing poetry at the age of 21. In just 5 years of writing, he had revolutionized modern literature. In sport, Martina Hingis dominated female tennis from 1997 to 2002; she was the youngest player to be number one in the history of tennis. In chess, former world champion Garry Kasparov obtained the highest ratings of all time. His strength was such that he beat national teams, consisting of professional players, in simultaneous games. These examples illustrate super-experts – extreme cases of expertise. The label "expert" is also used, more modestly, with individuals such as physicians, PhDs, national champions in sports, and so on.
The fact that the term "expert" applies to such diverse kinds of people suggests that it might be problematic to define what exactly an expert is. One could define an expert as someone who attains performance at the level of an experienced professional; but to what does "experienced" refer? Does it refer to the amount of practice devoted to a domain or even to the number of years spent in the domain? However, time spent in a domain is a poor predictor of expertise. Think, for example, about those golf amateurs who have practiced for years but have never reached a high level of play. Similarly, the use of diplomas is problematic, because diplomas are based on sociocultural criteria, which are rarely objective measures of relevant performance. For example, diplomas in medicine are more a reflection of individuals' ability to study than of an ability to diagnose and treat patients successfully. The difficult task of identifying an expert is easier in some domains where official ratings are available. The best example is the game of chess, which for decades has had a rating (the Elo rating) that precisely and quantitatively ranks players, from beginners to world champions. The presence of such a rating system (updated every few months) explains why so much expertise research has been conducted on chess. Unfortunately, such rating systems are rare, even in games and sports. To mitigate this rarity, researchers (e.g., in physics and medicine) have often used a simple dichotomy: novice vs. expert. While this is a practical solution, it loses much information. A further complication is that "expert" is a label that is sometimes given more for social reasons than for the skill level of an individual. The criteria may vary between societies or even within societies. This renders comparisons very difficult indeed.
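The Elo system mentioned above can be sketched in a few lines of code. The update rule below is the standard logistic Elo formula; the K-factor of 32 and the example ratings are illustrative assumptions, not values given in this entry:

```python
def expected_score(rating_a, rating_b):
    """Expected score of player A against player B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a, rating_b, score_a, k=32):
    """Return updated ratings after one game.

    score_a is 1 (A wins), 0.5 (draw), or 0 (A loses); k scales
    how strongly a single result moves the ratings.
    """
    e_a = expected_score(rating_a, rating_b)
    new_a = rating_a + k * (score_a - e_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - e_a))
    return new_a, new_b

# A 1500-rated player beats a 1700-rated player: the upset moves both ratings.
new_a, new_b = elo_update(1500, 1700, 1.0)
```

Because the winner gains exactly the points the loser gives up, the rating pool is conserved, which is what makes the scale a stable, quantitative ranking of the kind the entry describes.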
In some cases, whether somebody is an expert or not critically depends on the context: A fortuneteller is considered an expert in some societies but not in others. In some domains at least, the status of an expert is clear. A skeptic might doubt the expertise of a Roger Federer, but playing a game of tennis against him would swiftly dispel any doubts. The study of expertise is interesting and important for the sciences of learning, for several reasons. First, individuals who are capable of extraordinary performances offer a unique window on human cognition. Second, and related to the first point, these individuals can shed light on strategies to push the previously known limits of human cognition and rationality. These strategies can be useful to other people, even nonexperts. Third, studying experts can illuminate which training methods are efficient and which ones are not, again with applications for nonexperts and even for education in general. Finally, a better understanding of the cognitive processes underpinning expertise and its development may help the development of artificially intelligent systems capable of performing at a level equal to or even higher than that of the best human experts. In psychology, two traditions have dominated the study of extraordinary performances: one based on the notion of talent, and the other based on the notion of expertise (see Table 1). The first tradition (e.g., Eysenck 1995) goes back to the nineteenth century, with Gall's phrenology in Germany and the works of Galton in England. It aims to show that innate talent is necessary
for high levels of performance. For example, researchers such as Eysenck in England and Jensen in the USA have suggested that intelligence quotient (IQ) correlates with the efficiency of elementary perceptual processes. This approach is characterized by the use of correlations, the use of data from neurobiology, and a focus on interindividual differences.

Table 1 Comparison of the two traditions that have dominated the study of exceptional performances: on the left, the talent approach; on the right, the expertise approach

Extraordinary performance
Talent | Expertise
Correlational studies | Experimentation and modeling
Psychology of intelligence | Cognitive psychology
Innate | Acquired
Differences between novices and experts | Similarities between novices and experts
Children/adults | Adults
Normal and pathological | Normal

The second tradition (e.g., Ericsson et al. 2006) has focused on the learning mechanisms and the environmental conditions that make possible the development of extraordinary performances. The emphasis is on adults and "normal" individuals, as opposed to individuals suffering from pathologies (e.g., autistic calculators). Rather than correlations, this approach uses laboratory experiments where experts are asked to carry out tasks representative of their skills. For example, given the description of a case, a physician is asked to carry out a diagnosis and venture a prognosis. The experiments carried out by the expertise approach tend to use standard experimental paradigms in cognitive psychology. The focus is on the similarities between the performances of individuals of the same skill level, or even between the cognitive mechanisms used by individuals at different levels of expertise. Differences between individuals of different skill levels tend to be explained by differences in practice. Computational modeling is sometimes used to formalize the processes thought to underpin the development of expertise.
In other words, a computer program is designed which simulates expert performance. The program is required to produce the same results as an actual human expert, and therefore needs to operate using the same cognitive processes as an actual human. In some cases, the computer models are able to reproduce the detail of the behavior at different levels of skill. Sadly, there have been only a few exchanges between these two traditions, and the rare examples have tended to highlight the disagreements rather than the possible commonalities (see, e.g., the Howe et al. 1998 article in Behavioral and Brain Sciences and the ensuing replies). A few authors (e.g., Mackintosh 1998) occupy an intermediate position on the continuum ranging from a pure hereditarian position to a pure environmentalist position. Given its own research interests, the expertise tradition has had much more to say about the development of extraordinary performances than the talent approach, which emphasizes the role of traits that are essentially fixed. Starting with Binet in 1894, expertise development has been seen as the acquisition of knowledge. De Groot in 1946 described in more detail the ways expert knowledge is organized, and also had the critical insight that knowledge is closely linked to perception. In 1964, Fitts proposed three stages in the acquisition of perceptual and motor skills. During the cognitive phase, rules, procedures, and facts are learned by instruction, trial and error, and feedback. During the associative phase, stimuli are associated with responses, and chains of responses are built. During the autonomous phase, behavior becomes self-sufficient and independent of cognitive control. Practice and feedback play an essential role in the last two phases. Building on the work of De Groot, Simon and Chase (1973) proposed that knowledge is encoded as small ▶ chunks, fairly simple data structures.
They also suggested that the development of knowledge consists of acquiring a large number (between 10,000 and 100,000) of these chunks. Some of the chunks are linked to potentially useful actions. Gobet and Simon in 1996 combined the idea of a chunk with that of a schema, proposing that the chunks that are used often by an expert become templates, more complex data structures akin to a schema, which consist of both fixed and variable information. Ericsson and colleagues have pushed the position of the expertise approach to the extreme, arguing that ▶ deliberate practice is sufficient to attain high levels of expertise. In line with Chase and Simon's account, Newell and Rosenbloom proposed in 1981 that expertise is the product of the acquisition of a large number of ▶ productions – simple rules consisting of a condition and an action. Combining computational and mathematical modeling, they showed that, following the principle of diminishing returns, the accretion of chunks leads to a power law of learning in performance; that is, there are rapid improvements at the beginning, followed by increasingly slower improvements thereafter.

Important Scientific Research and Open Questions
In spite of considerable research in recent decades, the field of expertise faces a number of open questions and challenges. As noted above, defining expertise has turned out to be tricky, and, in many domains, a better operationalization of this concept is desirable. The interaction between the environment and the cognitive processes engaged by experts is rarely well understood. At the extreme, it could be argued that studying experts will tell us little about the cognitive mechanisms underpinning expertise, but much about the domain itself – the structure of the environment. For example, will observing biologists reveal anything about their thinking beyond what could be found in a biology textbook?
Obviously, students of expertise believe this is the case, but stronger evidence than that collected so far would be welcome. While far from being resolved, the debate between talent and practice, one of the many variations of the great debate between the innate and the acquired, has gained momentum with the rapid advances in neuroscience and genomics in recent decades. The results seem to bring support and comfort to both camps. On the talent side, the developments in genomics make it hard to doubt the role of genes in individual differences, including differences between the best individuals in a domain. On the expertise/practice side, the developments in neuroscience have highlighted the plasticity of the brain and its remarkable ability to learn. Given the complexity of these results, together with the complexity of the results collected in expertise research itself, it is likely that the only way forward is to use some form of computational modeling. Finally, one of the great unknowns in this field is whether there are stages in the development of expertise, as proposed for example by the early work of Bryan and Harter in 1899 on telegraphy. The presence of power laws seems to suggest that expertise development is continuous, but stages keep appearing in theories of expertise.

Cross-References
▶ Bounded Rationality and Learning ▶ Chunking Mechanisms and Learning ▶ Development of Expertise ▶ Individual Differences in Learning ▶ Learning in Practice ▶ Learning in the CHREST Cognitive Architecture ▶ Schema

References
Didierjean, A., & Gobet, F. (2008). Sherlock Holmes – An expert's view of expertise. British Journal of Psychology, 99, 109–125. Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (Eds.). (2006). The Cambridge handbook of expertise and expert performance. New York: Cambridge University Press. Eysenck, H. J. (1995). Genius: The natural history of creativity. New York: Cambridge University Press. Howe, M. J. A., Davidson, J. W., & Sloboda, J. A. (1998). Innate talents: Reality or myth? The Behavioral and Brain Sciences, 21, 399–442. Mackintosh, N. (1998). IQ and human intelligence. Oxford: Oxford University Press. Simon, H. A., & Chase, W. G. (1973). Skill in chess. American Scientist, 61, 393–403.

Development of Expertise and High Performance in Content-Area Learning
HANS GRUBER1, HELEN JOSSBERGER2
1 Institute of Educational Science, University of Regensburg, Regensburg, Germany
2 Center for Learning Sciences and Technologies (CELSTEC), Open Universiteit, Heerlen, The Netherlands

Synonyms
Acquisition of expertise; Professional learning; Professional performance

Definition
Expertise denotes the reproducible superior performance of an individual in a particular professional domain, e.g., medicine, music, and physics. The development of expertise is based on many years of dedicated and directed content-area learning. This kind of learning is described as deliberate practice. Usually old-timers, e.g., teachers and trainers, guide newcomers in their deliberate practice. Practice leads to substantial adaptations to the task constraints of the domain. Most prominent are cognitive adaptations like memory improvement or knowledge acquisition, but physiological and perceptual-motor adaptations occur in many domains as well. During the development of expertise, subjects play an increasingly central role in professional networks.

Theoretical Background
Research about the development of expertise is a significantly expanding area of interest in psychology, educational science, sociology, economics, and many other sciences (Ericsson et al. 2006). Understanding the nature of high performance is of relevance in many contexts in daily and professional life.
Scientific analyses focus on (1) characteristics of experts and (2) learning processes and practice patterns during the development of expertise. Experts are persons who, by objective standards and over time, show reproducible superior performance in typical activities of a complex domain, i.e., a professional content area like medicine or physics. Experts in any complex domain have intensively practiced for 10 years or more (the 10-year rule of necessary preparation), and thus have extensive knowledge at their disposal. Even the most "talented" individuals need such a period of preparation to attain outstanding performance, and most experts practice considerably longer. Expertise is extremely domain specific, and it cannot be easily transferred to a different domain. It has been shown that experts do not possess outstanding domain-independent competences. For example, although excellence in memorizing new domain-related information is one of the most striking characteristics of experts, experts' memory is not superior to novices' in general. De Groot's (1965) seminal work on chess masters pioneered the scientific orientation of research on expertise. The most striking difference between grandmasters and weaker players was revealed in a memory task, in which subjects were presented with chess positions for a few seconds and asked to immediately reconstruct them. The experts' superior recall was explained by specific perceptual structures they held in memory, which were closely related to their domain-specific knowledge. De Groot's interpretation of the findings oriented future research on expertise toward information processing theory and cognitive psychology. The focus on the analysis of cognitive adaptations during the development of expertise (perception, memory, knowledge, problem solving) has since been maintained and has also influenced the empirical research methods.
Most studies on expertise use the contrastive approach, in which experts are compared with novices; sometimes subjects with an intermediate level of expertise (semi-experts) are investigated, too.

Important Scientific Research and Open Questions
Memory and Knowledge
Early studies on expertise were characterized by a focus on cognitive adaptations like knowledge and memory. Based on the work by De Groot (1965), it was found that experts remember domain-specific information faster and more effectively, as well as recall domain-specific information more accurately, than novices. This was the case in many different domains. The experts' superior recall is explained by specific perceptual structures they hold in memory, which are closely related to their domain-specific knowledge. In addition, experts are able to store information faster in their long-term memory and therefore can retrieve and apply new information more quickly. The concept of pattern recognition (Chase and Simon 1973) explains experts' ability to very quickly recognize relevant patterns. It is based on the integration of perception and knowledge through practice. Components of successful pattern recognition are the mechanisms of chunking and skilled memory. The mechanism proposed in the chunking concept is the integration of small knowledge units into larger chunks that are labeled with indices, which are subsequently used during recall instead of the separate knowledge units. Experts' superior memory performance is closely related to their knowledge. Experts command a great amount of knowledge and an advantageous knowledge organization, which allow them to make functional and efficient use of their knowledge. However, expert knowledge is much more than factual or declarative domain knowledge. Though experts have much domain knowledge available, they excel in using this knowledge flexibly, modifying it or refraining from applying it in certain contextual circumstances.
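The chunking account described above can be caricatured in a few lines of code. In this toy model (the position, the chunk inventories, and the short-term memory capacity of four slots are invented purely for illustration; real chess chunks are perceptual patterns, not lists of strings), an "expert" who has stored familiar piece configurations reconstructs the whole position, while a "novice" without chunks retains only a few isolated pieces:

```python
def recall_with_chunks(position, chunks):
    """Greedy sketch of chunk-based recall: elements covered by a known
    chunk are retrieved as one unit; leftover elements must each occupy
    a memory slot of their own."""
    STM_CAPACITY = 4  # rough short-term memory limit, counted in chunks
    remaining = list(position)
    recalled = []
    slots_used = 0
    # Familiar patterns are recognized first and stored as single units.
    for chunk in sorted(chunks, key=len, reverse=True):
        if slots_used >= STM_CAPACITY:
            break
        if all(elem in remaining for elem in chunk):
            recalled.extend(chunk)
            for elem in chunk:
                remaining.remove(elem)
            slots_used += 1
    # Unchunked pieces each consume one slot.
    for elem in remaining:
        if slots_used >= STM_CAPACITY:
            break
        recalled.append(elem)
        slots_used += 1
    return recalled

position = ["Ke1", "Qd1", "Ra1", "Rh1", "Nb1", "Ng1", "Bc1", "Bf1"]
expert_chunks = [("Ke1", "Qd1"), ("Ra1", "Rh1"), ("Nb1", "Ng1"), ("Bc1", "Bf1")]
novice_chunks = []
expert_recall = recall_with_chunks(position, expert_chunks)
novice_recall = recall_with_chunks(position, novice_chunks)
```

With the same four memory slots, the expert's four two-piece chunks cover all eight pieces, whereas the novice retains only four individual pieces – qualitatively the asymmetry De Groot observed between grandmasters and weaker players.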
Many different types and qualities of knowledge can be differentiated, each with a distinct functionality. Most important for the acquisition of expertise is the distinction between declarative knowledge (knowing-what) and proceduralized knowledge (knowing-how). Proceduralization of knowledge is one of the basic mechanisms for explaining the acquisition of expertise.

Acting and Problem Solving
Many expert actions are highly automated. With growing expertise, knowledge increasingly becomes proceduralized. In his ACT∗ model, Anderson (1982) stated that skill acquisition mainly consists of changing declarative knowledge through practice into proceduralized knowledge. This is achieved through knowledge compilation and rule tuning. Learners first have to acquire much declarative knowledge, which is later proceduralized and associated with action sequences. Then the skill is automatized and tuned through repeated practice. But experts are able to do more: The concept of adaptive expertise refers to an increasing flexibility of actions based on a growing and refined knowledge base. Obviously, transformations in the nature of knowledge during practice are at the core of the development of expertise. The most illustrative example of such transformations is the process of knowledge encapsulation, which was described by Boshuizen and Schmidt (1992) in the medical domain. Through professional activity and experience with patients, declarative knowledge about diseases is transformed into illness scripts. These are generalized knowledge structures, which are based on episodic experiences with real cases and are therefore most useful in daily work. The close relation between declarative knowledge and case information allows a quick reaction in diagnosing future cases, because these tend to be rather similar to each other. Only in "emergency" situations is the retrieval of declarative knowledge effortful and explicit.
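Anderson's knowledge-compilation step can be caricatured as rule composition: two productions that reliably fire in sequence are collapsed into a single proceduralized rule. The sketch below is a deliberately simplified illustration; the rule format and the typing example are invented for this sketch, not taken from ACT∗:

```python
def compose(rule1, rule2):
    """Compose two productions that fire in sequence into one rule.

    The new condition is rule1's condition plus whatever rule2 needs
    that rule1 does not itself produce; the new action chains both
    actions, so no intermediate interpretation step remains.
    """
    condition = set(rule1["condition"]) | (set(rule2["condition"]) - set(rule1["action"]))
    action = rule1["action"] + rule2["action"]
    return {"condition": sorted(condition), "action": action}

# Two steps a learner initially runs separately, each requiring retrieval...
r1 = {"condition": ["goal: type 'hi'"], "action": ["press h"]}
r2 = {"condition": ["press h"], "action": ["press i"]}
# ...compiled through practice into one proceduralized rule.
compiled = compose(r1, r2)
```

The compiled rule fires directly on the goal and emits the whole action sequence, which is the sense in which practice replaces effortful declarative interpretation with a single procedural step.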
Knowledge transformation processes during the development of expertise are neither arbitrary processes, nor do they occur just as a function of time of professional work. They require deliberate actions, reflection in and about practice, and most probably meta-cognitive awareness.

Deliberate Practice and Case-Based Learning
Retrospective analyses of the learning processes of high performers in many different domains revealed the same recurring pattern: Experts differed from other individuals early in their careers, because they practiced more efficiently, had more committed teachers, and had higher achievement demands. Ericsson et al. (1993) found that experts were more involved in effortful training activities over a long period of time that solely had the purpose of improving performance. They proposed to call such activities "deliberate practice." Deliberate practice is effortful and not necessarily enjoyable, and therefore individuals must be extremely motivated to persist in training. Most people prefer activities that are motivated by inherent enjoyment (play) or external rewards (work). Therefore, expert teachers are important to support and stimulate deliberate practice by offering explicit teaching goals, feedback, and opportunities for gradual improvement through repetition and correction of errors. There is convincing evidence that deliberate practice is crucial to acquire the relevant knowledge and tricks-of-the-trade in any complex content area. Deliberate practice focuses – usually with the support of a trainer or skilled teacher – on the identified areas of "incompetence," on designed step-by-step practice units, and on monitoring the degree of improvement. Teachers or trainers play a crucial role, as they function not only as domain experts, but also as teaching experts. As a domain expert, the teacher provides knowledge about typical requirements of the domain.
As a teaching expert, the teacher functions as a personified accumulation of knowledge about appropriate teaching methods for domain-specific contents and for the development of skills. There is converging evidence that educational support of deliberate practice in complex content areas is closely related to case-based learning in (near-to) authentic learning environments. The encapsulation models linked the transformation of professional knowledge with the experience gained while being engaged in case-based reasoning. As learning by experience with cases transforms knowledge structures, the acquisition of expertise can be supported instructionally. One way to foster the reflective application of knowledge is to present complex learning environments in which real application situations occur. Case-based learning stresses the similarity between the learning situation and the application situation and implies a number of advantages: (a) By dealing with complex initial case problems, learners get a notion of the relevance of the learning matter. (b) The authenticity and situativity of cases enable learners to gain experience in complex episodes of learning. (c) Multiple perspectives on the same subject matter help to avoid oversimplifications and to enhance the transferability of the to-be-learned content.

Open Questions
The outlined characteristics of the development of expertise and high performance in content-area learning suggest three lines of research: (1) understanding the nature of individual excellence and the learning processes that lead to such excellence; (2) understanding the historical development of the respective domain, in particular the development of the social and cultural contexts in which expert performance is embedded; and (3) understanding the longitudinal development of individual expertise, including both the experts' cognitive development and their changing role within professional communities. So far, most research has focused on the first topic.
In particular, cognitive characteristics of individual excellence (knowledge, memory, problem solving, reflection, knowledge transformation, practicing) have been studied intensively. Research on the roles of social and cultural contexts has mainly addressed the role of trainers, coaches, teachers, etc., in the guidance of deliberate practice; however, other social and cultural influences have hardly been investigated. There are some interesting developments in sociocultural approaches to the analysis of expertise, but there are still many unresolved research problems. As the acquisition of expertise is based on deliberate practice and long-term training, socialization (i.e., the social influence through which a person acquires the culture or subculture of his or her group) can be expected to have a strong influence on expertise development. Hakkarainen et al. (2004) studied "networked expertise" activities from the perspective of a community pursuing a certain activity. More analyses of this kind have to be designed in the future in order to make the processes of social negotiation during the development of expertise more transparent. The development of expertise requires a considerable period of time. Observations of professional careers in many different domains have led to the conclusion that at least 10 years of deliberate practice are required in order to fully develop expertise. It is far from trivial to design longitudinal research that reliably investigates the development of expertise. It may be expected that extensive longitudinal studies would reveal many new components of the development of expertise in addition to the cognitive structures and processes which have been the main focus in cross-sectional research. The motivational question of why some individuals are willing to work and deliberately practice for many years, whereas many others are not, is one of the big puzzles to be resolved.
Cross-References
▶ Content-Area Learning ▶ Deliberate Practice and Its Role in Expertise Development ▶ Development of Expertise ▶ Expertise ▶ Long-Term Expertise Development in Complex Domains and Individual Differences

References
Anderson, J. R. (1982). Acquisition of cognitive skill. Psychological Review, 89, 369–406. Boshuizen, H. P. A., & Schmidt, H. G. (1992). On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cognitive Science, 16, 153–184. Chase, W. G., & Simon, H. A. (1973). Perception in chess. Cognitive Psychology, 4, 55–81. De Groot, A. D. (1965). Thought and choice in chess. The Hague: Mouton. Ericsson, K. A., Krampe, R. T., & Tesch-Römer, C. (1993). The role of deliberate practice in the acquisition of expert performance. Psychological Review, 100, 363–406. Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (Eds.). (2006). The Cambridge handbook of expertise and expert performance. Cambridge: Cambridge University Press. Hakkarainen, K., Palonen, T., Paavola, S., & Lehtinen, E. (2004). Communities of networked expertise: Educational and professional perspectives. Amsterdam: Elsevier.

Development of Judgment
▶ Aesthetic Learning

Development of Musical Experience
▶ Developmental Psychology of Music

Development of Self-consciousness
ANNA STRASSER
Humboldt-Universität zu Berlin, Center of Integrative Life Sciences, Berlin, Germany

Synonyms
Core self; Egological self; Minimal self; Narrative self; Pre-reflexive self-consciousness; Reflexive self-consciousness; Self-knowledge; Self-recognition; Self-reference

Definition
Self-consciousness is a highly debated notion in philosophical discussions. As such, there is no general definition to be found. The same is consequently true for the development of self-consciousness; positions vary from claims of innateness up to positions maintaining that the development of self-consciousness is dependent on the development of higher cognitive abilities.
Self-consciousness is a special type of consciousness – whether there can be consciousness without self-consciousness is naturally still under debate. Many forms of self-consciousness can be differentiated: A major distinction can be made between fully developed forms and antecedent, not fully developed forms of self-consciousness. The former is referred to as reflexive self-consciousness or the narrative self. The latter is often called pre-reflexive self-consciousness or the minimal self. Such not fully developed forms can have the functional role of clarifying developmental questions and, particularly in philosophy, of grounding claims about logical necessities. Instead of a definition, an example shall give insight into what is meant by the notion of self-consciousness: A person who is able to ascribe a mental state to herself is self-conscious, e.g., that she is planning to read this whole entry. This act of self-ascription implies several abilities and special knowledge. First of all, one has to have the ability to differentiate between oneself and others (▶ self-other/world differentiation). Further, one needs the ability to refer to oneself as oneself consciously (▶ conscious self-reference). This means that the person has to know that this statement is about the person she is and not about anyone else. Further, ascribing mental states to oneself implies the ability to ascribe mental states to others (▶ theory of mind). If a person does not know what it means when another person is making such a statement, it is hard to explain how she would know what it means in her own case. Additionally, one needs the special knowledge that one is just one and the same person and that one persists continuously in time (▶ synchronic and diachronic identity). The question about the development of self-consciousness can be answered by analyzing the necessary and sufficient conditions for the development of those capacities.
Theoretical Background
Any theory concerning the development of self-consciousness includes a clarification of what self-consciousness is meant to be and what the object of this type of consciousness is. The following overview will constitute a snapshot of several important debates, and will address topics like self-knowledge, the privileged first-person perspective, the use of the first-person pronoun, and the general ability of self-reference. Historical debates focus on the idea of something like a metaphysical substance of a self, an idea reflected in the well-known debate about the mind-body problem. Dualist positions – claiming there is a nonmaterial, metaphysical self – have nowadays become more or less a minority position, and consequently metaphysical ideas are no longer debated that intensely. But the negation of the existence of a metaphysical self still plays an important role in skeptical accounts of self-consciousness. The existence of the phenomenon of self-consciousness itself is not debated; rather, what self-consciousness represents, i.e., the self, can be defined in various ways: Marvin Minsky (1990) claims that the self consists of many different agents. Daniel Dennett (1991) describes self-consciousness as a story made of many sketches of self-interpretation; the center of this story is the "I," which is a fiction that could even be developed by computers. Most skeptical is Thomas Metzinger (2004), according to whom self-consciousness is illusory. His theory of the phenomenal self-model, which includes many representational features, explains how this illusory, phenomenal feeling of being in direct contact with oneself is developed. But, for Metzinger, there is no self there to be represented. The historical antecedent of those positions is Hume's negation of the notion of a substance, claiming that there is just "a bundle or collection of different perceptions." This means that the perceiver himself cannot be perceived.
Many theories of self-consciousness focus on the condition of conscious self-reference. This includes the ability to refer to yourself as yourself, as well as the abilities to self-ascribe properties, parts of your body, mental states, and actions. The main question is: How is the ability of conscious self-reference possible? One answer is given by the so-called "Heidelberger Schule" (Manfred Frank 1991): it claims that a pre-reflexive form of self-consciousness is logically necessary to make reflexive self-consciousness possible, and that this pre-reflexive form is not analyzable and seems to be innate. In the current debate, one can find the position that primitive forms of self-consciousness make higher forms possible; e.g., José L. Bermúdez argues that nonconceptual representations can be used to explain the development of conceptual representations of oneself. Another strategy to explain how one can be conscious of one's own mental states is given by the so-called meta-representation theories (cp. Peter Carruthers, Daniel Dennett, and David Rosenthal); these theories introduce a level of higher-order thoughts that is responsible for our ability to recognize ourselves as ourselves. Developmental questions are not treated by those theories. Analytic theories of self-consciousness tend to focus on the linguistic ability to use the first-person pronoun in the right way (cp. Donald Davidson, Gareth Evans, John Perry, Ludwig Wittgenstein). Concerning developmental questions, those theories result in the claim that fully developed self-consciousness can only be ascribed once linguistic abilities are developed.

Important Scientific Research and Open Questions

Findings of developmental psychology – for example, the imitation behavior of newborn babies (cp. Andrew N. Meltzoff), the so-called rouge test, and the results of Theory of Mind studies – are used to illustrate the development of self-consciousness.
Albert Newen and Kai Vogeley (2003) define, e.g., five stages of self-consciousness: phenomenal self-acquaintance, conceptual, sentential, meta-representational, and iterative meta-representational self-consciousness. They give a clear description of the point in human development at which those stages can be found:

1. Phenomenal self-acquaintance, understood as the ability to recognize sensory states like pain, is claimed to be present even before birth.
2. The next stage – conceptual self-consciousness – realized by conceptual representations and necessary for the ability to classify objects, appears already in 8–12-month-old babies.
3. Sentential self-consciousness, understood as the ability to categorize events or complex scenes, can be found in children aged 1–3 years.
4. Meta-representational self-consciousness, understood as the ability to construct mental models of other minds, demands the ability to ascribe first-order propositional attitudes like "Darja thinks that p." This ability can be found in children aged 2–4 years.
5. So-called iterative meta-representational self-consciousness does not develop before the seventh year; it implies the ability to ascribe second-order attitudes like "Liz believes that Glenn thinks that p."

Positions vary widely regarding the point in human development at which self-consciousness can first be ascribed. Some hold a very basic notion of self-consciousness, close to the above-mentioned notion of pure self-acquaintance; in this case, some will ascribe self-consciousness at a very early stage, sometimes even before birth. For example, it is claimed that self-consciousness in the sense of a minimal self is already present when newborn babies show imitative behavior. Other, more demanding notions of self-consciousness maintain that higher cognitive abilities, e.g., the above-described conceptual self-consciousness, must develop first.
Whether higher cognitive abilities such as recognizing something as something demand linguistic abilities is still under discussion. Analytic theories claim that a subject is self-conscious only if it uses the first-person pronoun correctly. This precludes the ascription of self-consciousness to young children without linguistic abilities. In further debates, it is also discussed whether self-consciousness in a basic form can be found in animals (animal cognition) and whether it will someday be found in AI systems. Most important for developmental questions are theories of self-consciousness that take intersubjectivity into account. Such theories focus on the mentioned Theory of Mind abilities. One may even say that self-consciousness is now being researched as a part of the broader study of social cognition (cp. Shaun Gallagher, Albert Newen & Gottfried Vosgerau, and Dan Zahavi). Here, debates concerning social interaction lead to debates about the development of self-consciousness. George Herbert Mead (1968) gave the first philosophical account of the functional role of the other in the development of self-consciousness. Historically, ideas about the role of others can already be found in Georg Wilhelm Friedrich Hegel, Edmund Husserl, and Martin Heidegger; in modern times, other subjects figure prominently in the work of Jean-Paul Sartre and Maurice Merleau-Ponty. From a more philosophical point of view, research about the development of concepts suggests that being able to ascribe psychological properties to oneself demands the ability to ascribe those properties to others as well (generality claim). This claim can be seen as analogous to the debate of Lynne Baker, Gareth Evans, and Peter F. Strawson, which shows that the ability to entertain a concept implies the ability to apply it to more than one case. That means mere self-ascription is not enough.
What specific kinds of interaction are needed to ascribe psychological properties to other mental agents is still an open question, one that can be addressed by examining the development of theory of mind capacities. Psychiatry, with its many descriptions of deficits concerning self-consciousness, opens up a new field of interdisciplinary research. Cooperation with psychiatry can enrich philosophical perspectives on self-consciousness. Psychopathological descriptions of self-consciousness deficits – as deficits of the ability of self-ascription – offer interesting possibilities to approach questions like what can go wrong in the development of self-consciousness. In particular, the debate about schizophrenia gives several examples of how certain aspects of self-ascription can go wrong. For example, the symptom of thought insertion calls for an analysis explaining why self-generated thoughts are not self-ascribed. Another fruitful field of cooperative research concerns the deficits one can observe in autistic patients; these may shed further light on the role of social interaction in the development of full-fledged self-consciousness.

Cross-References
▶ Imitation and Social Learning
▶ Mental Models
▶ Naturalistic Epistemology
▶ Philosophy of Learning
▶ Theory of Mind in Animals

Development of Team Schemas

SANDRA P. MARSHALL
Department of Psychology, San Diego State University, San Diego, CA, USA

Synonyms
Developing team schemas

Definition
A team schema reflects common knowledge and procedural processes that are relevant to a given situation in which the team is working. Well-functioning teams will have a generally consistent understanding of the critical aspects of the situation and the possible outcomes that may derive from it. Team schemas are fluid and dynamic, changing constantly as members of the team make new contributions to the collective whole.
The four knowledge components – identification, elaboration, planning, and execution – found in problem-solving schemas and decision-making schemas are present in team schemas as well.

References
Dennett, D. C. (1991). Consciousness explained. Boston: Little, Brown.
Frank, M. (1991). Die Unhintergehbarkeit von Individualität: Reflexionen über Subjekt, Person u. Individuum aus Anlaß ihrer "postmodernen" Toterklärung. Frankfurt: Suhrkamp.
Mead, G. H. (1968). Geist, Identität und Gesellschaft. Frankfurt a. M.: Suhrkamp.
Metzinger, T. (2004). Being no one. Cambridge: MIT Press.
Minsky, M. (1990). Mentopolis. Stuttgart: Klett-Cotta.
Newen, A., & Vogeley, K. (2003). Self-representation: Searching for a neural signature of self-consciousness. Consciousness and Cognition, 12, 529–543.

Development of Self-Control
▶ Volitional Learning

Development of Self-Regulation
▶ Volitional Learning

Theoretical Background
Not surprisingly, research about decision-making schemas in military settings grew into research about team decision making, with a focus on how schemas are developed within a team and how to assess the effectiveness of schema-based decision making. Just as decision-making schemas for an individual are related to situation awareness, team schemas are linked to what is known as "shared situation awareness," a topic of great interest in the military (Endsley 2000). This area of research is in its infancy, with few published studies thus far available. A basic question about a team schema centers on how the information necessary for the schema is shared. Each individual in a team has unique experiences, which contribute to his or her own schemas. But a team needs to have common experiences, so that each team member has in memory the relevant features that contribute to schema development. Simply having common experiences, however, is insufficient evidence of shared situation awareness or of the development of a team schema that can be used to guide future experiences.
Stronger evidence comes from communications among team members. When teams work together to solve a problem or make a decision, they invariably must communicate with each other to convey changing features of the situation. Analysis of these communications provides evidence of strong or weak schema development among the team members. For example, Marshall (2007) analyzed the communications of teams by coding each comment according to the type of schema knowledge reflected (i.e., identification, elaboration, planning, and execution). Teams that performed better required fewer communications than teams that performed poorly, suggesting that these teams did indeed share a common view whose details did not need to be discussed. Better performing teams made more comments reflecting planning and execution knowledge than did poorly performing teams. Moreover, the poorly performing teams made many more irrelevant comments that were not pertinent to the task at hand. All teams made a large number of elaboration comments, indicating the importance of building and maintaining the shared mental model that teams hold. Further examination of elaboration knowledge via communication analysis shows at least five different types of statements that strengthen or modify this schema component (Marshall 2008). Conjectures represent hypotheses about situation elements and are indicators of well-developed schemas; they help other team members better understand what happens next. Updates add to the collective body of knowledge, keeping everyone informed about changes that occur in the situation. Acknowledgments are the most frequent comments among team members and are essential to communicate receipt of information; they also imply agreement with the received information, which further strengthens the team view. Queries and corrections are the final two types of elaboration knowledge, and both indicate weak or poorly formed schemas.
These two forms of communication are disruptive to team schema development. A query usually indicates that the speaker is uncertain or has missed part of the information he or she should already have, interrupting the normal flow of conversation. Similarly, corrections are disruptive because they require the other team members to change information that may already have been incorporated into the group schema.

Important Scientific Research and Open Questions

Research on team schemas is of particular importance in applied fields such as military decision making, because these decisions are made in real-world settings by groups of individuals working together. Many basic questions remain to be answered, such as how schemas can be modified, how schemas can be developed quickly and efficiently, and how new team members acquire an existing team schema when they are integrated into a team.

Cross-References
▶ Schema-Based Problem Solving
▶ Schemas and Decision Making
▶ Shared Cognition

References
Endsley, M. (2000). Theoretical underpinnings of situation awareness: A critical review. In M. Endsley & D. Garland (Eds.), Situation awareness analysis and measurement (pp. 3–28). Mahwah, NJ: LEA.
Marshall, S. P. (2007). Measures of attention and cognitive effort in tactical decision making. In M. Cook, J. Noyes, & Y. Masakowski (Eds.), Decision making in complex environments (pp. 321–332). Aldershot, Hampshire, UK: Ashgate Publishing.
Marshall, S. P. (2008). Cognitive models of tactical decision making. In Proceedings of the Second International Conference on Applied Human Factors and Ergonomics, Las Vegas, July 2008.
Development, Emergence, and Maturation of Memory
▶ Ontogeny of Memory and Learning

Developmental and Practice Histories
▶ Expert Perceptual and Decision-Making Skills: Effects of Structured Activities and Play

Developmental Cognitive Neuroscience and Learning

ANNA MATEJKO, DANIEL ANSARI
Department of Psychology, The University of Western Ontario, London, Canada

Synonyms
Developmental science; Educational neuroscience; Neurocognitive development

Definition
Developmental cognitive neuroscience is a multidimensional and interdisciplinary field that attempts to explain how cognitive development is supported by changes in underlying brain structure and function, and how brain organization changes over developmental time (Johnson 2011). Developmental cognitive neuroscience lies at the intersection of multiple fields including brain imaging, electrophysiology, neurogenetics, computational modeling of development, and comparative research with nonhuman primates. Neuroscience provides a means by which to constrain our understanding of cognitive development and learning to biologically plausible mechanisms. Developmental cognitive neuroscience will help determine the neurobiological processes of learning and development, and the mechanisms that support changes (neuronal plasticity) in brain function and structure over time.

Theoretical Background

Historical Background of Developmental Cognitive Neuroscience
Developmental cognitive neuroscience has emerged from the intersection of developmental psychology and neuroscience. Though both parent disciplines have long histories, developmental cognitive neuroscience did not become a unified field until the first decade of the twenty-first century. The field of cognitive development became firmly established after Jean Piaget examined how children think and learn.
Even though Hans Berger first measured "brain waves" in children in 1932 using electroencephalography (Niedermeyer 2005), neural systems were not systematically studied until much later. In the 1980s, magnetic resonance imaging (MRI) became a useful tool for examining brain structure. Noninvasive imaging of the functioning brain became possible with MRI when the blood oxygen level dependent (BOLD) response was discovered. The BOLD signal reflects changes in blood oxygenation in the brain; functional activity in a given brain region can then be inferred from differences in BOLD signal between experimental conditions. The first application of this method to study ontogenetic changes in brain function was carried out by Casey and colleagues (1995), who conducted the first controlled functional magnetic resonance imaging (fMRI) study with children and examined brain regions involved in verbal working memory. Since that time, research in developmental cognitive neuroscience has grown exponentially and encompasses many aspects of cognitive development including perception, language, cognitive control, number, memory, and social cognition.

Developmental Cognitive Neuroscience: A Multidimensional and Interdisciplinary Field

Methods
Some of the primary methods used to understand learning in the developing brain are electroencephalography (EEG), magnetoencephalography (MEG), positron emission tomography (PET), near-infrared spectroscopy (NIRS), functional magnetic resonance imaging (fMRI), diffusion tensor imaging (DTI), transcranial magnetic stimulation (TMS), computational models of development, and comparative research with nonhuman primates (see Table 1 for explanations of the imaging and electrophysiological measures). These methods complement each other and provide additional means by which to better understand child development.
Populations
Developmental cognitive neuroscience examines populations from infancy to young adulthood and encompasses both typically and atypically developing individuals. Learning disabilities both with and without a known genetic cause help explain the biological mechanisms underlying learning and reveal atypical trajectories of cognitive and brain development. For example, children with Williams Syndrome have a genetic abnormality and exhibit atypical trajectories of cognitive abilities. They have been examined to determine both genetic and environmental factors in learning, brain function, and cognition (Karmiloff-Smith 1998). Developmental disabilities such as dyslexia (reading disability) or dyscalculia (disorder of arithmetic learning) have also been studied to determine whether functional or structural brain deficits are associated with learning difficulties. The study of developmental disorders is not only important for clinical and societal reasons, but may also elucidate the cognitive and neurological processes of typical development (Karmiloff-Smith 1998).

Developmental Cognitive Neuroscience and Learning. Table 1 Methods in developmental cognitive neuroscience

Electroencephalography (EEG) and event-related potentials (ERPs)
What it measures: Electrical activity of the brain that originates from neural firing.
How it works: Multiple electrodes on the scalp measure small voltage changes. EEG waves can be combined to determine average task-dependent activity across trials.
Advantages: High temporal resolution.
Disadvantages: Difficult to determine the source of the activity.

Magnetoencephalography (MEG)
What it measures: Magnetic fields elicited by electrical currents from neural firing.
How it works: Magnetic fields are measured on the scalp using sensors in a helmet-shaped apparatus.
Advantages: Better spatial resolution than EEG; head and brain tissue interfere little with the signal.
Disadvantages: Requires a lot of equipment and has an elaborate setup.

Positron emission tomography (PET)
What it measures: How much of a particular compound is being metabolized. When the compound breaks down it emits photons of light; brain regions that are more active will metabolize more of the compound.
How it works: A radioactive compound is injected into the bloodstream. Regions that are more active will emit more photons. The scanner detects the emitted photons, and a map of the brain can then be reproduced.
Advantages: Flexible in the types of tasks that can be administered in PET scans.
Disadvantages: Uses radioactive chemical tracers; images are averaged over times longer than the underlying processes likely require.

Near-infrared spectroscopy (NIRS)
What it measures: A form of optical imaging, NIRS measures small changes in the absorption, scattering, and bending of emitted light.
How it works: Weak light beams are emitted at the scalp and the distortion of the light beams is measured. Just as in fMRI, changes in blood oxygenation can be measured.
Advantages: Not very sensitive to motion; no cumbersome equipment; a good alternative to fMRI with small children.
Disadvantages: Limited to regions close to the skull; limited spatial resolution.

Functional magnetic resonance imaging (fMRI)
What it measures: The blood oxygen level dependent (BOLD) response, assessing blood oxygen levels in different parts of the brain.
How it works: Oxygen is transported to tissues through networks of blood vessels on oxygenated hemoglobin. When brain regions are active they call for more oxygen, increasing oxygenated hemoglobin and decreasing deoxygenated hemoglobin.
Advantages: High spatial resolution; one subject can be scanned repeatedly.
Disadvantages: Low temporal resolution; tasks limited by the space in the scanner.

Diffusion tensor imaging (DTI)
What it measures: Diffusion in white matter, which is more unidirectional than in gray matter due to barriers to diffusion (myelin sheath, cell membrane, etc.).
How it works: An MRI scan can be sensitized to the movement of water; the more directional the movement of water, the higher the white matter integrity.
Advantages: Determines the strength of white matter connections; can detect subtle abnormalities in white matter.
Disadvantages: Still a new technique, with little consensus on image processing.

Transcranial magnetic stimulation (TMS)
What it measures: Magnetic fields induce electrical fields, which change the membrane potential and temporarily block neuronal activity.
How it works: A coil is placed over the scalp and emits magnetic pulses; the cortex below is rendered temporarily inactive, akin to a temporary lesion.
Advantages: Can determine whether deficits are due to the dysfunction of a region; can confirm lesion studies.
Disadvantages: Poor control of the location of stimulation; only stimulates regions close to the surface.

Theories and Frameworks
Several broad frameworks have been proposed to explain learning and development from a cognitive neuroscience perspective. Johnson (2011) outlines three major frameworks for the study of developmental cognitive neuroscience:

1. Maturational approach: According to this perspective, the emerging ability to perform a cognitive operation is associated with a brain region "coming online." The maturational approach is closely related to the nativist perspective, according to which cognitive functions are thought to be "hard wired" in the brain. Preexisting skills simply unfold over time rather than being constructed over the course of learning and development.
2. Skill learning: The learning of cognitive, motor, and perceptual skills in childhood is hypothesized to be identical or parallel to skill acquisition in adults. Brain regions become specialized as a result of experience; patterns of brain activation for basic skill acquisition in children would be highly similar to patterns of complex skill acquisition in adults. This suggests that there is continuity of learning and skill acquisition from childhood to adulthood (Johnson 2011).
3. Interactive specialization: This so-called neuroconstructivist framework posits that the brain becomes increasingly specialized over time due to complex interactions between genes and the environment. Critically, functional brain specialization occurs through a process of activity-dependent interactions between brain regions rather than the shaping of single brain regions. This perspective emphasizes developmental shifts in networks of brain regions underlying developing cognitive functions (Johnson 2011).

Important Scientific Research and Open Questions

Since the emergence of the field of developmental cognitive neuroscience, several key findings have changed the way we understand learning and development. These findings illustrate how neuroscience can constrain and further our understanding of cognitive development and learning.

Brain Development
The brain is highly plastic and amenable to change throughout development.
Animal and human research converges to suggest that the brain is a dynamic structure that changes over time in response to environmental inputs, such as learning. MRI made it possible to assess structural brain changes in vivo, which was previously measurable only through postmortem studies of the animal brain. Further advances in imaging technology such as DTI have also been useful in measuring how the brain becomes rewired and how the strength of connections changes over developmental time. The brain is especially plastic in early childhood. Synaptogenesis refers to the process by which synaptic density increases from birth until 2–12 months of age (depending on the brain region), at which point synaptic pruning begins, whereby frequently used connections are maintained while unused connections are eliminated (Huttenlocher 2002). Synaptic density gradually decreases to adult levels by 2–3 years of age in visual processing regions, and later in multisensory regions associated with higher cognitive functions such as language and cognitive control (Huttenlocher 2002). More generally, there tend to be reductions in gray matter density and simultaneous increases in cortical white matter from childhood to adulthood (Giedd et al. 1999). Indeed, the brain has a protracted period of structural development and continues to change until young adulthood. Brain areas develop at different times; visual and auditory regions develop first, whereas the frontal lobe has a more extended period of development. Impulsive behavior in childhood and adolescence is thought to be a result of an immature prefrontal cortex; once these regions mature, impulsive behavior and other aspects of executive functioning improve (Johnson 2011).
Changes in brain structure, such as cortical thickness, are also related to measures of intelligence: there is a negative correlation between cortical thickness and intelligence in early childhood, but a positive correlation later in development (Shaw et al. 2006).

Specialization of Brain Function
Neural systems tend to become increasingly specialized over time, suggesting that the functional specificity of brain regions for particular cognitive functions is the outcome of a developmental trajectory. Younger children tend to show more diffuse brain activation for a particular task than both older children and adults (Casey et al. 1997). Specialized regions begin to appear early in development. Number and face processing are both examples of neural functional specificity that continue to become increasingly refined. Converging evidence suggests that there are distinct parietal circuits that become increasingly specialized for number magnitude processing and calculation (Ansari 2008). A frontoparietal shift is evident from childhood to adulthood for number processing: activation in the frontal cortex decreases with age while the parietal cortex becomes more engaged (Ansari 2008). Face processing also becomes highly localized over the course of development. The fusiform face area (FFA) in the occipital and temporal lobes is activated during face processing, but adults tend to have a larger area of activation in the FFA than children (Grill-Spector et al. 2008). By mid-childhood, the same regions of the cortex are reliably activated by faces; however, the FFA continues to become increasingly selective for faces over time (Grill-Spector et al. 2008). Grill-Spector et al. (2008) suggest that the specialization of the FFA may involve a sharpening of neural tuning to faces, a larger magnitude of response to faces, or a greater number of face-selective neurons in the FFA.
These streams of research demonstrate localization of function and increasing specialization with development and experience.

Effect of Learning and Training on Brain Structure and Function
There appear to be sensitive periods of brain development during which environmental stimulation is particularly important for skills to develop normally. Sensitive periods have been clearly demonstrated in the development of the visual system and in language acquisition. Determining how environmental factors and learning affect brain structure and function has important educational implications. With the availability of noninvasive imaging techniques, it is now possible to determine whether remediation programs or interventions have lasting effects on behavior and on brain structure and function. There has been some success in identifying changes in brain function in atypically developing children following intervention. Children with dyslexia are characterized by reading difficulties and poor phonological awareness. They tend to show reduced or absent brain activity for phonological processing in left temporo-parietal and frontal regions (Gabrieli 2009). Remedial programs demonstrate that improved readers show more typical patterns of brain activation and that they compensate for the weak performance of left-hemisphere reading systems with increased activation in the right hemisphere (Gabrieli 2009). The effects of remediation can be observed for up to a year, highlighting how the learning brain is highly plastic and can compensate for weaknesses.

Neural Networks and Development
Brain areas do not function in isolation, but need to communicate and interact with other regions. These interactions change over the course of learning and development (see the interactive specialization hypothesis above).
Thus, determining how neural networks develop is essential to understanding how the brain works and the mechanisms behind cognitive change. In a recent study, Fair et al. (2007) demonstrated how brain networks thought to subserve cognitive control functions such as inhibition and task switching develop through segregation (decreased short-range connections) and integration (increased long-range connections). This was observed through age-related changes in functional connectivity between brain regions in children, adolescents, and adults. The study illustrates how cognitive development may depend not only on changes in brain activity, but also on changes in the connectivity between regions.

Future Directions
Developmental cognitive neuroscience is a relatively new field, and there are still many unanswered questions. First, it is unclear whether structural and functional changes in the brain are a result of maturation, of learning, or a product of their interactions. Neurocognitive development is a complex process in which innumerable factors play a role, and parsing out the causes of cognitive development will be a formidable task. Second, little research has focused on training and its effect on behavioral and brain-related changes in both typically and atypically developing children. Few educational programs are solidly founded on knowledge of cognitive and brain development, and determining the mechanisms of behavioral and cognitive change will help direct the establishment and evaluation of educational programs (Johnson 2011). Third, very little is known about how brain structure and function are related to each other. Recent advances in imaging technology will make it easier to determine the relationship between white matter, cortical thickness, gray matter density, and other measures of brain structure and the corresponding functional activity. Fourth, the role of genetics in learning is still unclear.
Research on children with genetic abnormalities and on twins has provided some insight into how genetics may affect learning, cognition, and brain development. However, advances in neurogenetics will likely clarify the mechanisms behind neurological changes. Fifth, the development of neural networks at both structural and functional levels of analysis is still poorly understood; furthermore, how neural networks change as a result of learning is largely unknown. Future research will need to uncover the processes involved in the development of neural networks and how learning shapes them. Finally, developmental cognitive neuroscience is increasingly being applied to educational settings, yet there is still a substantial gap and a lack of knowledge transfer between labs and classrooms. There needs to be a better integration of educational practices and research. We now know that neural factors are important in learning and cognitive development; this knowledge will be crucial for developing targeted programs for children.

Cross-References
▶ Dyscalculia in Young Children: Cognitive and Neurological Bases
▶ Neural Network Assistants for Learning
▶ Neuroeducational Approaches on Learning
▶ Neuropsychology of Learning
▶ Vulnerability for Learning Disorders

References
Ansari, D. (2008). Effects of development and enculturation on number representation in the brain. Nature Reviews Neuroscience, 9, 278–291.
Casey, B. J., Cohen, J. D., Jezzard, P., Turner, R., Noll, D. C., Trainor, R. J., Giedd, J., Kaysen, D., Hertz-Pannier, L., & Rapoport, J. L. (1995). Activation of prefrontal cortex in children during a nonspatial working memory task with functional MRI. NeuroImage, 2, 221–229.
Casey, B. J., Trainor, R. J., Orendi, J. L., Schubert, A. B., Nystrom, L. E., Giedd, J. N., Castellanos, F. X., Haxby, J. V., Noll, D. C., Cohen, J. D., Forman, S. D., Dahl, R. E., & Rapoport, J. L. (1997).
A developmental functional MRI study of prefrontal activation during performance of a go-no-go task. Journal of Cognitive Neuroscience, 9, 835–847. Fair, D. A., Dosenbach, N. U. F., Church, J. A., Cohen, A. L., Brahmbhatt, S., Miezin, F. M., Barch, D. M., Raichle, M. E., Petersen, S. E., & Schlaggar, B. L. (2007). Development of distinct control networks through segregation and integration. PNAS, 104, 13507–13512. Gabrieli, J. D. E. (2009). Dyslexia: A new synergy between education and cognitive neuroscience. Science, 325, 280–283. Giedd, J. N., Blumenthal, J., Jeffries, N. O., Castellanos, F. X., Liu, H., Zijdenbos, A., Paus, T., Evans, A. C., & Rapoport, J. L. (1999). Brain development during childhood and adolescence: A longitudinal MRI study. Nature Neuroscience, 2, 861–863. Grill-Spector, K., Golarai, G., & Gabrieli, J. (2008). Developmental neuroimaging of the human ventral visual cortex. Trends in Cognitive Sciences, 12, 152–162. Huttenlocher, P. R. (2002). Neural plasticity: The effects of environment on the development of the cerebral cortex. Cambridge, MA: Harvard University Press. Johnson, M. H. (2011). Developmental cognitive neuroscience. Malden, MA: Wiley-Blackwell. Karmiloff-Smith, A. (1998). Development itself is the key to understanding developmental disorders. Trends in Cognitive Sciences, 2, 389–398. Niedermeyer, E. (2005). Maturation of the EEG: Development of waking and sleep patterns. In E. Niedermeyer & F. Lopes Da Silva (Eds.), Electroencephalography: Basic principles, clinical applications, and related fields (pp. 209–234). Philadelphia: Lippincott Williams & Wilkins. Shaw, P., Greenstein, D., Lerch, J., Clasen, L., Lenroot, R., Gogtay, N., Evans, A., Rapoport, J., & Giedd, J. (2006). Intellectual ability and cortical development in children and adolescents. Nature, 440, 676–679.
Developmental Dyscalculia ▶ Dyscalculia in Young Children: Cognitive and Neurological Bases Developmental Language Disorders ▶ Language-Based Learning Disabilities Developmental Learning ▶ Learning to Sing Like a Bird: Computational Developmental Mimicry Developmental Psychology of Music CLINT RANDLES1, JULIE DERGES KASTNER2 1 Center for Music Education Research, School of Music, University of South Florida, Tampa, FL, USA 2 College of Music, Michigan State University, East Lansing, MI, USA Synonyms Development of musical experience; Music learning over time; Musical maturation Definition Developmental psychology of music refers to the study of the cognitive and generative processes associated with making music, as these processes occur over time. Theoretical Background Theories of Musical Development Generative Processes. One of the theories in the developmental psychology of music, proposed by Mary Louise Serafine and based loosely on the work of Piaget, is termed Generative Processes. For Serafine, what the profession would call music theory – analyses of the structures of music – was merely thinking about music, rather than thinking in music. She found the latter to be a more valuable path of inquiry when examining musical development. Similar to Gordon’s music learning theory (described later in this entry), Serafine views musical development as best articulated in whole-part-whole progressions. Unlike Gordon’s theory, however, Serafine’s has not been as systematically tested as some of the other theories of the developmental psychology of music. Symbol System Theories Another of the theoretical areas in the developmental psychology of music is based on symbol systems. This category includes Harvard’s Project Zero and Bamberger’s Theory of Developmental Cumulation. Project Zero.
The crux of the work of Harvard’s Project Zero is Howard Gardner’s theory of multiple intelligences, which posits that intelligence can be manifested in a number of different ways, including musically. Another important aspect of this work is the exploration of the presumably inherent relationship between musical patterns and the affective life of the individual. Gardner’s work echoed that of Susanne Langer, whose philosophical writings proposed the idea of multiple symbolic forms. Gardner stressed that musical intelligence should be considered in terms of sound, not visual representations of sound. However, some researchers who have followed this line of inquiry have focused more on music intelligence as expressed in terms of musical notation. Bamberger’s Theory of Developmental Cumulation. The focus of Bamberger’s theory is that music is “multiple” and “cumulative.” This theory rejects the idea that musical development occurs along a unidirectional path, where older mental structures are replaced by newer mental structures. Rather, it suggests that multiple dimensions of musical understanding cumulatively build on each other. Music Learning Theory Another theory in the developmental psychology of music is Edwin Gordon’s Music Learning Theory. Music Learning Theory is perhaps the most logical of the theories of musical learning, and perhaps the most controversial. Within this theory, aptitudes for all of the dimensions of music making, most notably rhythm and melody, are possessed by all humans from before birth. The goal of musical instruction within this theory is to build musical skills sequentially from birth in an attempt to raise student ▶ aptitude before children reach age 9, when their cognitive development becomes essentially wired for the rest of their lives. Much research has been conducted to verify Gordon’s theory in numerous teaching and learning settings.
What makes the theory controversial is the rigidity in the methods used to build aptitude, the idea that the focus of music instruction should be to build aptitude, and the centrality of Western music sound structures in the practical application of the skill-building portion of the theory. The Developmental Spiral The final theory regarding the development of music learning is the Swanwick and Tillman Developmental Spiral. Central to this theory is the idea that there should be hierarchical educational objectives for music. These categories are: (1) skill acquisition, (2) recognizing and producing expressive gesture, (3) identifying and displaying the operation of the norms and deviations (form), and (4) aesthetic response. An important component of this theory, articulated by Swanwick, is that all artistic engagements, including music, include the following psychological elements: mastery, imitation, and imaginative play. This theory is often articulated as a spiral model, with each objective being revisited over the course of development. Relationships Among Theories There are a number of conceivable connections between the various theories of the developmental psychology of music learning. Gordon’s Music Learning Theory could be viewed as a more refined and tested version of Serafine’s Generative Processes. The Developmental Spiral seems broad enough in scope to account for all of the other theories. Seen in this light, Gordon’s Music Learning Theory would develop the first two categories of the Developmental Spiral, while the symbol system theories seem to address the last two categories. Musical Development Through Life Prenatal. Musical learning begins before birth, as the fetus interacts with the sounds of her mother’s (1) physical self: heartbeat, breathing, voice, and movements, and (2) emotional self: marked by the exchange of biochemicals associated with emotions.
Through this exchange, the fetus begins to process sound at around 16–20 weeks from the moment of conception. At this point in time, the child’s brain, hearing, balance, orientation, movement, and heart rate are all developing. Just as a child’s language exposure prepares her for a lifetime of using language, a child’s musical exposure in utero prepares her for a lifetime of interaction with music. Research on prenatal development falls into two categories: scientific-conservative and romantic-progressive. Work in the scientific-conservative area focuses on being objective, impersonal, and detached with regard to the research methods utilized and the reporting of data. Examples of this sort of work would be controlled experiments conducted to measure biological processes numerically in as noninvasive a way as possible. Conversely, work in the romantic-progressive area tends to focus on the subjective, the personal, the power of associations, and spirituality. Work in this area tends to exaggerate fetal abilities and project adult qualities onto the fetus as a way of explaining complex processes. Development from Birth to Age 5 Infants. Although infants begin life without an ability to comprehend musical structures, they are equipped to take in music from the environment through their excellent abilities to hear and discriminate between many musical aspects, initially regardless of the music of their culture. Infants can perceive differences between pitches of a semitone or less and are especially sensitive to differences in melodic contour. Melodic contour is believed to be connected to the pitch changes in mothers’ infant-directed speech. Rhythmically, infants under 1 year of age may not have a fixed sense of meter, and thus they may be able to hear differences in simple and complex meters found in many non-Western cultures.
Initial musical interactions, typically through infant-specific musical genres like lullabies and play songs, provide a way for parents to communicate with their nonverbal child, and infants’ ability to aurally discriminate minute musical differences provides a blank slate for them to become acculturated to the music of their culture. Toddlers and Preschoolers. Young children’s musical vocalizations become more diverse during toddlerhood, especially when they have opportunities to hear musical modeling from caregivers. Between 1 and 2 years, toddlers begin to explore their voices, create spontaneous songs, and perform musical phrases with repeated rhythmic or melodic ideas. Three-year-old children create longer spontaneous songs, typically weaving in elements from their musical culture. By ages 4 and 5, young children can describe intended emotions in their created songs. Overall, this period is marked by the use of vocalizations during times of play as young children create their own songs and chants, adapt existing songs, and create vocal sound effects. School Age Development The musical development of school-aged children encompasses two main periods of growth. First, musical development from age 5 until around 8 to 10 is concerned with becoming enculturated into the music of one’s culture and developing basic skills like singing voice and beat competency. Musical development in children from 10 to 18 is marked by a greater sense of personal choice and expression of identity. Developments from age 5 to 10. Children from around age 5 to 10 develop musical perceptions and skills consciously and unconsciously as they perceive the music of their culture. At around age 7, children’s perception of musical elements becomes more complex, and it is believed that they can begin to process multiple elements, like melody and harmony, simultaneously. In their skill development, much of the research has focused on singing.
Singing voice development progresses from a speech-like chant, to speech-like singing, to singing with an expanded pitch range and consistent accuracy. Research is inconclusive as to whether children sing more accurately while performing songs with lyrics or on a neutral syllable. By age 6 or 7, children’s vocal range extends to around an octave. By age 11, the majority of children exhibit singing competency, although there appears to be a greater number of competent girls than boys. This may be due to cultural influences that perceive singing as a “female” practice. Developments from age 10 to 20. As children age, they continue to develop more sophisticated musical skills. Some will begin to play musical instruments through the public schools, through private lessons, or through informal means. Vocally, boys’ and girls’ voices both change during puberty, although not at consistent rates. Also during puberty, and even beginning earlier, musical preference becomes an important part of musical development. Adolescents begin to look to their peers when making musical decisions, significantly increase their musical listening, and start to view music as a component of their personal and musical identities. Adult Development Musical development in adulthood is marked by individuals’ identifying and coming to terms with their interest in music and their perception of their musical abilities within their musical culture. For some adults who do not perceive themselves as having much musical skill, musical participation may diminish or cease. For other adults, musical participation involves making music professionally or as a hobby. These adults call upon the musical skills and knowledge developed in earlier years to decide the amount and type of their musical participation. As adults age, changes in physiology and cognition may make it difficult for them to perform at the same level as they did earlier in their adulthood.
Important Scientific Research and Open Questions A number of interesting questions remain. Wherever there are theories of learning, there should be empirical tests of those theories. In exploring musical development, more research is needed to understand the acquisition of aesthetic expression. When does it first occur? What causes it to occur? How does learning musical patterns and fundamental elements affect aesthetic expression? In terms of what is known about musical development across the lifespan, more research needs to focus on the development of rhythm and how movement affects this development. Finally, much of the research here has focused on the development of the individual. However, musical development occurs within specific contexts and through interactions with others. More research needs to be conducted to explore how musical interaction, specifically through actively making music with others, affects an individual’s musical development. Cross-References ▶ Shared Cognition ▶ Situated Cognition ▶ Social Construction of Learning ▶ Social Influence and the Emergence of Cultural Norms ▶ Social Interaction Dynamics in Supporting Learning ▶ Social Interaction Learning Styles ▶ Social Interactions and Effects on Learning ▶ Social Interactions and Learning ▶ Social Learning ▶ Social Learning Theory Further Reading Gembris, H. (2002). The development of musical abilities. In R. Colwell & C. Richardson (Eds.), The new handbook of research on music teaching and learning (pp. 487–508). New York: Oxford University Press. Parncutt, R. (2006). Prenatal development. In G. McPherson (Ed.), Child as musician: A handbook of musical development (pp. 1–31). New York: Oxford University Press. Runfola, M., & Swanwick, K. (2002). Developmental characteristics of music learners. In R. Colwell & C. Richardson (Eds.), The new handbook of research on music teaching and learning (pp. 373–397). New York: Oxford University Press. Trehub, S. E.
(2006). Infants as musical connoisseurs. In G. E. McPherson (Ed.), Child as musician: A handbook of musical development (pp. 33–49). New York: Oxford University Press. Welch, G. F. (2006). Singing and vocal development. In G. E. McPherson (Ed.), Child as musician: A handbook of musical development (pp. 311–329). New York: Oxford University Press. Developmental Robotics PIERRE-YVES OUDEYER INRIA, Talence, France Synonyms Autonomous mental development; Cognitive developmental robotics; Epigenetic robotics; Ontogenetic robotics Definition Developmental robotics is a scientific field which aims at studying the developmental mechanisms, architectures, and constraints that allow lifelong and open-ended learning of new skills and new knowledge in embodied machines. As in human children, learning is expected to be cumulative and of progressively increasing complexity, and to result from self-exploration of the world in combination with social interaction. The typical methodological approach consists in starting from theories of human and animal development elaborated in fields such as developmental psychology, neuroscience, developmental and evolutionary biology, and linguistics, and then formalizing and implementing them in robots, sometimes exploring extensions or variants of them. The experimentation of those models in robots allows researchers to confront them with reality, and as a consequence developmental robotics also provides feedback and novel hypotheses on theories of human and animal development. Theoretical Background Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is “out of the factory”?
What can it learn through natural social interactions with humans? These are the questions at the center of developmental robotics. Alan Turing, as well as a number of other pioneers of cybernetics, already formulated those questions and the general approach in 1950 (Turing 1950), but it is only since the end of the twentieth century that they began to be investigated systematically (Lungarella et al. 2003; Weng et al. 2001; Asada et al. 2009; Oudeyer 2010). Because the concept of an adaptive intelligent machine is central to developmental robotics, it has relationships with fields such as artificial intelligence, machine learning, cognitive robotics, or computational neuroscience. Yet, while it may reuse some of the techniques elaborated in these fields, it differs from them in many respects. It differs from classical artificial intelligence because it does not assume the capability of advanced symbolic reasoning and focuses on embodied and situated sensorimotor and social skills rather than on abstract symbolic problems. It differs from traditional machine learning because it targets task-independent self-determined learning rather than task-specific inference over “spoon fed human-edited sensori data” (Weng et al. 2001). It differs from cognitive robotics because it focuses on the processes that allow the formation of cognitive capabilities rather than on these capabilities themselves. It differs from computational neuroscience because it focuses on functional modeling of integrated architectures of development and learning. More generally, developmental robotics is uniquely characterized by the following three features: 1. It targets task-independent architectures and learning mechanisms, i.e., the machine/robot has to be able to learn new tasks that are unknown to the engineer. 2. It emphasizes open-ended development and lifelong learning, i.e., the capacity of an organism to continuously acquire novel skills.
This should not be understood as a capacity for learning “anything” or even “everything,” but just that the set of skills that is acquired can be infinitely extended at least in some (not all) directions. 3. The complexity of acquired knowledge and skills shall increase progressively (and the increase be controlled). Developmental robotics emerged at the crossroads of several research communities including embodied artificial intelligence, enactive and dynamical systems cognitive science, and connectionism. Starting from the essential idea that learning and development happen as the self-organized result of the dynamical interactions among brains, bodies, and their physical and social environment, and trying to understand how this self-organization can be harnessed to provide task-independent lifelong learning of skills of increasing complexity, developmental robotics strongly interacts with fields such as developmental psychology, developmental and cognitive neuroscience, developmental biology (embryology), evolutionary biology, and cognitive linguistics. As many of the theories coming from these sciences are verbal and/or descriptive, this implies a crucial formalization and computational modeling activity in developmental robotics. These computational models are then not only used as ways to explore how to build more versatile and adaptive machines, but also as a way to evaluate their coherence and possibly explore alternative explanations for understanding biological development (Oudeyer 2010). Important Scientific Research and Open Questions Main Research Directions Research in developmental robotics can be described as organized along three main axes: the domains of skills that shall be learnt by developmental robots, the mechanisms and constraints that allow for developmental learning, and the degree to which these mechanisms and constraints are made bio-mimetic or only functionally inspired.
Skill Domains Due to the general approach and methodology, developmental robotics projects typically focus on having robots develop the same types of skills as human infants. A first category being intensively investigated is the acquisition of sensorimotor skills. These include the discovery of one’s own body, including its structure and dynamics such as hand–eye coordination, locomotion, and interaction with objects as well as tool use, with a particular focus on the discovery and learning of affordances. A second category of skills targeted by developmental robots is social and linguistic skills: the acquisition of simple social behavioral games such as turn-taking, coordinated interaction, lexicons, syntax and grammar, and the grounding of these linguistic skills into sensorimotor skills (sometimes referred to as symbol grounding). In parallel, the acquisition of associated cognitive skills is being investigated, such as the emergence of the self/non-self distinction; the development of attentional capabilities, categorization systems, and higher-level representations of affordances or social constructs; and the emergence of values, empathy, or theories of mind. Mechanisms and Constraints The sensorimotor and social spaces in which humans and robots live are so large and complex that only a small part of potentially learnable skills can actually be explored and learnt within a lifetime. Thus, mechanisms and constraints are necessary to guide developmental organisms in their development and control the growth of complexity. There are several important families of these guiding mechanisms and constraints studied in developmental robotics, all inspired by human development: 1.
Motivational systems, generating internal reward signals that drive exploration and learning, which can be of two main types: (a) Extrinsic motivations push robots/organisms to maintain basic specific internal properties such as food and water level, physical integrity, or light (e.g., in phototropic systems); (b) Intrinsic motivations push robots to search for novelty, challenge, compression, or learning progress per se, thus generating what is sometimes called curiosity-driven learning and exploration, or alternatively active learning and exploration; 2. Social guidance: as humans learn a lot by interacting with their peers, developmental robotics investigates mechanisms which can allow robots to participate in human-like social interaction. By perceiving and interpreting social cues, this may allow robots both to learn from humans (through diverse means such as imitation, emulation, stimulus enhancement, demonstration, etc.) and to trigger natural human pedagogy. Thus, the social acceptance of developmental robots is also investigated; 3. Statistical inference biases and cumulative knowledge/skill reuse: biases characterizing both representations/encodings and inference mechanisms can typically allow considerable improvement of the efficiency of learning and are thus studied. Related to this, mechanisms allowing robots to infer new knowledge and acquire new skills by reusing previously learnt structures are also an essential field of study; 4. The properties of embodiment, including geometry, materials, or innate motor primitives/synergies often encoded as dynamical systems, can considerably simplify the acquisition of sensorimotor or social skills; this is sometimes referred to as morphological computation. The interaction of these constraints with other constraints is an important axis of investigation; 5. Maturational constraints: In human infants, both the body and the neural system grow progressively, rather than being full-fledged already at birth.
This implies, for example, that new degrees of freedom, as well as increases of the volume and resolution of available sensorimotor signals, may appear as learning and development unfold. Transposing these mechanisms into developmental robots, and understanding how they may hinder or, on the contrary, ease the acquisition of novel complex skills, is a central question in developmental robotics. From Bio-mimetic Development to Functional Inspiration While most developmental robotics projects strongly interact with theories of animal and human development, the degrees of similarity and inspiration between identified biological mechanisms and their counterparts in robots, as well as the abstraction levels of modeling, may vary a lot. While some projects aim at modeling precisely both the function and the biological implementation (neural or morphological models), such as in neurorobotics, other projects only focus on functional modeling of the mechanisms and constraints described above, and might, for example, reuse in their architectures techniques coming from applied mathematics or engineering fields. Open Questions As developmental robotics is a relatively novel and at the same time very ambitious research field, many fundamental open challenges remain to be solved. First of all, existing techniques are far from allowing real-world high-dimensional robots to learn an open-ended repertoire of increasingly complex skills over a lifetime period. High-dimensional continuous sensorimotor spaces are a major obstacle to be solved. Lifelong cumulative learning is another one. Actually, no experiments lasting more than a few days have been set up so far, which contrasts severely with the time period needed by human infants to learn basic sensorimotor skills while equipped with brains and morphologies which are tremendously more powerful than existing computational mechanisms.
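The intrinsic-motivation mechanism described above, in which an agent is rewarded for learning progress rather than for task success, can be sketched in a few lines of code. The following is a minimal toy illustration, not the architecture of any particular published system; the class, parameters, and activity names are invented for the example. The agent keeps a history of its prediction errors for each candidate activity and prefers the activity in which that error has recently decreased the most:

```python
class LearningProgressExplorer:
    """Toy sketch of learning-progress-based intrinsic motivation."""

    def __init__(self, activities, window=5):
        # Per-activity history of prediction errors (activity names are arbitrary).
        self.errors = {a: [] for a in activities}
        self.window = window  # number of recent errors averaged per comparison

    def progress(self, activity):
        """Learning progress = recent decrease in mean prediction error."""
        errs = self.errors[activity]
        if len(errs) < 2 * self.window:
            return float("inf")  # barely explored activities look maximally promising
        older = sum(errs[-2 * self.window:-self.window]) / self.window
        recent = sum(errs[-self.window:]) / self.window
        return older - recent  # positive when the error is shrinking

    def choose(self):
        # Select the activity whose prediction error is improving fastest.
        return max(self.errors, key=self.progress)

    def record(self, activity, error):
        # Store the prediction error observed after practicing the activity.
        self.errors[activity].append(error)
```

For instance, if one activity's prediction error steadily shrinks while another's stays flat (an unlearnable, noisy activity), only the first yields positive progress, so the explorer concentrates on the learnable activity and ignores both mastered skills (error already near zero and no longer decreasing) and hopeless ones.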
Among the strategies to explore in order to progress toward this target, the interaction between the mechanisms and constraints described in the previous section shall be investigated more systematically. Indeed, they have so far mainly been studied in isolation. For example, the interaction of intrinsically motivated learning and socially guided learning, possibly constrained by maturation, is an essential issue to be investigated. Another important challenge is to allow robots to perceive, interpret, and leverage the diversity of multimodal social cues provided by non-engineer humans during human–robot interaction. These capacities are so far too limited to allow efficient general-purpose teaching from humans. A fundamental scientific issue to be understood and resolved, which applies equally to human development, is how compositionality, functional hierarchies, primitives, and modularity, at all levels of sensorimotor and social structures, can be formed and leveraged during development. This is deeply linked with the problem of the emergence of symbols, sometimes referred to as the “symbol grounding problem” when it comes to language acquisition. Actually, the very existence of and need for symbols in the brain is actively questioned, and alternative concepts, still allowing for compositionality and functional hierarchies, are being investigated. During biological epigenesis, morphology is not fixed but rather develops in constant interaction with the development of sensorimotor and social skills. The development of morphology poses obvious practical problems with robots, but it may be a crucial mechanism that should be further explored, at least in simulation, such as in morphogenetic robotics. Similarly, in biology, developmental mechanisms (operating at the ontogenetic time scale) strongly interact with evolutionary mechanisms (operating at the phylogenetic timescale), as shown in the flourishing “evo-devo” scientific literature (Müller 2007).
However, the interaction of those mechanisms in artificial organisms, developmental robots in particular, is still vastly understudied. The interaction of evolutionary mechanisms, unfolding morphologies, and developing sensorimotor and social skills will thus be a highly stimulating topic for the future of developmental robotics. Cross-References ▶ Active Learning ▶ Affordances ▶ Artificial Learning and Machine Learning ▶ Cognitive Artifacts and Developmental Learning in a Humanoid Robot ▶ Cognitive Robotics ▶ Curiosity and Exploration ▶ Development and Learning ▶ Human–Robot Interaction ▶ Imitation Learning of Robot ▶ Learning Algorithms ▶ Motor Schemas in Robot Learning ▶ Play, Exploration, and Learning ▶ Robot Learning ▶ Robot Learning from Demonstration ▶ Robot Learning via Human–Robot Interaction ▶ Robot Learning Via Human–Robot Interaction: The Future of Computer Programming References Asada, M., Hosoda, K., Kuniyoshi, Y., Ishiguro, H., Inui, T., Yoshikawa, Y., Ogino, M., & Yoshida, C. (2009). Cognitive developmental robotics: A survey. IEEE Transactions on Autonomous Mental Development, 1(1), 12–34. Lungarella, M., Metta, G., Pfeifer, R., & Sandini, G. (2003). Developmental robotics: A survey. Connection Science, 15, 151–190. Müller, G. B. (2007). Evo-devo: Extending the evolutionary synthesis. Nature Reviews Genetics, 8, 943–949. Oudeyer, P.-Y. (2010). On the impact of robotics in behavioral and cognitive sciences: From insect navigation to human cognitive development. IEEE Transactions on Autonomous Mental Development, 2(1), 2–16. Turing, A. M. (1950). Computing machinery and intelligence. Mind, LIX(236), 433–460. Weng, J., McClelland, J., Pentland, A., Sporns, O., Stockman, I., Sur, M., & Thelen, E. (2001). Autonomous mental development by robots and animals. Science, 291, 599–600. 
Developmental Science ▶ Developmental Cognitive Neuroscience and Learning Developmental Teaching A school education approach based on the ideas of cultural-historical psychology and, in particular, on the theory of learning activity (D. Elkonin, V. Davydov). The very name recalls the famous statement by L. Vygotsky about the leading role of teaching in mental development. Developments ▶ Trajectories of Participation - Temporality and Learning Dewey, John (1859–1952) MICHAEL JACKSON Department of Government and International Relations, University of Sydney, Sydney, NSW, Australia Life Dates John Dewey (October 20, 1859–June 1, 1952) was born in Burlington, Vermont, to a family of modest means. He graduated from the University of Vermont. He taught in a high school for 3 years before attending Johns Hopkins University, where he obtained a Ph.D. with a study of Immanuel Kant. He taught at the University of Michigan before joining the University of Chicago in its earliest days. In 1904, he went to Columbia University and its Teachers College, where he remained. His bibliography runs to more than twenty monographs. His collected works total 37 volumes. He was also actively engaged with many causes of his times and wrote in aid of them, too. As befits one of the founders of pragmatism, he had the reputation of a hard bargainer in money matters. Theoretical Background While his published works range over topics as diverse as art, democracy, logic, ethics, and nature, they always come back to education. Inspired by Georg Hegel, he sought synthesis. He is often linked to William James and Charles Sanders Peirce as a founder of pragmatism, though Dewey used the term “instrumentalism” to describe his epistemology. More than half a century before postmodernism, Dewey set out a philosophy without a privileged foundation.
He is likewise credited with being one of the progenitors of functional psychology and served as president of the American Psychological Association in its early days. One of the tenets of functional psychology was the principle that the environment affects the mind. The social context matters in understanding a mind; indeed, it is the path to understanding the mind. Some of his most widely read books were My Pedagogic Creed (1897), The Child and the Curriculum (1902), Democracy and Education (1916), and Experience and Education (1938). He contributed articles to all manner of popular and scholarly magazines and journals and had a high public profile throughout his life, making him a public intellectual in today’s terms. He always considered himself a philosopher more than a psychologist or an educator. The Journal of Philosophy was founded for the purpose of discussing his work. It remains one of the most important journals for professional philosophers. All through his work, popular and professional, the major concern was practical: how should we live? To answer that question, he studied children, quickly concluding that children shape their environment as readily as it shapes them. They were not passive pieces of white paper upon which to write. No doubt, observation of his own children paved the way for this conclusion. He always stressed the activity of the learner in grasping, in making, knowledge, rejecting what he called the spectator approach to teaching, which makes the learner into an audience for the possessor of knowledge – the teacher. Were he to observe current practice in university teaching, he would see many presentations, both in the classroom and on Web sites, that make students into an audience. The foundation of his philosophy was that everything had to be tested against reality. Human history and social institutions embody the knowledge gained from that process of evaluation. 
Mistakes will be made, yes, but over time they will be revealed by further testing. This empirical, experimental attitude is what he saw in democracy. To Dewey, democracy was a mentality, not a set of rules about voting. In contemporary terms his focus was on civil society rather than government. He attributed an important role to the democratic citizenry in directing government, and this brought him into conflict with Walter Lippmann and others who embraced a plebiscitary model of democracy. In that model, elections pass judgment on what has been done in the name of the electorate, whereas Dewey saw the democratic process as the means to select which programs, practices, and policies would be pursued. Dewey played a vital role in the creation of the Laboratory School at the University of Chicago in 1896, a school that continues to this day. In it, the effects of teaching methods were observed, evaluated, and reported. His book School and Society (1900) is one such report. He was also one of the founders of the New School for Social Research in New York City in 1919, which had enormous influence on the development of the social sciences after World War II. His influence, gained both through his books and articles and through his extensive service on public committees, led to a role in the foundation of Bennington College, and thereafter he served on its Board of Trustees. But perhaps his greatest and most lasting achievement was during his years at Columbia, where he helped make teaching a profession with its own discipline and body of knowledge. He advocated experiential learning, guided by an educated and trained teacher. Such contemporary practices as problem-based learning and its kin are sometimes traced back to Dewey. His willingness to embrace good causes saw him direct the so-called Dewey Commission in Mexico City in 1937, which exonerated Leon Trotsky of Stalin’s many charges. 
He championed the rights of women in both word and deed, writing many pieces on the subject and marching in parades. (Melvil Dewey, who created the Dewey Decimal System for library shelving, was no relation to John Dewey, though they were contemporaries.) Contribution(s) to the Field of Learning While a major figure in his own times with considerable influence on succeeding generations, his books have not aged well. The intelligence and goodwill of Dewey shine through the pages of these books, but they are discursive and obscure. They are seldom read outside the history of education. Despite his role in the institutionalization of psychology in the United States, his intellectual impact on psychology today is not apparent. On the other hand, the institutions he founded or helped found, like the Columbia Teachers College, the American Psychological Association, the Journal of Philosophy, and the Chicago Laboratory School, have endured, thrived, and changed. Cross-References ▶ Functional Learning ▶ Learning Environment(s) ▶ Pragmatic Reasoning Schemas ▶ Psychology of Learning Further Reading Hickman, L., & Alexander, T. (Eds.). (1998). The essential Dewey (Vols. 1 and 2). Bloomington: Indiana University Press. McDermott, J. J. (Ed.). (1981). The philosophy of John Dewey. Chicago: University of Chicago Press. The John Dewey Society. (2010). Retrieved 3 April 2011, from http://doe.concordia.ca/jds/ University of Chicago Laboratory School. (2010). Retrieved 3 April 2011, from http://www.ucls.uchicago.edu/ Diagnosis of Asperger’s Syndrome MICHAEL FITZGERALD Department of Psychiatry, Trinity College Dublin (TCD), Dublin 2, Ireland Synonyms Asperger’s disorder; Asperger’s syndrome; Autistic psychopathy; Criminal autistic psychopathy; High functioning autism Definition Asperger’s syndrome (Autistic Psychopathy) was first described in 1938 by Hans Asperger (Asperger 1938, 1944). He gave a fuller account in 1944 and called the condition Autistic Psychopathy. 
The name Asperger’s syndrome was given to the condition in 1981 by Lorna Wing (Wing 1981). She noted: 1. Lack of normal interest and pleasure in people around them 2. A significant reduction in shared interest 3. A significant reduction in the wish to communicate verbally and non-verbally 4. A delay in speech acquisition and impoverishment of content 5. No imaginative play, or imaginative play confined to one or two rigid patterns Van Krevelen and Kuipers (1962, p. 22) pointed out that persons with this disorder showed “personal unapproachability with problems distinguishing between dream and reality.” They also noted that “the eye roams, evades, and is turned inwards” and that “the speech is stilted; it is not addressed to the person but into empty space... it sounds false owing to exaggerated inflection.” In schools, persons with Asperger’s syndrome are loners, often bullied, but can show islets of originality with narrow interests (Fitzgerald 2004). Occasionally, they can show great ability in narrow areas, for example, science or mathematics (Fitzgerald and James 2007). Examples would be Isaac Newton (Fitzgerald 1999), Albert Einstein (Harpur et al. 2004), and Andy Warhol (Harpur et al. 2004). Theoretical Background In terms of differential diagnosis, Asperger’s syndrome is a developmental disorder and not an illness. In the past it was confused with schizophrenia, Obsessive Compulsive Personality Disorder (which can co-occur), Attention Deficit Hyperactivity Disorder (which can co-occur), Avoidant Personality Disorder, social phobia, and schizoid personality in childhood – a condition in which there is some overlap with Asperger’s syndrome. For persons with Asperger’s syndrome who commit serious crime, the diagnosis of Criminal Autistic Psychopathy (Fitzgerald 2010) has been outlined. 
This takes up Asperger’s original diagnosis of Autistic Psychopathy and his awareness of the callous, unemotional traits that some persons with this condition showed. Within this group of disorders there is a lesser variant called Pervasive Developmental Disorder Not Otherwise Specified (DSM-IV-TR, APA 2000), which does not meet the full criteria for Asperger’s syndrome. It requires impairment in reciprocal social interaction associated with impairment in communication skills or with stereotyped behavior, interests, or activities. Asperger’s syndrome lies on the very wide spectrum of autism. The DSM-IV-TR (APA 2000) criteria for Asperger’s syndrome remain controversial. Twachtman-Cullen (2001, p. 17) identifies the following difficulties with the DSM-IV criteria for Asperger’s syndrome: 1. The DSM-IV definition uses the term “clinically significant general delay in language,” which is open to different interpretations. 2. The milestone of single words at age 2 years, used as an example of normal language development, actually represents a significant expressive language delay. 3. Use of communicative phrases at age 3 years involves not just saying a sequence of words but also communication, meaning the appropriate use of language for social purposes, which is frequently not normal in youngsters with Asperger’s syndrome, even if they speak in phrases or sentences. The ICD-10 criteria state that Asperger’s syndrome is “a disorder of uncertain nosological validity” that “differs from autism primarily in that there is no general delay or retardation in language or cognitive development” (Rausch et al. 2008, p. 23). There has been a great deal of research on separating Asperger’s syndrome from autism. It has never been possible to categorically and clinically separate these two conditions (Mayes et al. 2001). At this point in time, the most accurate diagnosis is Autism Spectrum Disorder, which should include Asperger’s syndrome. 
Nevertheless, throughout the world huge numbers of persons have been given the diagnosis of Asperger’s syndrome, which they are happy with and which describes their disability. There are current discussions about deleting Asperger’s syndrome from the American Psychiatric Association’s DSM-V classification, which is due for publication in the next few years. This would not be acceptable because it would cause enormous distress to persons who are satisfied with their diagnosis of Asperger’s syndrome. The change would not assist these people in any way. The use of Asperger’s syndrome as one of the synonyms for Autism Spectrum Disorder would be a satisfactory solution. Hippler and Klicpera (2003, p. 291) point out that a study of “74 clinical case records of children with Autistic Psychopathy (Asperger’s syndrome) diagnosed by Asperger... revealed (at follow-up) that 68% of the sample did meet ICD-10 (International Statistical Classification of Diseases) criteria for Asperger’s syndrome, although they construed that 25% fulfilled the diagnostic criteria for autism.” While it has not been possible scientifically to separate Asperger’s syndrome and High Functioning Autism from a clinical perspective, Luke Tsai (2001, p. 5) has identified the following features as being more pronounced in Asperger’s syndrome than in High Functioning Autism: 1. Preoccupation with one or more stereotyped and restricted patterns of interest 2. Talking and reading about violence and death 3. Condescension in behavior 4. Pedantic speech 5. Moodiness and easy frustration, with tantrums This would support Hans Asperger’s (1979, p. 
45) statement that “it has become obvious that the condition described by myself and Leo Kanner (autism) concerned basically different types, yet in some respects there is complete agreement.” Important Scientific Research and Open Questions Asperger’s syndrome is a developmental disorder which begins with abnormal cell migration in utero in the developing brain. Heritability is estimated at 93%. It involves theory-of-mind and empathy difficulties and abnormal neural connectivity, with excess local connections and reduced long-range connectivity. Future research needs to continue efforts to delineate Asperger’s syndrome and the wider ASD spectrum and to subtype the condition. In the long term, genotyping may be able to help in this regard. Rausch et al. (2008, p. 29) note that “future diagnostic language may better distinguish the markedly impaired social interaction of autism from that of Asperger’s cases, where such is not present. In this case the term ‘marked’ could be elaborated to differentiate from the more subtle speech stereotype, idiosyncrasy, or difficulties with conversation maintenance prior to three years described in Asperger’s.” They also point out that “advances in endophenotypy (will) come with advances in diagnostic distinction, but there is also great promise from the potential for advances with psychological and biological phenotyping e.g., performance on theory of mind tasks and facial emotion recognition, coupled with studies of brain activation studied during such tasks” (p. 54). Cross-References ▶ Intact Implicit Learning in Autism ▶ Learning and Consolidation in Autism References American Psychiatric Association. (2000). DSM-IV-TR. Washington, DC: American Psychiatric Association. Asperger, H. (1938). Das psychisch abnormale Kind. Wiener Klinische Wochenschrift, 1314–1317. Asperger, H. (1944). Die autistischen Psychopathen im Kindesalter. Archiv für Psychiatrie und Nervenkrankheiten, 117, 76–136. Asperger, H. (1979). Problems of infantile autism. 
Communication, 13, 45–52. Fitzgerald, M. (1999). Did Isaac Newton have Asperger’s disorder? European Child and Adolescent Psychiatry Journal, 8, 244. Fitzgerald, M. (2004). Autism and creativity: Is there a link between autism in men and exceptional ability? New York: Brunner-Routledge. Fitzgerald, M. (2010). Young, violent & dangerous to know. New York: Nova Science. Fitzgerald, M., & James, I. (2007). The mind of the mathematician. Baltimore: Johns Hopkins University Press. Harpur, J., Lawlor, M., & Fitzgerald, M. (2004). Succeeding in college with Asperger’s syndrome. London: Jessica Kingsley. Hippler, K., & Klicpera, C. (2003). A retrospective analysis of the clinical case records of “Autistic Psychopaths” diagnosed by Hans Asperger and his team at the University Children’s Hospital Vienna. Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 358(1430), 291–301. Mayes, S., Calhoun, S., & Crites, D. (2001). Does DSM-IV Asperger’s disorder exist? Journal of Abnormal Child Psychology, 29(3), 263–271. Mesibov, G., Shea, V., & Adams, L. (2001). Understanding Asperger’s syndrome and high functioning autism. Lancaster: Kluwer/Plenum. Rausch, J., Johnson, M., & Casanova, M. (2008). Asperger’s disorder. New York: Informa. Tsai, L. (2001). From autism to Asperger’s disorder. Paper presented at the American Academy of Child and Adolescent Psychiatry meeting, Hawaii. Washington, DC: American Academy of Child and Adolescent Psychiatry. Twachtman-Cullen, D. (2001). In G. Mesibov, V. Shea, & L. Adams, Understanding Asperger’s syndrome and high functioning autism. Lancaster: Kluwer/Plenum. Van Krevelen, D. A., & Kuipers, C. (1962). The psychopathology of autistic psychopathy. Acta Paedopsychiatrica, 29, 22–31. WHO. (2004). International classification of diseases (Vol. 10). Geneva: World Health Organization. Wing, L. (1981). Asperger’s syndrome – A clinical account. Psychological Medicine, 11, 115–129. Diagnosis of Learning NORBERT M. 
SEEL University of Freiburg, Freiburg, Germany Synonyms Assessment; Evaluation of learning Definition The word diagnosis is derived through Latin from the Greek word diagignṓskein and means to discern or distinguish. Accordingly, a central feature of diagnosis is to distinguish between objects of interest and their features. Basically, diagnosis refers both to a critical analysis of the nature of something and to the conclusion reached by that analysis. In the human sciences, such as medicine or psychology, the term diagnosis refers to collecting and interpreting information with the aim of determining which nonobservable states can be considered the “true state of nature.” Often it is not possible to influence or determine what state of nature will occur; what one can do is collect and process data in order to arrive at a probabilistic estimation of the true state. This also holds true for learning, which can be considered a change in discrete mental states that are basically not observable. Actually, the true mental states of learners will never be directly accessible, but must be inferred from data gained with the help of particular techniques of measurement and on the basis of theoretical assumptions and hypotheses about the students’ mental states. Theoretical Background Speaking about learning, discussed here in terms of the acquisition, storage, and retrieval of information, is speaking about a theoretical construct, i.e., something which cannot be observed but can be assessed on the basis of observable behavior or verbal statements made in the course of solving particular tasks. The basic assumption of diagnosis is that there are sources of information which provide data D that might be used to modify the initial hypotheses about possible states of nature (e.g., of learning). The processing of D succeeds in transforming the prior probabilities p0(Hi) into posterior probabilities pm(Hi). Remaining within the example of medical diagnosis, instances of D would be blood pressure, appetite, and specific aches, as well as data from interviews, tests, and specific examinations. The more data the physician obtains, the more likely a correct diagnosis will result. Basically, the same holds true for the field of psychology and education, where diagnosis of learning consists in proceeding from a relatively diffuse prior probability distribution p0(Hi) to a more informative posterior probability distribution pm(Hi). Clearly, this transformation from p0(Hi) to pm(Hi) is not unconstrained, due to learner characteristics (abilities and skills), organizational conditions, and the curriculum. The human sciences have at their disposal a remarkably wide and varied pool of methods to assess learning and cognition, ranging from naturalistic observation in clinical situations to computer simulation, from traditional experimental methods to linguistic analyses, from recording electrical impulses of the brain to measuring reaction times or verbal protocols (Simon and Kaplan 1989). In psychology and education, functional approaches to diagnosis are usually the gold standard. They include (a) experimental methods based on the systematic observation of the learners’ overt behavior when operating with given tasks, (b) protocol analyses including verbal reports, “think-aloud” data, and content analysis of linguistic expressions, and (c) computer-based modeling and simulation. In accordance with psychometric conventions, a basic distinction can be made between (a) responsive (reactive) diagnosis and (b) constructive (nonreactive) agent-based diagnosis. Responsive (Reactive) Diagnosis Since the nineteenth century, psychologists have thought that human reaction time provides clues about mental processes and the organization of the mind. 
Accordingly, reaction times and their changes under different experimental manipulations have been used as evidence for testing hypotheses about processes and structures of the human mind. Measurements of reaction time have been used, e.g., to distinguish between serial and parallel processing of information. Furthermore, this methodology has been applied to research on attention control, information flow, and the acquisition of skills (Marinelli et al. 2010; Robertson 2007). In accordance with the basic assumption that every mental state corresponds with a specific physical state of the brain, cognitive science also operates with psychophysical methods (such as EEG or Positron Emission Tomography, PET) to assess mental states. Since the 1990s, numerous studies have been conducted using PET methodologies (e.g., Blaxton et al. 1996; Halsband et al. 1998). Another tradition of responsive diagnosis consists in the use of standardized tests and questionnaires, most often within the realm of experimental research. A standardized test or questionnaire is an assessment administered under standardized or controlled conditions that specify where, when, how, and for how long individuals may respond to the questions or “prompts” of the test. The advantages of standardized tests and questionnaires are seen in their objectivity and reliability, whereas the most understated risk for a valid interpretation is the error produced by the respondent in accomplishing the test items. Even when the respondent is well intentioned and cooperative, several errors, such as awareness of being tested and response sets, may reduce the reliability and validity of the measure. However, what is considered a risk for responsive measurements can be considered an advantage for the nonreactive procedures of knowledge diagnosis. Constructive Methods of Diagnosis Comparable with the problems of measuring theoretical constructs in general, the assessment of learning is associated with problems of reliability and validity of the applicable psychometric methods. Virtually all learning takes place through talk and text. Accordingly, discourse is an important mediator of learning, and language can be considered one of the most important windows to the mind. Verbal communication is what subjects regularly use to mediate their ideas, knowledge, thoughts, and feelings. Accordingly, various methods of verbalization play a central role in the diagnosis of individual knowledge. For example, introspection and introspective data have been used as a means of self-reflection in everyday life and in science since the times of Wundt, Titchener, and other pioneers of modern psychology. Gillespie (2007) points out that the idea that thought is a self-reflective internal dialogue goes back at least to Plato and can be found until today as a method of self-reflection that can be used for assessing mental states. Closely related to this idea is the use of verbalizations and think-aloud protocols to assess mental states. Besides this “direct communication” of thoughts and ideas by means of verbalizations, more extensive verbal explanations, inferences, hypotheses, speculations, and justifications are considered effective means to assess knowledge. In spite of their indisputable ecological validity, verbal data and protocols have been criticized by some authors (e.g., Nisbett and Wilson 1977) for their deficiencies with regard to psychometric standards of reliability and validity. However, Chi (1997) and Ericsson and Simon (1993) have developed practical guides for quantifying verbal data and controlling their reliability. Another class of constructive methods of diagnosis operates with a mixture of verbal and graphical representations of knowledge. Concept maps and semantic networks are probably the best representatives of this kind of knowledge assessment. Semantic networks make it possible to map the relevant structure of knowledge at one time and to suppress irrelevant details. They make the relevant objects and relations explicit and expose natural constraints with regard to “causality,” i.e., how an object or relation may influence another one. In recent years, semantic networks and conceptual graphs have emerged as important and computable tools for diagnosing knowledge about the world (see Ifenthaler et al. 2010 for an overview). Important Scientific Research and Open Questions Problems of Reliability In test theory, reliability is defined as the degree to which measures do not contain errors: Measurement errors impair the reliability, and thus the generalizability, of measures resulting from a single measurement of an individual. Corresponding with Cronbach’s generalizability theory, errors of measurement are indicative for the control of the situational dependency of measures as well as for the control of the temporal stability of measures. The error variance of a psychometric method (Ω, A, P, {X_i | i ∈ I}, Q) with a distribution B_l from Q is a function σ²_{lE}: {X_i | i ∈ I} → ℝ⁺, with σ²_{lE}(X_i) = σ²(E_l X_i) = E(E_l X_i)². The reliability is then a function r_{lXX}: {X_i | i ∈ I} → [0, 1], which determines the true score T_l X_i through r_{lXX}(X_i) = σ²(T_l X_i)/σ²(X_i). Thus, the reliability is defined as the proportion of the variance of the true scores to the variance of the test data. Actually, this definition of reliability refers to a particular measurement with a test, and under the assumption of consistency, various measurements with the same test should not change the value of this quotient. This presupposed invariance is grounded in the individual-specific approach, which assigns a constant true score τ_ip and an individual error variable ε_ip (with a fixed distribution) to every person for every measurement X_ip. 
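To make the classical definition concrete, the quotient r(X) = σ²(T)/σ²(X) can be illustrated with a small simulation (a sketch with arbitrary, hypothetical distribution parameters, not part of the entry): observed scores are generated as true score plus error, and the correlation between two parallel measurements approximates the reliability.

```python
import random
import statistics

random.seed(1)

n = 20000
# True scores T with variance 100; observed X = T + E with error variance 25,
# so the theoretical reliability is sigma^2(T) / sigma^2(X) = 100 / 125 = 0.8.
true_scores = [random.gauss(50, 10) for _ in range(n)]
test_a = [t + random.gauss(0, 5) for t in true_scores]  # first parallel test
test_b = [t + random.gauss(0, 5) for t in true_scores]  # second parallel test

# Reliability as the proportion of true-score variance in observed variance
reliability = statistics.pvariance(true_scores) / statistics.pvariance(test_a)

# The correlation between parallel tests estimates the same quantity.
mean_a, mean_b = statistics.fmean(test_a), statistics.fmean(test_b)
cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(test_a, test_b)) / n
corr = cov / (statistics.pstdev(test_a) * statistics.pstdev(test_b))

print(f"var(T)/var(X) = {reliability:.2f}, parallel-test correlation = {corr:.2f}")
```

Both quantities converge on the same value, which is exactly the sense in which repeated measurements with the same test should not change the quotient.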
A corresponding application of this reliability concept basically presupposes parallel measurements or repetitions of measurement aimed at the comparison of individuals across different situations. Psychometric methods basically assume that a test person demonstrates an individual pattern of stable behaviors in similar situations, whereas the person’s behavioral pattern varies across different situations. However, in measuring learning processes and outcomes, substantial changes of behavior or knowledge serve as the indicator of learning when tests are administered to the same individuals in different situations at different times. Therefore, we need a measurement of change, which demands appropriate test models. Tack (1980) has developed the probabilistic foundations of a test theory for the measurement of change by differentiating between a person-related p-reliability and a situation-related s-change reliability, with the aim of providing evidence for effects of individual-specific conditions. 1. The n-change value regarding m is a consistent family of B_mn-measurable random variables: Δ_{n/m} X_i = T_{mn} X_i − T_m X_i. 2. The n-change variance regarding m is a function σ²_{n/mΔ}: {X_i | i ∈ I} → ℝ⁺, for which σ²_{n/mΔ}(X_i) = σ²(Δ_{n/m} X_i) = E(Δ_{n/m} X_i)². 3. The n-change reliability (change sensitivity) regarding m is a function r_{n/mΔΔ}: {X_i | i ∈ I} → [0, 1], for which r_{n/mΔΔ}(X_i) = σ²_{n/mΔ}(X_i)/σ²(X_i). When we now define m = p and n = s, so that a system B_p of individuals and a system B_s of situations result (with B_p, B_s ∈ Q), then the s-change reliability regarding p is exactly the p × s-reliability minus the p-reliability: r_{s/pΔΔ}(X_i) = r_{spXX}(X_i) − r_{pXX}(X_i). Furthermore, the s-change reliability regarding p can be at most one minus the p-reliability: r_{s/pΔΔ}(X_i) ≤ 1 − r_{pXX}(X_i). That is, the usual (person-related) p-reliability determines the upper limit of the possible situation-dependent change reliability. 
Finally, the covariance between the s-change scores regarding p of different test-score variables corresponds to the difference of the covariances of the p × s-true scores and the p-true scores: σ(Δ_{s/p} X_j, Δ_{s/p} X_k) = σ(T_{sp} X_j, T_{sp} X_k) − σ(T_p X_j, T_p X_k). Discussed in terms of analysis of variance, the s-error variance corresponds to the variance within the examined situations, which are characterized through the realization of a possible variation of conditions, and the variance of the s-true scores corresponds to the variance between the situations. Thus, the s-change reliability regarding p depends upon B_s, i.e., it exists only for a particular kind of situation-dependent variation. On the whole, this is a practicable procedure for determining reliability in the context of change measurement as it is necessarily involved in research on learning. Problems of Validity Undoubtedly, the question of whether a psychometric method really assesses the person variable of interest is one of the most central issues of diagnostics. In the frame of differential diagnostics, it constitutes the problem of validity, asking for an adequate theoretical interpretation of measured test scores considered representative for a certain population of subjects. However, the diagnosis of learning additionally depends on variations of situations and times. Remaining within classical test theory, the question of adequate theoretical interpretations of test scores has led to the conception of construct validity, whereby a construct is usually defined as a theoretical concept describing the individual disposition to be measured. The construct is considered embedded in a network of hypothetical relationships to other concepts, and thus a test is valid if the test variables show covariances with other variables close to the construct in question. 
Accordingly, structural equation models and causal models, respectively, have been proposed (e.g., Eid 1995) to validate causal relations through an analysis of complex linear relations between variables considered partly as latent and partly as manifest. Clearly, every interpretation of test scores goes beyond the test-related variables: It essentially involves attempts to make accessible variations of latent variables on the basis of the frequency of correct answers to test items. Of course, research interest focuses squarely on true states of nature – a commodity that, if measurement error is large, may differ considerably from observed data. Similarly, when members of a group are changing over time on some important attribute, it is not the fallible observed changes that are of critical interest but the underlying true changes. However, this presupposes (a) a precise clarification of those cognitive processes that occur in a special learning situation, (b) a specification of those variations of personal, situational, and task-related characteristics that determine these cognitive processes, and (c) a specification of the resulting observable data to be interpreted. This immediately leads to several important questions to be answered with regard to the validation of theoretical constructs such as learning: Which cognitive operations and processes can be theoretically justified in order to map the structure of latent variables adequately? Which test situations and items are suitable, or can be constructed, in order to initiate the intended cognitive processes? Does the theoretically postulated information processing constitute a homogeneous class of test items, i.e., can the test item solutions be explained on the basis of the same theoretical assumptions? Is an empirically reasonable comparability of all answers possible that allows inferences with regard to the internal processes? 
Are person-specific variations of operational sequences identifiable with regard to single items, and is it possible to assign distinctive characteristics of the person and of the given answers to these interindividually different sequences? Giving reasonable answers to these questions is the central goal of the approach of cognitively diagnostic assessment (cf. Nichols 1994), according to which the fundamental basis of the validation of theoretical constructs consists of a comprehensive cognitive task analysis. Thus, cognitive task analysis creates a rational basis for the selection of the contents of test items as well as for the expectancy of consistencies in the answers and the prognosis of specific results. The probability of a correct answer is then considered a function of both the ability of the person and the task characteristics. Measurement of Change Why is measurement of change over time so important in research on learning? The answer is straightforward. When students learn something new, acquire new knowledge, or modify their preconceptions, they are changing in fundamental and interesting ways that reflect the constructive and cumulative character of learning. Thus, only by measuring individual change is it possible to document each person’s progress in learning. Contrary to the simple misconception according to which individual change is an increment (i.e., the difference between before and after), individual change in learning takes place in a sequence of discrete steps over time. Therefore, each subject should be measured repeatedly over extended periods of time in order to understand the progression of learning. Taking a snapshot of a learner’s observed status before and after an intervention is certainly not the best way to reveal the intricacies of their learning progress. Changes might be occurring smoothly over time along some complex and substantively interesting trajectory. 
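The idea of repeated measurement can be sketched with a toy computation (simulated, hypothetical data; the entry itself prescribes no particular software): each learner’s observed growth record is fit by an ordinary least-squares regression of score on measurement wave, yielding a within-person growth rate.

```python
import random
import statistics

def ols_fit(times, scores):
    """Ordinary least-squares fit of score = intercept + slope * time."""
    t_mean, s_mean = statistics.fmean(times), statistics.fmean(scores)
    slope = (sum((t - t_mean) * (s - s_mean) for t, s in zip(times, scores))
             / sum((t - t_mean) ** 2 for t in times))
    return s_mean - slope * t_mean, slope

random.seed(7)
waves = [0, 1, 2, 3]  # four measurement occasions per learner

# Simulated growth records: each learner has an unobserved starting level
# and true growth rate; observed scores add measurement error.
for learner in range(5):
    start, rate = random.gauss(40, 5), random.gauss(3, 1)
    observed = [start + rate * t + random.gauss(0, 2) for t in waves]
    intercept, slope = ols_fit(waves, observed)
    print(f"learner {learner}: estimated growth rate per wave = {slope:.2f}")
```

A single pre/post difference would collapse each record to one number, whereas the fitted slope uses all waves; across learners, the slopes themselves can then be analyzed for between-person differences in change.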
Crude pre-/post-measurement can never reveal the details of that trajectory. Therefore, a truly longitudinal perspective must be adopted. Willett (1988) suggests assembling and inspecting observed growth records for every subject in the dataset (i.e., graphs of observed data displayed against time) in order to provide evidence of each subject's change over time. Corresponding to the exploratory inspection of observed growth records and empirical growth trajectories, a formal multi-wave analysis of change requires that a statistical model be available to represent individual change over time. This model consists of two parts: a structural part that represents the dependence of true states on time, and a stochastic part that represents the random effects of measurement error. Perhaps the investigator's most important task is to select the appropriate individual growth model to be used. Usually, this decision is made via extensive preliminary data analyses in which the observed growth records are systematically and carefully explored. Then, between-person differences in change can be tested. Willett (1988) suggested applying ordinary least-squares regression analysis in order to estimate the within-person growth parameters, and he argues that the resulting growth rates provide precise measurements of individual change. Since the 1990s, several dedicated computer programs have been available for simultaneously estimating all of the parameters of growth models and for providing appropriate standard errors and goodness-of-fit statistics (see, e.g., Kreft et al. 1990). Cross-References ▶ Assessment in Learning ▶ Automated Learning Assessment and Feedback ▶ Dynamic Assessment ▶ Formative Assessment and Improving Learning ▶ Learning Criteria and Assessment Criteria ▶ Measurement of Change in Learning ▶ Models of Measurement of Persons in Situations References Blaxton, T. A., Zeffiro, T. A., Gabrieli, J. D.
E., Bookheimer, S. Y., Carrillo, M. C., Theodore, W. H., & Disterhoft, J. F. (1996). Functional mapping of human learning: A positron emission tomography activation study of eyeblink conditioning. Journal of Neuroscience, 16(12), 4032–4040. Chi, M. T. H. (1997). Quantifying qualitative analyses of verbal data: A practical guide. The Journal of the Learning Sciences, 6(3), 271–315. Eid, M. (1995). Modelle der Messung von Personen in Situationen. Weinheim: Psychologie Verlags Union. Ericsson, K. A., & Simon, H. A. (1993). Protocol analysis: Verbal reports as data. Cambridge: The MIT Press. Gillespie, A. (2007). The social basis of self-reflection. In J. Valsiner & A. Rosa (Eds.), The Cambridge handbook of socio-cultural psychology (pp. 678–691). Cambridge: Cambridge University Press. Halsband, U., Krause, B. J., Schmidt, D., Herzog, H., Tellmann, L., & Müller-Gärtner, H. W. (1998). Encoding and retrieval in declarative learning: A positron emission tomography study. Behavioural Brain Research, 97(1–2), 69–78. Ifenthaler, D., Pirnay-Dummer, P., & Seel, N. M. (Eds.). (2010). Computer-based diagnostics and systematic analysis of knowledge. New York: Springer. Kreft, I. G. G., de Leeuw, J., & Kim, K. S. (1990). Comparing four different statistical packages for hierarchical linear regression: GENMOD, HLM, ML2, and VARCL. Los Angeles: Center for the Study of Evaluation, University of California at Los Angeles. Marinelli, L., Perfetti, B., Moisello, C., Di Rocco, A., Eidelberg, D., Abruzzese, G., & Ghilardi, M. F. (2010). Increased reaction time predicts visual learning deficits in Parkinson's disease. Movement Disorders, 25(10), 1498–1501. Nichols, P. D. (1994). A framework for developing cognitively diagnostic assessments. Review of Educational Research, 64(4), 575–603. Nisbett, R. E., & Wilson, T. D. (1977). Telling more than we can know: Verbal reports on mental processes. Psychological Review, 84, 231–259. Robertson, E. M. (2007).
The serial reaction time task: Implicit motor skill learning? Journal of Neuroscience, 27(38), 10073–10075. Simon, H. A., & Kaplan, C. A. (1989). Foundations of cognitive science. In M. I. Posner (Ed.), Foundations of cognitive science (pp. 1–47). Cambridge: The MIT Press. Tack, W. H. (1980). Zur Theorie psychometrischer Verfahren: Formalisierung der Erfassung von Situationsabhängigkeit und Veränderung. Zeitschrift für Differentielle und Diagnostische Psychologie, 1(2), 87–106. Willett, J. B. (1988). Questions and answers in the measurement of change. Review of Research in Education, 15, 345–422. Dialogical Mapping Dialogic mapping is a development of concept-mapping technology that allows the connections between ideas to be described in a less controlled way than previously. It encourages dialogic discussion of maps and their description in a wide variety of communicative forms that accommodate the modes of thought and expression found in contrasting academic disciplines. Dialogue ▶ Argumentation and Learning DICK Continuum in Organizational Learning Framework MOHAMAD HISYAM SELAMAT, MOHD AMIR MAT SAMSUDIN College of Business, Universiti Utara Malaysia, Sintok, Kedah, Malaysia Synonyms Communication; Direction; Internality; Knowledge; Organizational learning Definition Klimecki and Lassleben (1998) define organizational learning (OL) as a communication-based process in which the organization overcomes its previous boundaries of knowledge and ability by allowing its members to share knowledge, interact, influence each other, and cope with difficult situations. Nonaka and Takeuchi (1995), on the other hand, viewed OL as involving the generation, absorption, and sharing of tacit knowledge, and emphasized the importance of interaction among people for the development of OL capabilities. In short, OL is the process of continuous innovation through the creation of new knowledge.
It is an ongoing process that takes place as staff members engage in knowledge work (Davenport et al. 1998). These views illustrate the importance of continuously updating information systems (IS) through the medium of communication among employees. However, to enable the communication process, employees have to be self-confident and must be encouraged to talk to others in the workplace. A lack of confidence, together with anxiety, will demotivate an individual from communicating with others and consequently reduce the effectiveness of the OL framework (Harvey and Butcher 1998). Thus, individual development is the starting point for OL-based IS. Theoretical Background Tacit knowledge is an individual's intuition, beliefs, assumptions, and values, formed as a result of experience (Saint-Onge 1996). Augier and Vendelo (1999) argued that, due to its transparent and subjective nature, tacit knowledge is not easily externalized. Many researchers in the organizational development area (Butcher et al. 1997) see three major stumbling blocks to involving individuals in the organizational development process: 1. The difficulty individuals have in externalizing and sharing their tacit knowledge (Harvey and Butcher 1998) 2. The difficulty individuals have in obtaining information from their colleagues (Butcher et al. 1997; Harvey and Butcher 1998) 3. The difficulty of self-documenting the externalized and shared tacit knowledge (Selamat and Choudrie 2007) In summary, the capability to externalize, share, and document tacit knowledge is of paramount importance for an OL framework. This in turn illustrates that staff members should be instilled with that capability. Therefore, individual development should become the starting point in an OL-based IS developmental framework. This entry uses this reasoning to illustrate the role of direction (D), internality (I), communication (C), and knowledge (K) in OL.
In this entry, this is coined the DICK continuum. This entry argues that staff members need first and foremost to understand their direction within the organization. Being equipped with this value enables staff members to know what should be done when working in the organization. To assist in this process, three fundamental aspects need to be understood: 1. Task completion – staff members need to realize that behind every single penny they earn from the company stands the responsibility to complete the given tasks (Selamat and Choudrie 2007). When recruited, an individual should be grateful to his/her organization. This acknowledgment should be followed by an inspiration to work hard and smart for the sake of the company. 2. Task scope – staff members must be able to prioritize organizational tasks (Butcher et al. 1997). To ease the process of understanding task priority, staff members can divide tasks into major and minor categories (Selamat and Choudrie 2007). Major tasks include the superior's instructions and operational tasks; minor tasks include ceremonial committee and union affairs. Staff members must be clear about their job specifications so that they do not waste time on tasks that do not contribute to performance recognition. Major tasks must be given higher priority than minor tasks. 3. Personal targets – staff members need to be clear about their organizational aims and targets over a given period of time. Working without aims or targets is like a blind person touching things in a dark room. Aims and targets should be developed to shed light on how to monitor organizational activities and determine future directions (Butcher et al. 1997; Selamat and Choudrie 2007). After determining direction, staff members must have the strength to pursue the intended direction. This research proposes eight internal strengths that should be instilled in staff members, which are as follows: 1.
Personal confidence – the self-belief needed to undertake and accomplish organizational tasks (Butcher et al. 1997). As one of the elements that prevent staff members from externalizing and sharing their tacit knowledge is a lack of confidence (Harvey and Butcher 1998), this element should be emphasized in understanding an OL framework. 2. Observing accepted organizational approaches – this value allows individuals to create and adapt specific competencies for specific situations (Selamat and Choudrie 2007). By observing accepted organizational approaches, staff members can undertake tasks based on the right approach for the right situation (Selamat and Choudrie 2007). 3. Undertaking tasks with commitment and self-discipline – this value requires every staff member to bear in mind that the job must be done with commitment and self-discipline (Selamat and Choudrie 2007). This internal strength is the backbone of enabling knowledge and skills utilization among staff members. 4. Self-awareness – the ability to determine the tasks that need to be accomplished at the current time and to accomplish them according to an accepted organizational approach (Selamat and Choudrie 2007). In other words, it is related to the phrase "do the right things at the right time." 5. Self-remembrance – the value that requires staff members to be mindful of their actions when undertaking a task so that it can be accomplished effectively, and to remember that through their effective actions the company can achieve a good profit and consequently give them a good salary and bonus (Selamat and Choudrie 2007). 6. Compassion – defined as the feeling that the whole organization is like a family (Selamat and Choudrie 2007). Each staff member should appreciate the other members' efforts because all of them share the same aim and objective in terms of job security. 7.
Sincerity – every staff member must have the feeling that he/she works for the sake of the company and to fulfill his/her responsibility to the company. 8. Willingness to change – this value leads to continuous improvement in an organization so that its competitiveness does not deteriorate; it is needed because of rapid changes in organizational life and the business environment. From the aforementioned discussion, eight internal strengths were considered relevant to establishing a learning environment. These internal strengths can become the elements referred to when understanding situations and taking actions. Additionally, these values are closely related to OL; therefore, they should be instilled in staff members in order to establish a learning environment. Another element proposed by this research for promoting OL is the ability to communicate within the organization. This is because staff members are not alone in the organization and thus need to communicate when undertaking tasks. This research divides communication into two categories: 1. Formal and informal discussions – staff members face various tasks in daily activities (routine, nonroutine, official, and unofficial). To cope with this variety, the integration of formal and informal discussion in handling tasks becomes necessary (Earl and Hopwood 1980; Selamat and Choudrie 2007). Formal approaches are procedures such as meetings, progress reports, and performance evaluation reports. Informal approaches include dialog, face-to-face interaction, corridor meetings, lunch table chats, and coffee/tea table chats. 2. Rational discourse – whenever an IS is applied, it serves some human interests; the design choices therefore serve some interests at the expense of others and involve moral value judgments.
This means that practical advice concerning the design of a learning-based IS must not be limited to technical aspects but must also address moral issues, such as what is good or bad, or right or wrong, in any particular application. Therefore, there is a need to establish a platform for approaching such value judgments in a rational way. A rational discourse can legitimize the selection of a design ideal because it ensures that the arguments of all interested parties are heard, that the choice results in an informed consensus about the design ideal, and that the formal value choice is made only by the force of the better argument (Klein and Hirschheim 1996). The last element is knowledge (K). Knowledge is defined here as generic knowledge rather than specific expertise. In addition, it represents the ability to utilize, externalize, and share tacit knowledge in an effective manner. From the aforementioned discussions, this entry proposes the DICK continuum for the process of establishing a learning environment. This is something that prior research has not undertaken, and it is what makes this entry distinctive. In short, for the first element of this entry's conceptual framework, an understanding of the DICK continuum is required. The DICK continuum can assist in building a confident and responsible individual (Selamat and Choudrie 2007). These values, in turn, create three important competencies: (1) influencing skills, (2) sharing attitudes, and (3) inquisitive tendencies (Selamat and Choudrie 2007). In other words, influencing skills, sharing attitudes, and inquisitive tendencies are the second element of this entry's conceptual framework. It was also noted above that a key problem in developing OL-based IS is the need to develop an individual's ability to externalize and share tacit knowledge.
In such an instance, the DICK continuum, influencing skills, sharing attitudes, and inquisitive tendencies are the humanistic elements that should be considered when seeking means of overcoming the difficulties in externalizing and sharing tacit knowledge. This is because, by practicing the above influencing skills, sharing attitudes, and inquisitive tendencies, individuals can generate creative ideas (I), actions (A), reactions (R), and reflections (R) (Selamat and Choudrie 2007). The term IARR represents forms of activities within an organization. These activities then allow the externalizing and sharing of tacit knowledge that can provide synergistic inputs for the continuous development of IS. Therefore, the IARR continuum is the third element of this entry's conceptual framework. For this, however, the tacit knowledge must first be documented. This can be achieved through the value of self-documentation, which is also developed by the DICK continuum (Selamat and Choudrie 2007). Because the elements of DICK foster the willingness to question implicit assumptions, to explore new possibilities, and to direct energies toward higher standards, staff members are well prepared to produce good-quality documented progress reports or working papers. In the longer term, this ensures that there is a tangible means of verifying and validating tacit knowledge. Therefore, tacit knowledge documentation is the fourth element of this entry's conceptual framework. Reflecting on the above discussion, it can be determined that individual development is the starting point of an OL framework. Additionally, it can be learnt from the previous discussion that the DICK continuum should become the starting point for individual development. As mentioned above, externalized and shared tacit knowledge must be documented. This knowledge is then transformed into explicit knowledge (e.g., through business reports, written descriptions, or instructions).
All this self-documentation is then given to the systems officers. At this stage, the systems officers study the documented inputs provided by staff members and codify them. Once the inputs are transformed into codified domains within the systems, they become information that assists staff members in fulfilling their responsibilities. This is the fifth element of this entry's conceptual framework. To illustrate the relationships among the five elements mentioned earlier, a diagrammatic representation has been developed, as shown in Fig. 1. As the diagram shows, individual development is initially fostered by the elements of direction, internality, communication, and knowledge (the DICK continuum). The first element of the framework is represented by Stage A in the diagram. As the elements of DICK enable the use of knowledge and skills in an effective manner, they are pertinent to the development of influencing skills (Stage B), sharing attitudes (Stage C), and inquisitive tendencies (Stage D) (Selamat and Choudrie 2007). These three stages represent the second element of the OL framework. When undertaking influencing, sharing, and inquiring activities, an individual implicitly expresses tacit knowledge. This expression takes either physical form (actions and reactions) or verbal form (ideas and reflections) (Selamat and Choudrie 2007) (Stage E). The continuum of ideas, actions, reactions, and reflections provides externalized tacit knowledge for OL-based IS development (Selamat and Choudrie 2007). Stage E of the diagram represents the third element of the framework (as noted above). However, the externalized ideas, actions, reactions, and reflections must first be documented. This process is undertaken at Stage F, and it represents the fourth element of the above theoretical framework.
The documented inputs provided by staff members can be transformed into codified domains within the systems (system database) or compiled in files (Stage G). In turn, these databases or files can be utilized to refine decisions and develop strategies for future development. Through this process, an individual's understanding of the organization's activities (tacit knowledge) is also enriched. This new understanding in turn becomes a platform for a continuous learning process. [Fig. 1 Framework for a continuous organizational learning. The diagram links Stage A: D-I-C (direction, internality, communication) and knowledge (K); Stage B: influencing skills; Stage C: sharing attitudes; Stage D: inquisitive tendencies; Stage E: I-A-R-R continuum (ideas, actions, reactions, reflections); Stage F: knowledge documentation; Stage G: system database; and Stage H: improvement of organizational strategies/operations/approaches, with internalization, information dissemination, and the organization's problematic situations as connecting processes.] In the diagram, this process is represented by Stage H. Stages G and H represent the fifth element of the aforementioned theoretical framework. Cross-References ▶ Knowledge Management ▶ Organizational Change and Learning ▶ Shared Cognition References Augier, M., & Vendelo, M. T. (1999). Networks, cognition and management of tacit knowledge. Journal of Knowledge Management, 3(4), 252–261. Butcher, D., Harvey, P., & Atkinson, S. (1997). Developing business through developing individuals. Cranfield: Cranfield University. Davenport, T. H., De Long, D. W., & Beers, M. C. (1998). Successful knowledge management projects. Sloan Management Review, 39(2), 43–57. Earl, M. J., & Hopwood, A. G. (1980). From management information to information management. In L. Lucas & S. Lincoln (Eds.), The information systems environment. Amsterdam: North-Holland. Harvey, P., & Butcher, D. (1998).
Those who make a difference: Developing businesses through developing individuals. Industrial and Commercial Training, 30(1), 12–15. Klein, H. K., & Hirschheim, R. (1996). The rationality of value choices in information systems development. Foundations of Information Systems. http://www.cba.uh.edu/parks/fis/kantpap.htm. Accessed 15 Sept 2002. Klimecki, R., & Lassleben, H. (1998). Modes of organizational learning: Indications from an empirical study. Management Learning, 29(4), 405–430. Nonaka, I., & Takeuchi, H. (1995). The knowledge creating company. New York: Oxford University Press. Saint-Onge, H. (1996). Tacit knowledge: The key to the strategic alignment of intellectual capital. Strategy and Leadership Journal, 24(2), 10–14. Selamat, M. H., & Choudrie, J. (2007). Using meta-abilities and tacit knowledge for developing learning based systems: A case study approach. The Learning Organization, 14(4), 321–344. Didactics ▶ Choreographies of School Learning Didactics, Didactic Models and Learning KARL-HEINZ ARNOLD Department of Applied Educational Science, Institute of Education, University of Hildesheim, Hildesheim, Germany Synonyms Classroom teaching and learning; Curriculum development; Designing lessons; General education; Lesson planning Definition The word didactics comes from the Greek word "διδάσκειν" (didáskein), which means teaching. The scientific term didactics (sometimes also spelled "Didaktik," as in German) stems from the German tradition of theorizing classroom learning and teaching. It is a singular noun spelled in the plural form, indicating that connotations of the somewhat pejorative English word "didactic" (a text overburdened with instructive matter, or an oversimplifying way of teaching) are not intended. Didactics serves as a major theory in teacher education and syllabus development, especially in the German-speaking and Scandinavian countries, as well as in Finland (didaktiikka) and in Russia (didaktika). With a slightly different meaning, it is also employed in France (didactique comparée) and Spain (didáctica general) as well as in the Dutch (algemene didactiek) and Afrikaans literature. General didactics represents the overarching theory of both decision making on and processes of teaching and learning in societal institutions (especially in schools and universities devoted to general and domain-specific education), whereas subject-matter didactics covers the theories of teaching and learning a particular school subject. The two disciplines are considered complementary; it can also be argued that general didactics provides the common framework for the particular purposes of teaching school subjects. Regarding the conceptual level of operationalization, general didactics as a macro theory of classroom teaching and learning is concerned with decision making on general goals as well as specific objectives on content and methods of instruction; the description and analysis of single acts of teaching and learning is not intended. Therefore, empirical research on teaching and learning can be seen as providing complementary micro theories modeling the processes of teachers' delivery of didactically planned lessons and students' performance of the intended (and unintended) learning tasks. From the perspective of methodology, general didactics serves as a scientific means of decision making in educational planning at different levels. It is regarded as a value-laden, prescriptive theory with a normative background, whereas instructional science provides descriptive theories on classroom teaching and learning. Theoretical Background The foundational period of didactics lies in the seventeenth century and is associated with the work of Johann Amos Comenius and Wolfgang Ratke.
Comenius worked out a comprehensive concept for teaching in public schools: the Didactica Magna (1657), intended "to teach everybody everything completely, quickly, pleasantly, and thoroughly." Comenius also penned the famous Orbis Sensualium Pictus (1658). This textbook with short chapters was written in German; Latin translations of the key terms were added, and figures illustrated the topics. At the end of the eighteenth century, educators from the Philanthropist movement (e.g., Basedow, Campe) made significant contributions to subject-matter education, especially in the natural sciences, and to teaching methods that were practiced in newly founded, innovative schools. It was Johann F. Herbart (1800), an academic disciple of the German philosopher Immanuel Kant, who not only outlined education as a scientific discipline but also provided a sequencing model of teaching lessons (= Artikulation) that still receives a lot of attention in the European and US-American literature. Instructional science also refers to this model, since it is based on a more psychological (i.e., cognitive) approach to learning. Herbart's successors (i.e., the Herbartians, e.g., Tuiskon Ziller, Wilhelm Rein) watered down these steps to form a rigid sequencing of lessons. Herbart's notion of schooling as "general education" (= [Allgemein-]Bildung) also provides a basis for the comprehensive concept of "general didactics." Otto Willmann (1889) made the fruitful distinction between "content of education" (= Bildungsinhalt) and "substance of content" (= Bildungsgehalt).
He recommended including in the syllabus only topics that are educationally valuable (= content of education), because learning about those topics draws on their inherent "substance of content." Erich Weniger, an academic disciple of Herman Nohl, whose name is associated with the Human Science Theory of Education in the first part of the twentieth century, outlined a comprehensive theory of syllabus development as a societal task. Weniger (1930) defined didactics as the theory of educational content and syllabus. Referring to Willmann and Weniger, Wolfgang Klafki (1959) framed his concept of Categorical Education (= Kategoriale Bildung), which he set up against the restrictions of both "formal education" (i.e., enhancing the methods of learning and thinking) and "material education" (i.e., covering a comprehensive range of topics). The Exemplar Approach (Klafki 1991) referred to Bruner's concept of "learning by discovery" and thereby also applied a more psychological concept of transfer. It can be argued that Klafki's (1994) notion of "Bildung" as the potential for self-determination, codetermination (= participation), and solidarity has some strong links to generalized learning results as captured in both the process of transfer and competency constructs (Arnold 2007). Weniger and his academic disciple Klafki are regarded as the founders of the Bildung-Centered Approach to Didactics. In his famous paper "Didactic analysis as the core of preparation of instruction," Klafki (1958/1995) succeeded in transposing Weniger's basic ideas of syllabus decision making to the level of lesson preparation. Teachers are considered partly autonomous educators who are responsible for the children taught in their classrooms and therefore must reflect critically on the syllabus.
The core questions of the didactic analysis refer to (2) the current and (3) the future significance of the chosen content for the students, which means that teachers should reflect on the educational substance of the mandatory ▶ curriculum. Question 1 focuses on the aforementioned exemplarity of the chosen topic, question 4 on its structuredness, and question 5 on the accessibility of the topic (= methodical aspects of teaching). In the 1950s and 1960s, Paul Heimann, professor at the teacher education college of West Berlin, devised the Learning-Centered Approach to General Didactics, setting it against the emerging tradition of the Bildung-Centered Approach. He argued that university teacher education and its newly introduced internships should be based on scientific analysis and therefore should rely on empirical learning theories. This approach was further developed by Heimann's student Wolfgang Schulz, who established the more comprehensive Berlin Model of Lesson Planning (1965), which – as an alternative to Klafki's model – has found wide use in German teacher education. Schulz discerned four fields of decision making: (1) intentions; (2) themes and topics; (3) methods; and (4) media. The usage of this fourfold scheme should be guided by three principles of lesson planning: (a) interdependency of the fields, (b) variability of the lesson plan, and (c) controllability (analysis of differences between the lesson plan and the lesson delivered). Two fields of conditions are to be considered: (a) the sociocultural and (b) the individual conditions of the students, which involve both adaptivity to their prerequisite knowledge and attention to the societal demands of education. After moving to the University of Hamburg, Schulz (1980) revised his model and called it the Hamburg Model of Lesson Planning. Partly adopting Klafki's work, Schulz focused more on teachers' critical reasoning on the societal dimension of schooling.
Schulz also emphasized the interactive nature of classroom teaching by introducing Cohn's Theme-Centered Interaction (TCI) into his model. In accordance with curriculum theory, he distinguished between three levels of educational objectives (Möller 1973): (1) far-reaching educational goals, (2) broadly defined educational objectives, and (3) narrowly defined educational objectives. For the latter, he introduced Robert Mager's (1962) method of operationalization, which has been criticized as narrowing the educative (= Bildung-centered) purposes of schooling. Around the beginning of the 1970s, a controversial debate on the two major models took place in Germany, although Klafki as well as Schulz never saw them as opposed to each other. Influenced by the 1968 student protest movement, Klafki further developed his model into the Perspective Schema of Lesson Planning (1980, 1994) and formulated its background in the Critical Constructive Approach to Didactics (Klafki 1985, 1994). In doing so, he renewed the traditional concept of general education by relating it to the Critical Theory of Society of the Frankfurt School of social research (e.g., Horkheimer, Habermas). The model established emancipation as the central goal of education, encompassing the three facets of "Bildung." He also incorporated some central features of Schulz's model (accessibility and ways of presentation, e.g., choice of media; evidence of understanding and learning outcomes, e.g., assessment of learning outcomes).
General didactics and its notion of both general education and interdisciplinary instruction received a general thematic framework in Klafki’s (1994) Epoch-Making Key Problems of the Modern World (e.g., the question of peace; the environment; the unequal distribution of wealth, employment, and unemployment; freedom and participation; the relationship between the generations and between men and women; interaction with those who have special needs and with ethnic minorities). It has been argued (Arnold and Koch-Priewe 2011) that teaching subject matter with regard to key problems also contributes to a worldwide curriculum and provides a concept of global education. A more psychological approach to general didactics has been developed by the Swiss researcher Hans Aebli (1983/1998), a student of the developmental psychologist Jean Piaget. One main feature of his work was the identification of 12 teaching methods (e.g., narration and reporting, establishment of a concept, flexible work-through, practice and rehearsal). Aebli did not refer to the more content-focused approaches of his German colleagues Klafki and Schulz, who likewise ignored Aebli’s work. Rudolf Messner and Kurt Reusser (2006), academic disciples of Aebli, broadened his approach. Aebli’s constructivist understanding of learning has been further developed by Oser and Baeriswyl (2001), who initially distinguished 12 (and later even more) basis models of learning scripts that are meant to explain why aspects of the sight structure of teaching are effective. Since the 1990s, constructivist conceptions have received a lot of attention in subject-matter didactics, especially in mathematics and science education. Reich (2006) framed a constructivist approach to general didactics that expanded on Dewey’s notion of learning experience, which also provides one of the two basic concepts of a new approach to general didactics developed by Meinert Meyer (2007) and colleagues. 
They placed the notion of developmental tasks in psychology in relation to the aims of general education, which call for lesson planning to be individualized, i.e., adapted to the needs and experiences of each student. Important Scientific Research and Open Questions There is not much empirical research on the concepts of general didactics; their explanatory power in research on teaching and curriculum development has scarcely been analyzed at all. It could be shown, however, that teachers’ everyday lesson planning is at least implicitly in line with the basic concepts of Klafki’s didactic analysis (Koch-Priewe 2000) and that classroom learning tasks can be categorized by Bildung-centered concepts (Bloemeke et al. 2006). Many textbooks on general didactics have been published in German; almost all of them tackle the two major models described above. It is worth noting that neither Schulz nor Klafki wrote monographs on their approaches. Some of Klafki’s articles have been translated into English and other languages; his work is well known in Scandinavia, Finland, and Japan. The work of Schulz remains accessible only to German-speaking scholars and teachers. Yet some remarkable writings on the German tradition of general didactics are available in English. Oser and Baeriswyl (2001) wrote a chapter in the fourth edition of the famous Handbook of Research on Teaching. Westbury and Hopmann (2000) provided an anthology on the “German Didaktik Tradition” containing translations of some of Weniger’s and Klafki’s central articles. In the area of curriculum theory, some US (e.g., Deng and Luke 2008) and Finnish authors (e.g., Uljens 1997) have highlighted the profound significance general didactics attaches to lesson planning. A subdivision of the European Educational Research Association (EERA), “Didactics – Learning and Teaching,” was founded in 2005. 
Arnold and Koch-Priewe (2011) suggested a merger and integration of the didactics tradition with empirical instructional research. Some crucial problems still remain under debate: (a) How do the concepts of syllabus and curriculum relate to education standards that rely on competency descriptions? (b) What concepts can be shared by general and subject-matter didactics? (c) How much can instructional effectiveness be improved by promoting teachers’ planning abilities? Cross-References ▶ Adaptive Instruction Systems and Learning ▶ Aligning the Curriculum to Promote Learning ▶ Content-Area Learning ▶ Curriculum and Learning ▶ Generative Teaching: Improvement of Generative Learning References Aebli, H. (1998). Zwölf Grundformen des Lehrens. Eine allgemeine Didaktik auf kognitionspsychologischer Grundlage [Twelve basic forms of teaching. An approach to General Didactics founded on Cognitive Psychology; 1st ed.: 1983] (10th ed.). Stuttgart: Klett-Cotta. Arnold, K.-H. (2007). Generalisierungsstrukturen der kategorialen Bildung aus der Perspektive der Lehr-Lernforschung [Generalizing structures of Categorical Education: The view of empirical research on learning and instruction]. In B. Koch-Priewe, F. Stübig, & K.-H. Arnold (Eds.), Das Potenzial der Allgemeinen Didaktik (pp. 28–42). Weinheim: Beltz. Arnold, K.-H., & Koch-Priewe, B. (2011). The merging and the future of the classical German traditions in General Didactics: A comprehensive framework for lesson planning. In B. Hudson & M. A. Meyer (Eds.), Beyond fragmentation: Didactics, learning and teaching in Europe (pp. 252–264). Opladen: Budrich. Deng, Z., & Luke, A. (2008). Subject matter: Defining and theorizing school subjects. In F. M. Connelly, M. F. He, & J. Phillion (Eds.), The Sage handbook of curriculum and instruction (pp. 66–87). Los Angeles, CA: Sage. Klafki, W. (1959). 
Das pädagogische Problem des Elementaren und die Theorie der kategorialen Bildung [The pedagogical problem of the elementary and the theory of categorical education]. Weinheim: Beltz. Klafki, W. (1985). Grundlinien kritisch-konstruktiver Didaktik [Basic aspects of the Critical-Constructive Approach to General Didactics]. In W. Klafki (Ed.), Neue Studien zur Bildungstheorie und Didaktik (pp. 31–86). Weinheim: Beltz. Klafki, W. (1991). Exemplar approach. In A. Lewy (Ed.), The international encyclopedia of curriculum (pp. 181–182). Oxford: Pergamon. Klafki, W. (1994). Grundzüge eines neuen Allgemeinbildungskonzepts. Im Zentrum: Epochaltypische Schlüsselprobleme [Essential features of a new concept of general education. In focus: epoch-making key problems]. In W. Klafki (Ed.), Neue Studien zur Bildungstheorie und Didaktik. Zeitgemäße Allgemeinbildung und kritisch-konstruktive Didaktik (4th rev. ed., pp. 43–82). Weinheim: Beltz. Klafki, W. (1980/1994). Zur Unterrichtsplanung im Sinne kritisch-konstruktiver Didaktik [Lesson planning in the critical-constructive approach to general didactics; first published: 1980]. In W. Klafki (Ed.), Neue Studien zur Bildungstheorie und Didaktik (4th ed., pp. 251–284). Weinheim: Beltz. Koch-Priewe, B. (2000). Zur Aktualität und Relevanz der Allgemeinen Didaktik in der LehrerInnenausbildung [The relevance of General Didactics to teacher education]. In M. Bayer, F. Bohnsack, B. Koch-Priewe, & J. Wildt (Eds.), Lehrerin und Lehrer werden ohne Kompetenz? Professionalisierung durch eine andere Lehrerbildung (pp. 148–170). Bad Heilbrunn: Klinkhardt. Mager, R. F. (1962). Preparing objectives for programmed instruction. San Francisco: Fearon. Messner, R., & Reusser, K. (2006). Aeblis Didaktik auf psychologischer Grundlage im Kontext der zeitgenössischen Didaktik [Aebli’s psychologically based approach to General Didactics in the context of contemporary theories on Didactics]. In M. Baer, M. Fuchs, P. Füglister, K. Reusser, & H. 
Wyss (Eds.), Didaktik auf psychologischer Grundlage (pp. 52–73). Bern: h.e.p. Verlag. Meyer, M. A. (2007). Didactics, sense making, and educational experience. European Educational Research Journal, 6(2), 161–173. Möller, C. (1973). Technik der Lernplanung. Methoden und Prinzipien der Lernzielerstellung [Techniques of planning learning. Methods and principles of preparing learning objectives] (4th ed.). Weinheim: Beltz. Oser, F. K., & Baeriswyl, F. J. (2001). Choreographies of teaching: Bridging instruction to learning. In V. Richardson (Ed.), Handbook of research on teaching (4th ed., pp. 1031–1065). Washington, DC: American Educational Research Association. Schulz, W. (1965). Unterricht – Analyse und Planung [Analysis and planning of lessons]. In P. Heimann, G. Otto, & W. Schulz (Eds.), Unterricht – Analyse und Planung (pp. 13–47). Hannover: Schroedel. Schulz, W. (1980). Ein Hamburger Modell der Unterrichtsplanung. Seine Funktion in der Alltagspraxis [The Hamburg model of lesson planning: Its functioning in everyday teaching]. In B. Adl-Amini (Ed.), Didaktische Modelle und Unterrichtsplanung [Models of general didactics and lesson planning] (pp. 49–87). Weinheim: Juventa. Weniger, E. (1930/2000). Didaktik as a theory of education [1st ed.: Theorie der Bildungsinhalte und des Lehrplans. Weinheim: Beltz, 1930]. In I. Westbury, S. Hopmann, & K. Riquarts (Eds.), Teaching as a reflective practice. The German Didaktik tradition (pp. 111–126). Mahwah: Erlbaum. Westbury, I., Hopmann, S., & Riquarts, K. (Eds.). (2000). Teaching as a reflective practice. The German Didaktik tradition. Mahwah, NJ: Erlbaum. Willmann, O. (1889/1967). Didaktik als Bildungslehre nach ihren Beziehungen zur Socialforschung und zur Geschichte der Bildung [Didactics as a theory of education according to its relations to social research and to the history of education; 1st ed.: Braunschweig: Vieweg, 1889]. (7th ed.). Freiburg: Herder. Uljens, M. (1997). School didactics and learning. 
A school didactic model framing an analysis of pedagogical implications of learning theory. Hove: Psychology Press. Further Reading Hopmann, S., & Keitel, C. (1995). Editorial: The German Didaktik tradition. Journal of Curriculum Studies, 27(1), 1–2. Hudson, B., Buchberger, F., Kansanen, P., & Seel, H. (Eds.). (1999). Didaktik/Fachdidaktik as science(s) of the teaching profession? (TNTEE Publications Vol. 2, No. 1; available under http://tntee.umu.se/publications/publication2_1.html, accessed 14 Dec 2010). Umeå: TNTEE, University of Umeå. Kansanen, P. (2002). Didactics and its relation to educational psychology: Problems in translating a key concept across research communities. International Review of Education, 48(6), 427–441. Klafki, W. (1995). Didactic analysis as the core of preparation of instruction. Journal of Curriculum Studies, 27(1), 13–30. Difference ▶ Simultaneous Discrimination Learning in Animals Differential Access to Learning Skills ▶ Differential Association Theory Differential Association Theory THOMAS ANTWI BOSIAKOH Department of Sociology, University of Ghana, Legon Accra, Ghana Synonyms Differential access to learning skills; Differential association to learning skills Definition Differential association is a crime predictive theory. It can be defined as a process by which individuals come to have differential access to criminal values through interaction with other people. The theory holds that criminal behavior is learned in the same way that law-abiding values are learned, and that this learning is accomplished in interactions with others and through the situational definitions we place on the values. The theory can be reduced to the notion that people become criminals because they associate with, and absorb, pro-criminal definitions. Differential association has attracted more attention, over a longer period of time, than any other criminological theory. According to some scholars, no single idea in modern criminology has had as much impact on how people reflect on crime as differential association. In the words of Cressey (1952), differential association is the most outstanding sociological formulation of a general theory of crime causation. Theoretical Background Edwin H. Sutherland was one of the sociologists from the famous Chicago school. In the 1920s and 1930s, the study of crime was almost like a tale of one city, Chicago. At the University of Chicago, sociologists devoted enormous attention to the study of crime. A general conclusion easily gleaned from these studies is their attribution of crime causation to external factors. Shaw and McKay, for example, attributed crime to social disorganization. Sutherland criticized Shaw and McKay’s concept of social disorganization as a variable for crime causation and replaced it with his own term, differential social organization. It was through this concept of differential social organization that Sutherland developed his differential association theory. It is also evident from Sutherland’s work that differential association theory was developed in an attempt to explain career criminal behavior. Sutherland first presented differential association theory in 1939 and revised it in 1947. The theory consists of nine principles, as outlined below: 1. Criminal behavior is learned; it is not inherited. With this principle, Sutherland rebuffed the argument that crime was the outcome of social disorganization. He also rejected the view that criminals were biologically different from noncriminals. Sutherland sought to explain with this principle that a person who has not been trained in criminal acts does not invent such acts, just as a child does not make courteous remarks unless they have been socialized to do so. 2. Criminal behavior is learned in interaction with others through communication. 
Sutherland suggested with this principle that criminal behavior is acquired through association with others, which also includes communication. The use of communication here refers to the sum total of interactions. This communication is verbal in many respects but also includes the communication of gestures, often described as nonverbal communication. 3. The principal part of learning criminal behavior occurs in intimate groups. Sutherland argued that only small, face-to-face groups influence behavior. For this reason, he placed enormous emphasis on peer and family groups as the most likely sources of initiation into delinquent values and activities. From this analysis, Sutherland discounted impersonal agencies of communication such as picture shows and newspapers as part of the process of initiation into criminality. 4. When criminal behavior is learned, the learning includes (a) techniques, which are sometimes complicated and sometimes very simple; and (b) the specific direction of motives and drives, rationalizations, and attitudes. By this principle, Sutherland explained the specific items that go into learning criminal behavior, including the skill (procedure/method), motivation (the driving force), justification (reasoning), and the general behavioral outlook that supports criminality. 5. The specific direction of motives and drives is learned from definitions of legal codes as favorable or unfavorable. Again, the learning here is influenced by other people (mainly those with whom one has intimate relations). If such people define the law as deserving/nondeserving to be observed, corresponding attitudes toward the law are learned. 6. A person becomes criminal because of an excess of definitions favorable to the violation of law over definitions unfavorable to the violation of law. This principle can be expressed as a ratio between definitions favorable and definitions unfavorable to the violation of criminal law. 
If the ratio is toward favorable definitions, the person will violate the law; if it is toward unfavorable definitions, violation of the law will be impeded. 7. Differential association (tendency toward criminality) varies in frequency, duration, priority, and intensity. This means that the earlier in one’s life, the longer the time, and the more intensely and frequently people are exposed to a set of attitudes about criminality, the more likely they are to become criminals themselves. 8. The process of learning criminal behavior involves the same (all the) mechanisms involved in any other learning. This means that the mechanisms for learning criminal behaviors are the same as those for noncriminal, law-abiding behaviors and even social skills. However, the content and motive of what is learned are entirely different in the two situations. 9. Both criminal and noncriminal behaviors are expressions of the same needs and values. Put differently, the goals of criminals and noncriminals are usually the same; what differs is the means they adopt to pursue them. For instance, thieves generally steal in order to secure money, but honest citizens also work for it. Important Scientific Research and Open Questions Since 1947, when differential association theory was reformulated, a number of attempts have been made to empirically test and/or apply the theory to different criminal behaviors (see Antwi Bosiakoh and Andoh 2010, pp. 200–201). Perhaps the earliest of these was Donald Cressey’s application and verification attempt in 1952. Since then, several studies have been undertaken for empirical clarification (Tittle et al. 1986), to test the theory as a hypothesis (Short 1960), and to verify the applicability of the theory in relation to specific criminal behaviors (Glaser 1960; Voss 1964; Antwi Bosiakoh and Andoh 2010). 
While many of these studies found empirical support for the main argument of differential association theory, that criminality is learned, and in most cases found the theory to be superior to alternative theories of crime causation and crime prediction, some of these studies have suggested modifications to the principles and even to the term “differential association.” Burgess and Akers (1966) provided a reformulated version of differential association theory in order to incorporate reinforcement theory. In the process, they reduced the nine principles of differential association theory to seven and renamed the theory the differential association-reinforcement theory of criminal behavior (Burgess and Akers 1966, cited in Antwi Bosiakoh and Andoh 2010). While arguing that there has been renewed interest in differential association, some scholars have also suggested differential identification as a modification to differential association, reconceptualizing Sutherland’s theory in role construction and role reconstruction imageries (Matthews 1968; Glaser 1956, cited in Antwi Bosiakoh and Andoh 2010). In 1960, Daniel Glaser acknowledged that differential association is superior to alternative theories of criminological prediction, but quickly suggested that differential anticipation theory would be more appropriate and adequate than differential association (Glaser 1960, p. 13). Cross-References ▶ Social Learning Theory ▶ Sutherland, Edwin H. (1883–1950) ▶ Value Learning References Antwi Bosiakoh, T., & Andoh, P. (2010). Differential association theory and juvenile delinquency in Ghana’s capital city – Accra: The case of Ghana Borstal Institute. International Journal of Sociology and Anthropology, 2(9), 198–205. Burgess, R. L., & Akers, R. L. (1966). Differential association-reinforcement theory of criminal behavior. Social Problems, 14(2), 128–147. Cressey, D. R. (1952). 
Application and verification of the differential association theory. The Journal of Criminal Law, Criminology, and Police Science, 43(1), 43–52. Glaser, D. (1956). Criminal theories and behavioral images. American Journal of Sociology, 61, 433–444. Glaser, D. (1960). Differential association and criminological prediction. Social Problems, 8(1), 6–14. Matthews, V. M. (1968). Differential identification: An empirical note. Social Problems, 15(3), 376–383. Short, J. F., Jr. (1960). Differential association as a hypothesis: Problems of empirical testing. Social Problems, 8(1), 14–25. Tittle, C. R., Burke, M. J., & Jackson, E. F. (1986). Modeling Sutherland’s theory of differential association: Toward an empirical clarification. Social Forces, 65(2), 405–432. Voss, H. L. (1964). Differential association and reported delinquent behavior: A replication. Social Problems, 12(1), 78–85. Differential Association to Learning Skills ▶ Differential Association Theory Differential Conditioning Classical or operant conditioning in which different stimuli are paired with different outcomes. A form of discrimination learning, since subjects learn to respond differently to the different stimuli. Differentiation The process by which a schema can be split into two new schemas. This happens when a schema attempts to assimilate a situation that requires considerable adjustment of the schema. The result is the generation of a new schema that is more appropriate to the new situation, while the old schema is refined (made more specific) so that it no longer assimilates the new situation. Cross-References ▶ Generalization Versus Discrimination Difficulty Level of Authentic Listening Input ▶ Effects of Task Comprehension Difficulty in Listening Digit ▶ Learning Numerical Symbols Digital Learning ▶ Interactive Learning Environments ▶ Neural Network Assistants for Learning Digital Literacy ▶ General Literacy in a Digital World Dilatory Behavior ▶ Procrastination and Learning Dimension of Movement ▶ Impaired Multidimensional Motor Sequence Learning Direct Memory Test ▶ Recall and the Effect of Repetition on Recall Directed Forgetting COLIN M. MACLEOD Department of Psychology, University of Waterloo, Waterloo, ON, Canada Synonyms Instructions to forget; Intentional forgetting; Motivated forgetting; Voluntary forgetting Definition Directed forgetting is an experimental procedure developed in the late 1960s as an analog to the normal updating of memory. Essentially, individuals are told that they can forget some of the information being presented to them. This is done in one of two ways. In the item method, an instruction to remember or to forget is given immediately after each presented item. In the list method, a single instruction is given halfway through the list of items either to forget or to continue remembering the first half of the list. Contrary to instruction, under both methods, memory for both to-be-remembered items (R items) and to-be-forgotten items (F items) is ultimately assessed. The standard finding is poorer memory for the F items than for the R items – the directed forgetting effect. Theoretical Background In the beginning, four quite intuitive ideas were proposed to explain how people intentionally forget specified information. The most obvious was the erasure hypothesis – that we essentially delete information that we are told to forget (see the first directed forgetting study by Muther 1965). Although this idea made some sense when applied to very small sets of information that could be held in or dropped from working (short-term) memory, it did not make sense when applied to larger sets in long-term memory. Moreover, empirical work quickly showed that F items were not gone from memory. Consequently, the erasure hypothesis was quickly discarded. 
Muther also considered what he called the partitioning hypothesis, which soon became better known as the set differentiation or selective search hypothesis. Here, the idea was that, at the time of study, the items were sorted into two bins corresponding to the instructions (i.e., an F bin and an R bin). Then, at the time of test, when an individual tried to retrieve the studied items, priority was given to searching in the R bin, which favored remembering the R items. In the early days of research on intentional forgetting, this retrieval-based account became one of the two leading explanations for the directed forgetting effect. The third account that was put forth early on was the inhibition or repression hypothesis. Here, the core idea was that F items were suppressed at the time of study – essentially, their activation in memory was reduced. Then, at the time of test, when the individual was trying to remember, the F items were less likely to be retrieved. The inhibition hypothesis clearly is related to the selective search hypothesis in that both involve operations carried out at the time of study that are then influential at the time of test. Yet the inhibition view fell quickly into disfavor and disappeared from the theoretical landscape for quite some time, with selective search becoming the dominant retrieval-based account. The fourth account focused on encoding – what happened at the time of study – and grew largely out of experiments that used the item method. This was the selective rehearsal account (see Bjork 1972). The argument was that each item was held in abeyance in working memory until the instruction applied to that item was presented; as little processing as possible was undertaken on an item prior to instruction. Then, if the instruction was to remember, active rehearsal of that R item was undertaken; if the instruction was to forget, that F item received no further processing. 
The selective rehearsal account and the selective search account became the principal theoretical combatants in the early days. In part, this may have stemmed from these two views fitting both the working memory and the long-term memory studies then ongoing in investigations of intentional forgetting. This was the state of theoretical affairs from the late 1960s until the late 1980s, although throughout this period the selective rehearsal account usually was seen as the preferred explanation. This preference likely rested on three factors: (1) that rehearsal was a more established process in the broader memory literature, and one that seemed more observable, (2) that most of the studies used the item method, for which the rehearsal account was optimally suited, and (3) that emphasis had shifted primarily to studies involving long-term memory. But then, coincident with the rise in studies using the list method, the inhibition hypothesis made a comeback (see Bjork 1989), displacing the selective search hypothesis as the primary alternative to the selective rehearsal hypothesis. Most recently, a new account has emerged. Like the inhibition hypothesis, the contextual change hypothesis (Sahakyan and Kelley 2002) is particularly well matched to the experiments using the list method. The idea is that context plays a key role in the directed forgetting effect as it is known to do in many memory phenomena – indeed, memory can be seen in general as exquisitely contextual. Under this view, a context is in place as the first half of the list of items is presented. Then an instruction to forget those items is delivered, which disrupts the initial context such that a new context is established. The remainder of the list is then presented and studied under this new context. So the initial F items are linked to one context and the subsequent R items are linked to another context. 
Because there is no context disruption as the individual segues from study to test, the R items have the advantage of a context match between study and test, whereas the F items have the disadvantage of a context mismatch. This discrepancy, it is argued, produces the directed forgetting effect. Important Scientific Research and Open Questions From the beginning of research on directed forgetting, there had been a key empirical mystery (see MacLeod 1998, for a review). This pertained to the type of test used to assess the Remember–Forget manipulation. Researchers had long reported that a recall test – where the individual must try to recover the items from memory unaided – always revealed a directed forgetting effect. But this regularity was absent on a recognition test – where the individual is shown some items that were studied and some that were not and is asked to indicate for each item whether it was in fact studied. On recognition tests, the directed forgetting effect was sometimes present and sometimes absent. Finally, in the late 1980s/early 1990s, a solution to the mystery was offered: Recognition showed a quite consistent directed forgetting effect when the item method was used, but the effect rarely appeared when the list method was used. This empirical resolution led to the proposal that the two methods were perhaps not just minor variations on each other. Basden et al. (1993) suggested that selective rehearsal was responsible for the directed forgetting effect under the item method but that inhibition was responsible for the effect under the list method. Selective rehearsal was seen as eminently possible when instructions were provided on an item-by-item basis, but as not possible when a quite large set of items preceded a single forget cue, as in the list method. 
The idea was that the long list prior to the instruction in the list method would result in F items already having been rehearsed, making it too late for selective rehearsal to operate successfully. Therefore, another factor would have to come into play to produce list method directed forgetting: The prime candidate was inhibition. It is important to note that the directed forgetting effect is not due to demand characteristics, where the argument might be made that people simply do not try to recall the F items, essentially doing what they think the experimenter wants them to do. In the item method, even when the instructions themselves cannot be remembered, there is still a directed forgetting effect. And enticing people to try to recall more F items, such as by offering incentives (e.g., money for F items but not for R items), does not diminish the effect. It would appear that memory is indeed not as good for F items as it is for R items. The directed forgetting effect appears to be limited to explicit memory tests such as recall and recognition, where remembering is done consciously. On implicit memory tests – such as completing partial words that were or were not studied, or reading aloud as quickly as possible words that were or were not studied – the evidence is quite consistent that there is no difference between F items and R items. As well, the effect disappears if the F items are meaningfully linked to the R items. Such results implicate conscious encoding and retrieval processes as the locus of the directed forgetting effect. There is now reasonable consensus that selective rehearsal during study underlies the directed forgetting effect when the item method is used (see MacLeod 1998). Debate centers on the best explanation for the effect when the list method is used. Are the F items inhibited or do they suffer from a contextual mismatch between study and test? 
Both mechanisms, instituted during the study phase, would make later retrieval of F items less successful than that of R items. (Indeed, it is even conceivable that a modified rehearsal account could explain list method directed forgetting: Given the smaller effect under the list method, it might be that there is selective rehearsal of only some of the items, such as just those few immediately preceding the mid-list instruction to forget.) In the end, “directed forgetting” may well be a misnomer. For the item method, it would appear that people simply do not learn the F items as well as the R items: Obeying instructions, they give F items less attention and less rehearsal, resulting in weaker learning. As for the list method, although its explanation is more debatable, both the context change account and the inhibition account rely at least in part on actions taking place during encoding that therefore influence learning. To the extent, then, that directed forgetting is an encoding effect under both methods, it might better be labeled “directed learning,” where the instructions result in more (R items) or less (F items) learning of the material being studied. Cross-References ▶ Episodic Learning ▶ Inhibition and Learning ▶ Intentional Learning ▶ Memory Consolidation and Reconsolidation ▶ Retention and Learning ▶ Selective Attention in Learning ▶ Verbal Learning References Basden, B. H., Basden, D. R., & Gargano, G. J. (1993). Directed forgetting in implicit and explicit memory tests: A comparison of methods. Journal of Experimental Psychology: Learning, Memory, and Cognition, 19, 603–616. Bjork, R. A. (1972). Theoretical implications of directed forgetting. In A. W. Melton & E. Martin (Eds.), Coding processes in human memory (pp. 217–235). Washington, DC: Winston. Bjork, R. A. (1989). Retrieval inhibition as an adaptive mechanism in human memory. In H. L. Roediger III & F. I. M. Craik (Eds.), Varieties of memory and consciousness: Essays in honour of Endel Tulving (pp. 309–330). Hillsdale: Lawrence Erlbaum. MacLeod, C. M. (1998). Directed forgetting. In J. M. Golding & C. M. MacLeod (Eds.), Intentional forgetting: Interdisciplinary approaches (pp. 1–57). Mahwah: Lawrence Erlbaum. Muther, W. S. (1965). Erasure or partitioning in short-term memory. Psychonomic Science, 3, 429–430. Sahakyan, L., & Kelley, C. M. (2002). A contextual change account of the directed forgetting effect. Journal of Experimental Psychology: Learning, Memory, and Cognition, 28, 1064–1072. Direction ▶ DICK Continuum in Organizational Learning Framework Disciplinary Identity ▶ Identity and Learning Disciplinary Literacy ▶ Content-Area Learning Discontinuities for Mental Models SUSANNE PREDIGER IEEM – Institute for Development and Research in Mathematics Education, TU Dortmund University, Dortmund, Germany Synonyms Cognitive dissonances; Incoherences as driving forces for concept development Definition Along with other types of cognitive structures (such as schemes), a mental model can be thought of as a specific representation in the human mind of an individual’s experience or insight. One substantial challenge and catalyst for constructing adequate mental models are discontinuities of conceptual models. A conceptual model (i.e., the target model being the prescriptive counterpart of a mental model) is defined to have a discontinuity when an extension of its scope of application necessarily implies changes in properties or meanings. These discontinuities in the subject-matter structure can offer obstacles and opportunities for students’ development of conceptual understanding by altering their mental models. 
Theoretical Background
Constructing adequate mental models and schemata is a major goal of subject-matter learning processes in many domains, for example, science education or mathematics education. Individuals construct new mental models in situations of accommodation, when an experience cannot be assimilated to existing schemata (see entry ▶ Mental Model or Schema Construction). Mental models and schemata in the individual cognitions of learners are said to be adequate if they are congruent with corresponding conceptual models in the subject-matter domain. This definition rests on a distinction between mental models (in a descriptive mode, as those being cognitively constructed by individuals) and conceptual models (in a prescriptive mode, as those intended to be constructed from an instructional perspective) (Seel 2003). One substantial catalyst and challenge for constructing adequate mental models (among others) is the discontinuity of conceptual models. This theoretical construct offers a subject-related background for locating and explaining conceptual challenges that learners encounter in longer-term subject-matter learning processes, as they are often described in conceptual change approaches (Prediger 2008). Typical discontinuities appear, for example in mathematics, when number domains are extended from the natural to the rational numbers: a fraction does not have a unique neighbor on the number line, whereas every natural number does. In the transition from natural to rational numbers, students have to restructure their order schema for numbers, build a mental model of the density property of fractions and decimals, and realize the limited scope of their former mental model: the idea of a unique neighbor holds only for natural numbers (Vosniadou and Verschaffel 2004).
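The neighbor discontinuity can be made concrete with a short sketch (my own toy illustration, not part of the original entry; it uses Python's standard fractions module): every natural number has a unique next neighbor, but between any two distinct fractions there is always another fraction, so no fraction has a "next" neighbor.

```python
from fractions import Fraction

# Natural numbers: every n has a unique next neighbor, n + 1.
n = 4
print(n + 1)  # 5

# Rational numbers are dense: between any two distinct fractions
# there is always another one, for example their midpoint.
def between(a, b):
    """Return a fraction strictly between a and b."""
    return (a + b) / 2

a, b = Fraction(1, 3), Fraction(1, 2)
m = between(a, b)
print(m)          # 5/12
print(a < m < b)  # True

# Repeating the step shows there can be no 'next' fraction after 1/3:
closer = between(a, m)   # 3/8, even closer to 1/3
print(a < closer < m)    # True
```

This is exactly the restructuring the entry describes: the successor schema that works for natural numbers has no counterpart once the density of the rationals is in play.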
In order to explain the sources of difficulties, researchers in domain-specific education research areas focus on epistemologically specific mental models, namely, those that concern the meaning of domain-specific phenomena or concepts (e.g., the interpretation of a mathematical concept or a biological phenomenon in real-world situations). They are important because the mental construction of adequate meanings is a precondition for dealing with domain-specific concepts adequately, flexibly, and with conceptual understanding (see entry ▶ Mathematical Learning). In mathematics education research, conceptual models concerning meaning have been conceptualized as "Grundvorstellungen" (vom Hofe 1998) or have simply been termed "models," defined in the epistemologically restricted way as "meaningful interpretation of a phenomenon or concept" (Fischbein 1989, p. 12).

Important Scientific Research and Open Questions
The impact of discontinuous conceptual models on students' thinking can be illustrated by an often documented prototypical example, namely, the multiplication of numbers. For natural numbers, multiplication always makes bigger, but when multiplication is extended to the rational numbers, this property no longer holds. This discontinuity may cause difficulties when students mathematize real-world situations. For example, when calculating the cost of 0.7 kg of potatoes at £1.50 per kg, many students choose the operation division, because for them, multiplication appears to make the 1.50 bigger, but the cost is supposed to be less (see Bell et al. 1981). In this case, the wrong order property is used as a non-adequate schema that guides the (non-adequate) operation choice. Researchers have suggested confronting students' wrong schemata with new examples and initiating processes of accommodation, but in many cases, the newly constructed mental models do not seem to be stable.
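The operation-choice error in Bell et al.'s potato example can be checked with a few lines of arithmetic (a minimal sketch; the variable names are mine):

```python
price_per_kg = 1.50   # pounds per kg
weight = 0.7          # kg of potatoes

# Correct mathematization: cost = price per kg x weight.
cost = price_per_kg * weight
print(f"{cost:.2f}")  # 1.05 -- less than 1.50, although we multiplied

# The schema "multiplication makes bigger" misleads students into dividing:
wrong = price_per_kg / weight
print(f"{wrong:.2f}")  # 2.14 -- division here yields a result larger than 1.50
```

Multiplying by a factor between 0 and 1 scales the price down, which is precisely the discontinuity the students' natural-number schema cannot accommodate.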
Other studies have shown that it is crucial to consider the discontinuity not on the level of order properties alone but on the level of meaning (Prediger 2008): It is the most important conceptual model of the meaning of multiplication, the repeated-addition model, that does not apply to rational numbers between 0 and 1. An adequate mental model for whether multiplication makes bigger or not should include a model for the meaning of multiplication, such as the interpretation of multiplying as scaling up (or down, for numbers smaller than 1). Students can construct these alternative interpretations of multiplication in suitable learning environments. The example shows that some cognitive difficulties in constructing adequate mental models can be traced back to discontinuities and continuities of meanings as crucial parts of the inherent structure of the subject matter. On the other hand, for investigating processes of mental model construction, discontinuities of conceptual models offer instructive subjects of research. Whereas some discontinuities of mathematical properties and propositions have been subject to research in the conceptual change tradition (see Vosniadou and Verschaffel 2004), discontinuities of meanings have only rarely been the subject of empirical research (one example is the case of the meaning of the equal sign in the transition from arithmetic to algebra). In science education, the conceptual change approach has been applied more consistently to discontinuities between everyday and scientific conceptions, whereas changes within the physicists' or chemists' theoretical frameworks have – so far – gained less attention. That is why many important research questions in all these areas are still open, for example: How do individual constructions of mental models proceed on the microlevel when students encounter discontinuities?
Which conditions in a learning environment and which instructional strategies facilitate or hinder successful constructions? How do constructed models interact with each other, and which mental resources do students draw on when constructing new mental models? These questions can be addressed on a general level, but the answers will certainly vary across subject domains. It is therefore an important task for the domain-specific learning sciences to investigate them for different scientific subjects.

Cross-References
▶ Conceptual Change
▶ Mathematical Learning
▶ Mental Model

References
Bell, A., Swan, M., & Taylor, G. M. (1981). Choice of operation in verbal problems with decimal numbers. Educational Studies in Mathematics, 12, 399–420.
Fischbein, E. (1989). Tacit models and mathematical reasoning. For the Learning of Mathematics, 9(2), 9–14.
Prediger, S. (2008). The relevance of didactic categories for analysing obstacles in conceptual change: Revisiting the case of multiplication of fractions. Learning and Instruction, 18(1), 3–17.
Seel, N. M. (2003). Model-centered learning and instruction. Journal of Technology, Instruction, Cognition and Learning, 1(1), 59–85.
vom Hofe, R. (1998). On the generation of basic ideas and individual images: Normative, descriptive and constructive aspects. In A. Sierpinska & J. Kilpatrick (Eds.), Mathematics education as a research domain: A search for identity. An ICMI study (pp. 317–331). Dordrecht: Kluwer.
Vosniadou, S., & Verschaffel, L. (2004). The conceptual change approach to mathematics learning and teaching. Learning and Instruction, 14(5), 445–548.

Discourse
ANNA SFARD
Department of Mathematics Education, University of Haifa, Haifa, Israel

Synonyms
Communication (type of); Conversation; Talk

Definition
The English word discourse comes from the Latin discursus, a derivative of the verb discurrere, to run about.
Initially, the term meant the act, or faculty, of producing a logical argument in an orderly fashion, and was therefore almost equivalent to the English word reasoning. In modern everyday English, discourse is synonymous with verbal interaction or simply conversation. In modern linguistics, it signifies the unit of analysis that goes beyond the sentence. In the social sciences, the use of the word is polysemic. In some contexts, it signifies a segment of connected speech or written text produced in the course of verbal interaction. In other instances, it refers to the activity of communicating rather than to its products. In still other contexts, it is a higher-order term that designates a category of specific instances of communicational acts and products, unified by some common features, such as specialized vocabulary, distinct patterns of communicational actions, the type of statements produced by participants, etc. In this case, therefore, the word discourse is understood as referring to a type of communication rather than any specific instance of communicating. One can speak, for example, about mathematical discourse, a discourse of physics, of history, or of fine arts, as well as about discourses of different political factions, social classes, and professional or ethnic groups. Although originally associated mainly with spoken interactions, the word discourse is increasingly used in the broader context of communication at large, not necessarily verbal, vocal, or synchronous.

Theoretical Background
In recent decades, there has been a significant increase in the amount of explicit attention paid by social scientists to human communication. The change is not purely quantitative. Rather, the reorientation toward discourse indicates a revolutionary shift not only in research methods, but also in the epistemological underpinnings of contemporary social thought.
It is now widely agreed among social scientists that understanding human communication, its inner workings, and its interaction with other human activities is crucial to our grasp of all uniquely human phenomena. In times of unconstrained connectivity, when people spend an unprecedented proportion of their waking time interacting with others in an unprecedented multiplicity of ways, the wide consensus about the centrality of communication is not surprising. The origins of the discursive turn in the social sciences, however, go back farther than the latest technological advances, to a number of interrelated developments in contemporary scientific thought. Probably the most significant among these developments was the postmodern rejection of the notion of absolute truth, the attainment of which was the declared aim of positivistic science. Rather than seeing the products of the scientist's work as originating in nature itself and arising in the direct interaction between the researcher's mind and reality, postmodern thinkers began to speak about knowledge construction as a "conversation of mankind" (Rorty 1979, p. 389), an interpersonal process of telling stories about the world, accompanied by the constant effort to refine the forms of communication that made these stories possible. Many influences came together in the metaphor of knowledge-as-conversation, among them Ludwig Wittgenstein's (1889–1951) criticism of positivist science, grounded in his observations on how our "language games" often lead us astray; Alfred Schutz's (1899–1959) emphasis on the study of social interactions as a basis for understanding all human experience; Thomas Kuhn's (1922–1996) vision of science as a succession of paradigms, none of which can aspire to the title of the "ultimate one"; and Michel Foucault's (1926–1984) work on discourse as the principal arena of all those phenomena that give meaning to the term social.
The ontological and epistemological upheaval caused by all these developments had numerous entailments. For many social researchers, the revolutionized vision of science brought a new message about the status and methods of their study. If the products of the researcher's investigations are narratives about the world, then by virtue of their having human authors and addressees, these narratives can make no claims to "full" objectivity. As cogent as they may appear, they will always remain contestable and subject to revision. Moreover, since the protagonists of the researcher's stories are themselves active storytellers, the researcher needs to ask about the status of his or her own narratives versus those offered by the study participants. Thus, the social scientist who takes discourses as the object of his or her research does not ask whether informants' narratives are "objectively true." Rather, the investigator inquires about what makes people say what they say, how those who speak convince others to endorse their stories, and how the things said affect interlocutors' lives. The search for answers is supported by recent developments in linguistics, where the focus has shifted from language as an abstract system, with the single sentence serving as the object of study, to the study of language in use, with entire discursive episodes (sequences of utterances) constituting the units of analysis. This transformation is particularly visible in relatively young but burgeoning domains of study such as ▶ systemic functional linguistics, ▶ discourse analysis, and ▶ conversation analysis. As a result of this foundational shift in social studies, discourse began to occupy center stage in the sciences of learning as well.
In spite, however, of the wide consensus among researchers about discourses as the primary medium for studying human cognitive growth, the central question of the relation between communicating and learning, and even more fundamentally, between communication and thinking, is rarely addressed by researchers in a direct manner. At a closer look, a variety of answers seem to underlie diverse research efforts. These answers span a wide range of possibilities, delineated by two extreme doctrines. On one end of the spectrum, there is the conviction that thought and communication, although interrelated and often concomitant, are distinct types of human activity, with discourse playing the secondary role of the "carrier" of one's thoughts. The other extreme is marked by Wittgenstein's denial of the primacy of thought over speech and by his rejection of the idea of "pure thought" that would preserve its identity through a variety of verbal and nonverbal expressions. This radical position is in concert with the work of Lev Vygotsky (1896–1934), who illustrated the inseparability of thought (or meaning) and speech by saying that studying thought independently of the study of words is comparable to investigating the properties of water by focusing separately on hydrogen and on oxygen. In spite of the fact that the rejection of the thinking–communicating dichotomy has been heralded by some writers as the beginning of the "second cognitive revolution" (Harré and Gillett 1995), the moderate discursive approaches have not disappeared from the current sciences of learning, and they seem to coexist peacefully with the more radical ones. Treating discourse as a "window" onto the contents of the human mind is in tune with the foundational tenets of traditional cognitive psychology and with its sustained Cartesian belief in the ontological uniqueness of mental phenomena.
By equating thinking with communicating and thus disposing of the Cartesian split between body and mind, adherents of the radical position give the sciences of learning a clear sociocultural slant: they recognize the fact that even the most private of human activities, such as thinking or feeling, can be understood only if conceived as part of wider collective activities. Here, learning mathematics or physics becomes one's attempt to become a participant in the historically developed forms of discourse known as mathematics or physics. As made clear by the word participation, and as famously stated by Vygotsky, learning originates on the "social plane" rather than directly in the world. The diversity of the foundational positions notwithstanding, there seems to be a general consensus that active engagement in conversation with others is a necessary condition for learning.

Important Scientific Research and Open Questions
Almost any question about learning can be recast as a question about discourse. According to the perspective adopted and the aspects considered, one can probably sort all the existing discourse-oriented studies of learning into three thematic strands. The first two of these distinct lines of research are concerned with different features of the discourse under investigation and can thus be called intra-discursive or inward looking. The third one deals with the question of what happens between discourses or, more precisely, how interdiscursive relations impact learning. The first intra-discursively oriented strand of research on learning focuses on learning–teaching interactions, with its main interest in the impact of these interactions on the course and outcomes of learning.
Seminal events that initiated this type of research include, among others, the studies that brought to the fore the ubiquity of the ▶ Initiation-Response-Evaluation (IRE) sequence in traditional classrooms (see, e.g., Mehan 1979) and those that focused on teachers' discursive routines, such as ▶ revoicing or scaffolding. Today, when inquiry learning, collaborative learning, computer-supported collaborative learning, and other conversation-intensive pedagogies have become increasingly popular, one of the main questions asked by researchers is what features of small-group and whole-class interactions make these interactions conducive to high-quality learning. Whereas there is no doubt about the theoretical and practical importance of this strand of research, some critics warn against the tendency of such studies to be too generic, which is what happens when findings regarding patterns of learning–teaching interactions are presented as if they were independent of the topic learned in classrooms. The second intra-discursively oriented line of research on learning inquires about the development of discourses. For those who equate thinking with communicating, asking about the development of, say, one's mathematical discourse is tantamount to asking about this person's learning of mathematics. This time, the focus is on the uniquely mathematical features of the discourse and on these features' gradual evolution. Comparable in its aims to research conducted within the tradition of conceptual change, this relatively new type of study on learning is made distinct by its foundational assumption of the unity of thinking and communicating and its use of methods of discourse analysis. It owes its growing popularity, among others, to the pioneering contributions of Jay Lemke (1993), whose work on learning science was supported by the analytic techniques of systemic functional linguistics.
One of the main tasks yet to be undertaken is to develop subject-specific methods of discourse analysis, tailored to the distinct needs of the discourse under study. Finally, the inter-discursively oriented studies inquire about interactions between discourses and their impact on learning. This type of research is grounded in the recognition of the fact that one's access to a particular discourse, say mathematics or science, may be supported or hindered by other discourses. Of particular significance among these learning-shaping forms of communication are those that carry specific cultural norms and values or distinct ideological messages. One of the earliest but still influential exemplars of this kind of research is the work of Basil Bernstein (1971). Studies belonging to this tradition are often concerned with issues of power, oppression, equity, social justice, and race, and the majority of researchers whom this research brings together do not hesitate to admit their ideological commitment openly. The notion of identity is often used here as the conceptual device with which to describe the way cultural, political, and historical narratives impinge upon individual learning (see, e.g., Gee 2001). Methods of ▶ critical discourse analysis are particularly useful in this kind of study. As different as these three lines of research on learning may be in terms of their focus and goals, their methods have some important features in common. In all three cases, the basic type of data is the carefully transcribed communicational event. A number of widely shared principles guide the processes of collection, documentation, and analysis of such data.
Above all, researchers need to keep in mind that different people may use the same linguistic means differently, and that in order to be able to interpret another person's communicational actions, analysts have to alternate between being insiders and outsiders to their own discourse: they must sometimes look "through" a word to what they usually mean by it, and they must also be able to ignore the word's familiar use and try to consider alternative interpretations. For the same reason, events under study have to be recorded and documented in their entirety, with transcriptions being as accurate and complete records of participants' verbal and nonverbal actions as possible. Finally, to be able to generalize their findings in a cogent way, researchers should try to support qualitative discourse analysis with quantitative data regarding the relative frequencies of different discursive phenomena. Because of these and similar requirements, discourse-oriented research is much more demanding and time-consuming than many other types of studies. If a researcher is still ready to engage in this kind of investigation, it is because of its unique payoffs. True, the task of the discourse analyst is not much different from that of any person trying to take part in verbal or nonverbal exchanges: in both cases, one has to make sense of other people's discursive actions. And yet, the admittedly demanding methods of discourse analysis, when at their best, allow the analyst to see what inevitably escapes one's attention in real-time conversations. The resulting picture of learning is characterized by high resolution: one can now see as different things or situations that previously seemed identical, and one is able to perceive logic in discursive actions that in real-time exchange appeared nonsensical.
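The recommendation to support qualitative analysis with frequency counts might be sketched as follows (a toy illustration of my own: the coded transcript and the move labels are invented, echoing the IRE and revoicing categories discussed earlier):

```python
from collections import Counter

# Hypothetical hand-coded transcript: each turn tagged with a discursive move.
coded_turns = [
    ("Teacher", "initiation"), ("Student", "response"), ("Teacher", "evaluation"),
    ("Teacher", "initiation"), ("Student", "response"), ("Teacher", "revoicing"),
    ("Student", "response"), ("Teacher", "evaluation"),
]

# Relative frequencies of the discursive phenomena across the episode.
moves = Counter(move for _, move in coded_turns)
total = len(coded_turns)
for move, count in moves.most_common():
    print(f"{move:<11} {count}/{total}")
```

Counts like these do not replace the interpretive work; they merely let the analyst state in a cogent way how often a given pattern actually occurs in the data.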
Moreover, discourse-oriented research brings the promise of a unifying framework, free from dichotomous divides such as thinking versus communicating, form versus content, cognition versus affect, or individual versus social. In this framework, all aspects of learning would be seen as members of a single ontological category, to be studied with one integrated system of tools. The task of constructing such a framework is likely to preoccupy discourse-oriented investigators of learning in years to come.

Cross-References
▶ Collaborative Learning
▶ Computer Supported Collaborative Learning
▶ Discourse and the Production of Knowledge
▶ Discourse Processes and Learning
▶ Inquiry Learning

References
Bernstein, B. (1971). Class, codes, and control. New York: Schocken Books.
Gee, J. P. (2001). Identity as an analytic lens for research in education. Review of Research in Education, 25, 99–125.
Harré, R., & Gillett, G. (1995). The discursive mind. Thousand Oaks: Sage.
Lemke, J. L. (1993). Talking science: Language, learning, and values. Norwood: Ablex.
Mehan, H. (1979). Learning lessons: Social organization in the classroom. Cambridge, MA: Harvard University Press.
Rorty, R. (1979). Philosophy and the mirror of nature. Princeton: Princeton University Press.

Discourse Analysis
Also known as DA or discourse studies, a generic name for different approaches to analyzing written, spoken, or signed communication. DA focuses on speech units larger than the sentence and takes into account the contexts in which discourse occurs.

Discourse and the Production of Knowledge
TEUN A. VAN DIJK
Department of Translation and Language Sciences, Pompeu Fabra University, Barcelona, Spain

Synonyms
Beliefs; Communication; Discourse; Discourse processing; Knowledge; Talk; Text

Definitions
Social knowledge is here defined as the shared, justified beliefs held by the members of an (epistemic) community. Discourse is variously defined as a communicative event, a form of interaction, and a situated unit of language use.

Theoretical Background
Introduction
On both discourse and knowledge there is a vast amount of research since classical rhetoric and epistemology. Yet there is as yet no single monograph that explores the obvious insight of the fundamental relationship between these two central notions of the humanities and social sciences, despite the fact that we acquire most knowledge by text and talk, and that in order to produce and understand discourse language users need vast amounts of knowledge. This article summarizes some of the current theoretical and empirical studies that have contributed to this insight, especially in contemporary Discourse Studies and Cognitive Science.

Discourse Studies
Since the 1960s, the cross-discipline of Discourse Studies has vastly extended our understanding of text and talk in all disciplines of the humanities and social sciences, beyond psycholinguistics as well as the traditional, structural, and generative grammars of isolated sentences. Discourse today is analyzed as a complex, multimodal object, as a form of social interaction, and as a communicative event in its sociocultural context, managed by socially shared underlying cognitive strategies and representations – some of which are dealt with in this article (Schiffrin et al. 2001; Van Dijk 2011).

The Theory of Knowledge
Classical as well as much of modern epistemology fundamentally defines (declarative) knowledge as justified true belief, with many variations as to the nature and conditions of justification (among a vast number of books in epistemology, see, e.g., Bernecker and Dretske 2000).
In this article, our approach to the theory of knowledge will be more natural and pragmatic, namely, a multidisciplinary account of the cognitive, social, and cultural properties and functions of the shared beliefs of an (epistemic) community, justified by the variable (epistemic) standards or criteria of that community. This approach implies that knowledge is both contextual and relative: What is assumed to be knowledge now by the members of an epistemic community may be seen as mere or false belief, or as superstition or prejudice, by members of another community, or by those of the same community later. As a practical test, we assume that beliefs count as knowledge of a community if they are presupposed and taken for granted in the social practices, and hence in the public discourse, of the community. Here we find a first and fundamental relationship between discourse and knowledge. The psychological study of knowledge, since the cognitive revolution of the 1960s and 1970s, analyzed knowledge as organized networks of concepts and categories in semantic memory, as part of Long-Term Memory, for instance, in terms of schemas, scripts, and prototypes (for a review, see Wilkes 1997). It did so largely in isolation from the obvious social psychological insight that most knowledge is not acquired and used by isolated individuals, but shared by, or distributed over, the minds of the members of a community. Under the influence of the emerging neurosciences in the 1990s, psychology today is developing new insights into knowledge defined as an embodied, multimodal system "grounded" in various brain regions, such as those processing vision, movement, and emotion, involved in the acquisition and uses of knowledge in the experiences of everyday life (Barsalou 2008, among many other papers).
Discourse Processing
It is within this broad, multidisciplinary framework that we need to account for the cognitive production and comprehension of discourse, and for the role of knowledge both as a condition and as a consequence of these processes (for reviews and introductions to discourse processing, see, e.g., Graesser et al. 1997, 2003; Kintsch 1998; McNamara and Magliano 2009; Van Dijk and Kintsch 1983).

Discourse Production and Knowledge Management
Given the multimodal and multilevel nature of discourse, the production of text or talk is a situated social practice organized by semiotic (phonological, visual, etc.), syntactic, semantic, pragmatic, and interactional structures, based on various kinds of mental representations and organized by cognitive strategies that make sure that the discourse is understandable, well-formed, meaningful, appropriate, and efficient in its communicative situation (despite the vast literature on discourse processing, there are hardly any specialized monographs focusing on the production of discourse). At all these levels of discourse production, first of all, socially shared knowledge of the language, consisting of the lexicon, the grammar, and the rules of discourse, interaction, and context, obviously plays a central role. At the same time, language users need to activate and apply their knowledge of the world, that is, their general, socially shared knowledge about the objects, people, actions, events, or situations talked or written about (for references, see below). Given the shared nature of social knowledge of the world, as Common Ground (Clark 1996), language users need not express all the information in discourse that they assume the recipients can infer from the knowledge they have in common with the speaker or writer. In other words, discourse is essentially incomplete, because many of the propositions that define its local and global meaning and coherence are left implicit in the process of production.
Despite the vast amount of knowledge that language users of the same community have in common, there are obviously personal and social differences in the knowledgeability or expertise of individual language users. Hence, speakers and authors need to contextually adapt this knowledge management during discourse production to their assumptions about the knowledge of the recipients, or the lack of knowledge of new members of the epistemic community (children, students, foreigners, etc.), as is also the case in the popularization of science. For didactic, persuasive, or emotional reasons, speakers may of course repeat some information they know recipients might or should already have. And conversely, recipients may be manipulated or otherwise abused if the speaker presupposes knowledge they do not have – knowledge that is nevertheless indirectly taken for granted, even when the beliefs are in fact false. Further dependent on many contextually variable strategies and constraints, the general pragmatic-epistemic rule of discourse production is that speakers or writers assert propositions they assume recipients do not yet know and cannot infer themselves from their own knowledge. This is at the same time the basic condition of (new) knowledge production as well as of knowledge distribution and reproduction in the community.

Context Models
Language users are only able to epistemically adapt their text or talk to the recipients if they know what the recipients know. Such assumptions are part of their subjective representation of the recipients and other relevant aspects of the communicative situation, called their context model, stored in episodic memory, part of Long-Term Memory (Van Dijk 2008, 2009). A dynamically changing context model controls the many variable aspects of discourse that make sure the discourse (fragment) is communicatively appropriate, such as its genre, style, register, and topics.
Such a context model consists of a relatively simple schema with categories such as Setting (Place, Time), the current Social Action, and the Participants (with their current social identities, roles, and relations, and their current cognitive properties, such as their goals and knowledge, as well as ideologies if they speak as group members). Current multimodal knowledge theories suggest that these context models, defined as models of communicative experience like any other experience, may well have a multimodal nature, featuring auditory aspects of speech (such as a special tone, stress, or intonation) or of the environment (e.g., noise), visual information about participants and the setting, the body movements of interaction (gestures, position), as well as opinions and emotions about the participants, the topics of discourse, or the whole speech event (Barsalou 2008). Central in the context model is a knowledge device that dynamically and ongoingly hypothesizes what recipients already know or may infer from their knowledge, so that the speaker can strategically adapt the discourse to this assumed knowledge of the recipients, by being more or less explicit or implicit, and manage what information must be asserted and what information may be presupposed, both locally, within or between sentences, and globally, in the discourse as a whole. One powerful strategy is to assume that recipients who are members of the same knowledge community have the same general knowledge as the speaker or author, except for new knowledge the speaker or author has recently acquired by reliable observation, sources (speakers, media), or inference.

Discourse Comprehension

Discourse comprehension has many properties in common with discourse production, and is a process based on (more or less) the same knowledge of the language and the world as used and applied by speakers and writers (Kintsch 1998; Britton and Graesser 1996).
The obvious difference is that speakers and writers in principle know what they mean and want to convey, and need to find an appropriate discursive expression for these meanings, whereas recipients start with this discursive expression and need to figure out what the speaker or writer means. Recipients have their own context model of the communicative situation, with their own information and opinions about the setting, the participants (and their identities, roles, relations, goals, knowledge, etc.), and the ongoing social action. Discrepancies with the context model of the speaker or writer, for instance about the goal of the communicative interaction, may thus lead to communicative conflicts. Especially relevant for the topic of this article is the role of knowledge in the construction of the meaning of the discourse (see Kintsch 1998; Van Dijk and Kintsch 1983). Since speakers or writers assume recipients are able to infer much information from their (shared) social knowledge, this is precisely what recipients (have to) do: Together with the information derived from what is explicitly expressed in the discourse, they must continually generate at least those inferences from their knowledge that are needed to produce a meaningful and coherent semantic interpretation of the discourse (Graesser and Bower 1990). Typically, they may thus generate plausible causes or consequences of events or reasons for actions, or fill in many details of socioculturally well-known episodes, such as going to work or to school, shopping, eating in restaurants, birthday parties, or demonstrations, among many others. Obviously, the nature and amount of these inferences crucially depend on the abilities (literacy, etc.), knowledge, goals, or tasks of the recipients (for details, see, e.g., Graesser and Bower 1990).
Situation Models

This knowledge-based process of discourse comprehension appears to go far beyond the mere interpretation of words, clauses, or sentences, and even beyond the construction of locally and globally coherent discourse meanings. Indeed, the goal of discourse comprehension is not merely to understand the discourse itself, but rather what the discourse is about: what it tells us about some event or situation in the world. It is therefore assumed that besides construing a semantic representation of the discourse (its intension), language users also construe a subjective, multimodal mental model of the events, situations, or episodes referred to or spoken about (its extension). In other words, to understand a discourse means to be able to construe a mental model for it. This model may feature the visual, auditory, sensorimotor, emotional, and other modal aspects that are associated with the way recipients imagine or simulate the event talked or written about (instead of mental models, Barsalou (2008) speaks of simulations to refer to situated comprehension and experiences). As is the case for (pragmatic) context models and other models of personal experience, these (semantic) models of events or situations are also stored in Episodic Memory (for details on mental models, see Johnson-Laird 1983; Van Dijk and Kintsch 1983; Van Oostendorp and Goldman 1999). As suggested, general sociocultural knowledge plays a central role in the construction of this mental model, together with the (new) information of the discourse, and possibly with information derived from old mental models (previous experiences, previous discourses), for instance by supplying missing inferences about conditions, consequences, participants, details, and other plausible elements of the situation.
Note that the way knowledge is (partly) activated and applied in the construction or updating of such models of the event or situation referred to is controlled by information in the pragmatic context model. In other words, different recipients may interpret the same discourse in different ways by constructing different (semantic) situation models. And conversely, for the same contextual reasons, different readers may also acquire different (new) knowledge from the same discourse, depending on their previous knowledge, interest, motivation, and current goals.

Knowledge Production

Crucial at this point is not only that shared knowledge is strategically (partly) activated to construe semantic representations and mental models, but also that knowledge may be transformed (formed, changed) by discourse. Indeed, information that is not implied or presupposed by text or talk may be used to build and socially distribute mental models of unknown events, as is the case in everyday personal storytelling as well as in news reports. When repeated, such discourses and their mental models may be generalized and abstracted from so as to form more general knowledge about this type of event. For instance, news about specific terrorist attacks may be used to build knowledge and attitudes about terrorism. This is a special (discursive) way of learning from personal experience, and a condition for the social reproduction of knowledge as well as of other forms of social cognition (attitudes, ideologies, norms, values) in society. Obviously, besides such model-based (i.e., experience-based) acquisition of knowledge, new knowledge may also be produced more directly, as is the case in many forms of pedagogical or expository discourse (Britton and Black 1985), for instance by generic descriptions of events, objects, or phenomena; by definitions of terms; by the use of metaphors; by schemas; etc.
As is the case for all discourse, such discourse presupposes the general, shared knowledge of the community, but strategically expands this knowledge by various multimodal strategies of knowledge transformation. These may include information about (a) categorical relationships (such as higher-level categories or lower-level subcategories), (b) visual or other perceptual appearances, (c) parts or components, (d) relationships with other objects or phenomena, (e) functions or uses, and so on.

Important Scientific Research

Most of the theoretical issues of discourse processing and the role of knowledge discussed above have been shown to be empirically warranted by (mostly experimental) research. Thus, many studies have shown that discourse comprehension crucially depends on the activation and application of what is usually called "prior knowledge," although such knowledge is not usually precisely defined (McNamara and Kintsch 1996; see also below). Thus, perhaps trivially, people who know more about a domain or topic usually better understand a discourse about that topic or domain, if only because they are able to derive more inferences and hence to construe more detailed mental models of specific events or new schemas of new, generic knowledge. But, as is generally the case both outside and within the laboratory, actual knowledge acquisition depends on the structures and strategies of text and context. For instance, because of their greater knowledge, experts may pay less attention to the specific details of text or talk and hence may hardly do better than nonexperts on specific tasks, such as recall or recognition. Similarly, if texts are very explicit they may be less interesting for experts, who may then pay less attention and again recall fewer details than nonexperts.
And in all cases, it depends on the tasks and hence the goals of the participants: Someone who must correct the style of or translate a news report may well learn less about the news event than a reader or political activist who is specifically motivated and interested in news about a specific topic or domain. Among the vast number of studies on the role of knowledge in discourse comprehension, and hence on learning from text, here is a summary of some findings in addition to those mentioned above (for details, and further references for each result, see especially the chapters in Graesser et al. 2003).

Context Variables
● People in general learn more from a text when they have more prior knowledge about the domain or topic of the text (among many studies, see also Kendeou and Van den Broek 2007).
● People in general have a memory bias for information with which they agree. However, people with more knowledge about an issue are better able to reproduce both sides of a controversial argument.
● Experts and nonexperts (high- and low-knowledge subjects) learn differently from texts.
● People learn more from a text when they do so interactively, e.g., by discussing the text. More generally, people learn more when they explicitly (must) think about the way they learn from the text (metacognition).

Text Variables
● More cohesive, more coherent, more explicit, and better organized texts (e.g., with summaries, headers, and conclusions) generally favor comprehension and hence knowledge acquisition.
● Inaccurate prior knowledge needs to be explicitly rejected; it is less efficient to simply present correct knowledge.
● Images may help understanding and learning from text.

Combined Text and Context Variables
● In general, people learn more from cohesive, coherent, and well-organized texts, especially if they are less-skilled readers, but the interaction between text structure, prior knowledge, and reading ability is more complex than that.
Unfortunately, most experimental work in the laboratory focuses on "learning from text" in the narrow sense of what (new) information can be recalled, recognized, reproduced, or applied in specific ad hoc laboratory tasks (see also Kintsch 1991, 1998). Socially shared knowledge, however, should be defined in broader terms, and at least involves the relatively long-term or even permanent transformation of our socioculturally shared knowledge as members of epistemic communities. Outside educational situations (classrooms, exams, etc.), few controlled experiments offer insight into these long-term constructions and transformations of our knowledge. Most likely, such new, socioculturally shared knowledge is acquired and integrated within the knowledge system if it is repeatedly situationally relevant, namely if it is often presupposed for the understanding of public discourse (as is the case for our general knowledge about computers, the Internet, and DNA, for instance) and if it is taken for granted in other social practices. Most experimental studies on the role of knowledge in discourse production and comprehension, or on the acquisition of (new) knowledge from discourse, barely reflect on the nature, structure, and organization of knowledge in memory, and on how such knowledge is changed. In order to examine how exactly people acquire knowledge from discourse, we need to know much more about how the structures of discourse are related to the structures of knowledge, as well as about the many context variables that affect this relationship in actual learning and in the use and reproduction of knowledge in society.
Cross-References
▶ Cognitive Models of Learning
▶ Discourse
▶ Discourse Processes and Learning
▶ Knowledge Acquisition
▶ Knowledge and Learning in Natural Language
▶ Knowledge Integration
▶ Knowledge Representation
▶ Learning from Text
▶ Memory Dynamics
▶ Mental Models
▶ Mental Models in Discourse Processing
▶ Naturalistic Epistemology

References
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59, 617–645.
Bernecker, S., & Dretske, F. I. (Eds.). (2000). Knowledge: Readings in contemporary epistemology. Oxford: Oxford University Press.
Britton, B. K., & Black, J. B. (Eds.). (1985). Understanding expository text: A theoretical and practical handbook for analyzing explanatory text. Hillsdale: Lawrence Erlbaum Associates.
Britton, B. K., & Graesser, A. C. (Eds.). (1996). Models of understanding text. Mahwah: Lawrence Erlbaum Associates.
Clark, H. H. (1996). Using language. Cambridge: Cambridge University Press.
Graesser, A. C., & Bower, G. H. (Eds.). (1990). Inferences and text comprehension. The psychology of learning and motivation (Vol. 25). New York: Academic Press.
Graesser, A. C., Millis, K. K., & Zwaan, R. A. (1997). Discourse comprehension. Annual Review of Psychology, 48, 163–189.
Graesser, A. C., Gernsbacher, M. A., & Goldman, S. R. (Eds.). (2003). Handbook of discourse processes. Mahwah: Lawrence Erlbaum.
Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.
Kendeou, P., & van den Broek, P. (2007). The effects of prior knowledge and text structure on comprehension processes during reading of scientific texts. Memory & Cognition, 35(7), 1567–1577.
Kintsch, W. (1991). The role of knowledge in discourse comprehension: A construction-integration model. In G. Denhiere & J. P. Rossi (Eds.), Text and text processing (pp. 107–153). Oxford: North-Holland.
Kintsch, W. (1998). Comprehension.
A paradigm for cognition. Cambridge: Cambridge University Press.
McNamara, D. S., & Kintsch, W. (1996). Learning from texts: Effects of prior knowledge and text coherence. Discourse Processes, 22(3), 247–288.
McNamara, D. S., & Magliano, J. (2009). Toward a comprehensive model of comprehension. Psychology of Learning and Motivation, 51, 297–384.
Schiffrin, D., Tannen, D., & Hamilton, H. E. (Eds.). (2001). The handbook of discourse analysis. Malden: Blackwell Publishers.
Van Dijk, T. A. (2008). Discourse and context: A socio-cognitive approach. Cambridge: Cambridge University Press.
Van Dijk, T. A. (2009). Society and discourse: How context controls text and talk. Cambridge: Cambridge University Press.
Van Dijk, T. A. (Ed.). (2011). Discourse studies: A multidisciplinary introduction (new, one-volume ed.). London: Sage.
Van Dijk, T. A., & Kintsch, W. (1983). Strategies of discourse comprehension. New York/Toronto: Academic Press.
Van Oostendorp, H., & Goldman, S. R. (Eds.). (1999). The construction of mental representations during reading. Mahwah: Lawrence Erlbaum.
Wilkes, A. L. (1997). Knowledge in minds: Individual and collective processes in cognition. Hove: Psychology Press.

Discourse in Asynchronous Learning Networks

ALLAN JEONG
Educational Psychology & Learning Systems, Florida State University, Tallahassee, FL, USA

Synonyms
Computer-mediated communication

Definition
Discourse – communication of thought via written text and speech. In the field of linguistics, discourse is any unit of connected speech or writing longer than a sentence.
Asynchronous learning – a network of people engaged in peer-to-peer interaction and information sharing via online resources (e.g., email, electronic mailing lists, threaded discussion boards, blogs, wikis, Twitter, document sharing systems) that overcome the constraints of time and place.
Network – a collection of electronic tools (e.g., email, discussion boards, text messaging, video conferencing) and devices (e.g., computers, cell phones, mobile devices) interconnected by multiple communication channels (e.g., text, voice, visual) used to facilitate information sharing and communication.
Discourse in asynchronous learning networks – the communicative behaviors exhibited in peer-to-peer interactions observed in electronic media.

Theoretical Background

The earliest form of asynchronous learning was established with the introduction of distance education and correspondence courses in the early 1800s and with increased access to postal mail. Although postal mail enabled learners to overcome the constraints of time and place, asynchronous learning was primarily a solitary activity in which communication with instructors and other learners was limited by the cost and time delays associated with postal mail. With limited opportunities for learners to engage in discourse with instructors and peers, the instructional content and activities were by necessity carefully developed and determined by the course instructor and/or sponsoring program to ensure that learners would be able to work independently to successfully achieve the learning objectives. As a result, asynchronous learning was primarily if not exclusively instructor-led. With advances in technologies over the last century, learners have gained increasing access to tools and resources that support multi-channel discourse (text, voice, visuals). Steady improvements in tools and tool integration enable better coordination, communication, and information sharing between peers, and at the same time enable learners to pool their collective knowledge and experiences in ways that decrease learners' reliance on the input and guidance of instructors.
As a result, asynchronous learning incorporates not only methods that support self-directed learning but also methods that support collaborative learning. Collaborative learning is more a student-centered than an instructor-led approach to learning, given that the ▶ discourse between learners reflects the learners' interests, motivations, prior knowledge and experiences, and ways of knowing, forming much of the foundation on which knowledge and meaning are constructed. Some of the common goals of discourse (when conducted within an asynchronous learning network, or ALN) are to promote and increase the level of critical thinking, meaningful problem solving, and knowledge construction. With the development of the World Wide Web, ALN discourse has become a key component of most online courses. As a result, online discourse has been the focus of much research among educational and instructional psychologists. The ultimate goal of this research is to examine and better understand how different tools can be used and further refined to facilitate discourse in ways that increase learning and performance. However, researchers have faced difficult challenges in attempting to determine which tools help produce better discourse, because there are too many independent variables that must be controlled, such as group size, group composition, and the nature of tasks. Furthermore, these variables interact in such ways that it is difficult if not impossible to establish cause and effect between choice of tools, quality of discourse, and learning outcomes. To address these challenges, researchers are examining how specific attributes of a tool change and mediate the interactions exhibited in online discourse (Dillenbourg et al. 1996).
In order to examine how tool attributes mediate learner interactions, new conceptual frameworks, methods, and tools have been developed to analyze and/or model learner interactions and the learning processes exhibited in them. At the heart of this research is the issue of what to examine and code from the discourse (e.g., cognitive, meta-cognitive, or social behaviors; quantitative vs. qualitative; individual vs. group; message vs. sentence units; data from self-reports vs. discussion transcripts), and how to analyze the data (e.g., frequency counts, response probabilities, Markov chains). A myriad of models and approaches have been developed and used to elucidate, make more explicit, and operationally measure the form, function, and/or the dynamic and interactive nature of ALN discourse.

Important Scientific Research and Open Questions

Despite the complexity and scope of the research in this field, three critical overarching questions are identified and presented below, questions that reflect current challenges and suggest directions for future research. Included are brief descriptions of studies that illustrate how some of the latest tools and methods have been used to address these questions and that can be used to make further advancements in the field.

What discourse models or typologies are most useful for identifying the interactions that produce higher gains in learning? Although an abundance of studies have developed and/or implemented various models to code and analyze online discourse (brief descriptions of some existing models are presented in Marra et al. [2004]), the focus of the analysis needs to be centered foremost on the cognitive operations exhibited within each dialog move: the cognitive operations that learners must perform to complete the learning task and achieve the desired learning outcome.
Other dialog moves associated with social and meta-cognitive behaviors (or any other dimensions of discourse) should be analyzed in terms of how they influence the sequence of cognitive actions exhibited by learners as they engage in discourse. For example, Garrison et al.'s (2010) structural equation analysis produced a model that revealed the extent to which learners' social interactions and interactions with instructors impacted the cognitive interactions performed by learners. However, this study, like most studies that examine online discourse, did not determine how different types of cognitive interactions affect the learning outcome.

[Fig. 1: Transitional state diagrams illustrating the response patterns produced from messages with versus without conversational language. ARG = argument, BUT = challenge, EVID = supporting evidence, EXPL = explanation; "c" denotes a message presented in a conversational style; "+" denotes transitional probabilities that were significantly higher than expected (based on z-scores at p < 0.01).]

In order to do this, particular methods must be used to precisely identify and convey the similarities and differences in the interaction patterns produced by high- versus low-performing learners.

Which methods and tools can identify, convey, and model interaction patterns, and identify which interactions produce higher gains in learning? Given the complexity and dynamic nature of discourse, dialog move sequences do not always unfold in orderly and predictable ways.
Soller (2004) believed that this is one reason why the simple frequencies of each dialog move performed by learners did not distinguish learners who scored high versus low on a posttest measuring knowledge acquisition. As a result, Soller adopted a process-oriented approach that examined how interactions unfold over time, producing transitional state diagrams (often referred to as Markov chains) to convey how likely it was (i.e., the probability) that one dialog move was followed by another (e.g., inform, acknowledge, request information, discuss with doubt, agree). These interaction data, combined with posttest scores, were analyzed using multidimensional scaling to reveal three to four interactions (spanning one to three conversational turns) that distinguished the groups that collectively scored high versus low on the posttest. Using a similar approach to examine how social interactions influence cognitive interactions, Jeong (2006) used transitional state diagrams (Fig. 1) to determine how the use of conversational language (e.g., making references to participants by name, saying thank you, and using greetings and emoticons) affected the probabilities of certain responses elicited by arguments, challenges, explanations, and the presentation of supporting evidence. The findings reveal that the (argument → challenge → explanation) interaction was more likely to emerge from students' interactions when students used conversational language while presenting arguments, challenges, and explanations.

How do communication tools change and mediate interactions in ways that produce higher gains in learning? Olson et al. (1992) conducted one study that examined both the relationship between tool and group interactions, and the relationship between group interactions and learning outcome.
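The transitional-probability approach described above (count how often one dialog move follows another, then test each transitional probability against the base rate of the response move) can be sketched as follows. The code and the toy move sequence are illustrative only, not the published analyses of Soller (2004) or Jeong (2006), and the z-score is a simplification of the adjusted residuals typically used in sequential analysis:

```python
# Hypothetical sketch of a transitional-probability analysis of coded
# dialog moves: tabulate move->response transitions and z-test each
# transitional probability against the response move's overall base rate.
import math
from collections import Counter

def transition_analysis(moves):
    pairs = list(zip(moves, moves[1:]))        # adjacent move->response pairs
    n_from = Counter(m for m, _ in pairs)      # how often each move gets a response
    n_pair = Counter(pairs)                    # observed transition counts
    base = Counter(m for _, m in pairs)        # base rates of response moves
    total = len(pairs)
    results = {}
    for (a, b), obs in n_pair.items():
        p_obs = obs / n_from[a]                # transitional probability P(b | a)
        p_exp = base[b] / total                # expected rate of b overall
        # Simple one-sample z-score against the base rate.
        se = math.sqrt(p_exp * (1 - p_exp) / n_from[a])
        results[(a, b)] = (round(p_obs, 2), round((p_obs - p_exp) / se, 2))
    return results

# Toy sequence coded with Jeong-style labels (ARG, BUT, EXPL, EVID).
seq = ["ARG", "BUT", "EXPL", "ARG", "BUT", "EXPL", "ARG", "EVID"]
stats = transition_analysis(seq)
# stats[("ARG", "BUT")] gives (P(challenge | argument), z-score)
```

Diagrams like Fig. 1 are essentially a graph rendering of such a table, with the "+" marks attached to transitions whose z-scores exceed the chosen significance threshold.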
Their study compared the effects of using a shared document editor, ShrEdit, against the use of a whiteboard with paper and pencil on group interaction patterns and performance on a group paper. Their findings revealed that students using ShrEdit produced significantly higher quality papers, even though the interaction patterns produced in the two groups (visually conveyed in transitional state diagrams) appeared to be very similar. However, Olson did not perform any statistical tests on the transitional probabilities (e.g., z-score tests) to determine which of the interaction patterns occurred at rates that were significantly higher or lower than expected. Nevertheless, the study did find significant group differences in the amount of time spent performing specific actions (discussing issues, actions, and alternatives) and some significant differences in the frequency of actions performed by the groups across conditions. Future efforts to integrate the methods and tools used in the studies described above will provide the basis for conducting more complete investigations into the relationship between technology, discourse, and learning. At the same time, future investigations will need to determine how differences in social and cultural contexts affect how well discourse models can be used to accurately explain and predict learning outcomes. Together, these efforts will produce the empirical research needed to arrive at a better understanding of communication technologies and of how to refine them to promote the kind of discourse that can optimize learning processes and maximize learning outcomes.

Cross-References
▶ Collaboration Scripts
▶ Collaborative Learning
▶ Collaborative Learning Strategies
▶ Collaborative Learning Supported by Digital Media
▶ Computer-Supported Collaborative Learning
▶ Discourse
▶ Discourse and the Production of Knowledge
▶ Distance Learning
▶ Learning with Collaborative Mobile Technologies
▶ Online Collaborative Learning
▶ Rapid Collaborative Knowledge Building

References
Dillenbourg, P., Baker, M., Blaye, A., & O'Malley, C. (1996). The evolution of research on collaborative learning. In P. Reimann & H. Spada (Eds.), Learning in humans and machines: Towards an interdisciplinary learning science (pp. 189–211). Oxford: Elsevier.
Garrison, D. R., Cleveland-Innes, M., & Fung, T. S. (2010). Exploring causal relationships among teaching, cognitive and social presence: Student perceptions of the community of inquiry framework. Internet and Higher Education, 13, 31–36.
Jeong, A. (2006). The effects of conversational styles of communication on group interaction patterns and argumentation in online discussions. Instructional Science, 34(5), 367–397.
Marra, R. M., Moore, J., & Klimczak, A. (2004). Content analysis of online discussion forums: A comparative analysis of protocols. Educational Technology Research and Development, 52(2), 23–40.
Olson, G., Olson, J., Carter, M., & Storrosten, M. (1992). Small group design meetings: An analysis of collaboration. Human-Computer Interaction, 7(4), 347–374.
Soller, A. (2004). Computational modeling and analysis of knowledge sharing in collaborative distance learning. The Journal of Personalization Research, 14(4), 351–381.

Discourse Processing
▶ Discourse and the Production of Knowledge
▶ Language/Discourse Comprehension and Understanding

Discovery Learning

HEINZ NEBER
University of Munich (LMU), Munich, Germany

Synonyms
Example-based learning; Guided discovery learning; Inductive teaching; Inquiry learning; Learning by design; Learning by experimentation; Socratic questioning

Definition
Discovery Learning denotes a general instructional approach that represents the first broad development of constructivist learning for school-based learning environments.
Jerome Bruner (1961) derived discovery learning from contemporary studies in cognitive psychology and stimulated the development of more specific instructional methods. The most important defining characteristic of discovery learning is that learners have to generate units and structures of abstract knowledge, such as concepts and rules, through their own inductive reasoning about non-abstracted learning materials (Holland et al. 1986). Only such materials are provided by the learning environment. The learning materials may consist of examples of general concepts; cases of general approaches and procedures (e.g., a teaching method or a management style); ill-defined questions and various situated problems (e.g., how to motivate students in a passive classroom); or phenomena that have to be causally explained by the learners (e.g., why a liquid hardens). Another characteristic is the amount of guidance of the learners' required inductive reasoning processes. In discovery learning situations, the level of guidance offered may vary adaptively, depending on the difficulty of the learning material, the complexity of the intended conceptual and procedural knowledge, and the cognitive and motivational prerequisites of the learners. For this reason, the level of guidance or structuredness of the learning environment is not fixed, but represents a variable, nondefining characteristic of discovery learning. This point requires emphasis because in some recent discussions discovery learning has been confused with unguided instruction, which must be considered a serious misconception (Hmelo-Silver et al. 2007). In fact, in all actually implemented versions of discovery learning, the reasoning processes of the learners are considerably guided or scaffolded. Learning by examples, probably the earliest method of discovery learning, exemplifies a high level of guidance.
This method of discovery learning represents a rather direct instructional application of Bruner's procedure for investigating concept attainment by children. Concept attainment requires the induction of defining and non-defining characteristics or attributes from examples or instances of the concept or category (e.g., concepts like pet, metacognition, or cooperative learning). In the corresponding instructional method, carefully selected examples and non-examples are presented in planned and prescribed sequences (e.g., always beginning with a positive example). The learners analyze the features of each presented example, compare examples, decide on the definitional status of each feature, and receive feedback on the correctness of the decision. This cycle of searching, deciding, and testing is repeated until the learners are able to define the concept in terms of its induced abstract characteristics. Guidance is provided by presenting series of well-designed instances, by prescribing and facilitating the steps of search and comparison (e.g., by providing prompts and rubrics), and by providing feedback on the correctness of the learners' assumptions and hypotheses. Learning by examples, as a basic form of discovery learning, belongs to the repertoire of well-established models of teaching derived from information-processing studies in cognitive psychology (Joyce and Weil 2008). Meanwhile, the spectrum of methods of discovery learning has been considerably expanded. The most widely used and investigated method is learning by experimentation (Neber 2010), also called scientific inquiry (Joyce and Weil 2008; de Jong et al. 2005). In experimenting, learners generate causal knowledge to explain instructionally provided phenomena. In contrast to learning by examples, they do not receive automatic feedback on the correctness of their assumptions or hypotheses about causal variables.
This method is challenging for learners because they additionally have to design situations (experiments) for testing their provisional explanations (hypotheses) on their own. In terms of Herbert Simon's Dual Space Search Discovery (DSSD) model, rules (causal knowledge) and examples or instances for testing the rules have to be generated in coordination by the learners, either in laboratory-based hands-on experimentation or in technology-based virtual learning environments such as microworlds (de Jong et al. 2005). Learning by Designing, or Design-Based Learning, represents an even more challenging discovery learning method. In contrast to the other two methods, learners are not given materials (e.g., examples, phenomena) that have to be retrospectively analyzed, defined, or causally explained. Instead, learners have to create a product (e.g., a machine, a program, a text, or a model) that fulfills prescribed functions or meets given criteria and constraints. To design products that fulfill the prespecified functions or criteria, domain-specific knowledge has to be searched for and generated by the learners. The methods briefly described here may be implemented in isolation. Alternatively, these methods, in particular learning by experimentation and learning by designing, constitute components of more complex instructional approaches such as Project-Based Learning or Problem-Based Learning (PBL). Both as stand-alone solutions and within integrated instructional approaches, methods of discovery learning can be implemented in learning environments augmented by educational technology and by collaboratively distributed knowledge-generation processes. Both computer-based technology and collaboration scripts may be used as tools and supports for the inductive reasoning processes required in all methods of discovery learning.
Theoretical Background
In general, discovery learning expands the range of cognitive processes for learners and promotes cognitively driven learner activities. Thus, implementing discovery learning methods may help attain higher levels of thinking, which represents an important general goal of education. The cognitive processes required in all methods of discovery learning may be conceived and investigated at different levels of decomposition. At a macrolevel, the sequence of cognitive or inductive reasoning processes is analyzed in terms of inquiry cycles, or analogously as investigation webs, learning cycles, or cyclical regulatory phases (Neber 2010). These cycles consist of sequences of macroprocesses such as accessing prior knowledge, questioning, hypothesizing, searching for information, deciding, explaining, and reviewing. This expanded range of processes required for generating knowledge is difficult, at least for novice learners. Many studies have revealed that in discovery-learning environments, learners often remain non-intentional, do not frame questions and hypotheses, and have deficits in coordinating evidence and explanations; for example, they do not provide arguments for their explanations in terms of available data or by referring to the generated domain-specific knowledge. In these cases, discovery learning remains ineffective. As a consequence, some of these macroprocesses of inquiry cycles have been further decomposed and investigated at microlevels, which reveal the procedural complexity and provide the basis for developing instruments and tools for procedural support and scaffolding. Current research focuses in particular on questioning, hypothesizing, arguing, and explaining (Neber and Anton 2008). Principles for guiding and supporting discovery processes at different levels have been theoretically derived from conceptions of learning and further developed through empirical studies.
A first principle, based on positive findings, is to present discovery tasks in stages of increasing complexity of the concepts, rules, or theories that the learners have to generate (well-established approaches are progressive inquiry and the concept of model progression in technology-based virtual learning environments; de Jong et al. 2005). A second principle consists of the stepwise, adaptive, and systematic fading or reduction of the structure provided for the macrophases of inquiry cycles (e.g., as proposed by the National Science Foundation for discovery learning by experimentation in all science subjects). An example of a fading procedure for learners' question framing is first prescribing a question to the learners, then offering alternative questions to select from, and finally providing only a script or prompt for formulating their own questions (Neber and Anton 2008). To realize these principles in guiding and structuring cycles of discovery processes, scaffolding techniques and support tools have been developed. The broad range of such techniques can be divided into three categories (Neber 2010): techniques for supporting knowledge-generation processes (e.g., prompts such as question stems, prespecified templates for hypotheses, planning grids for exploring the spaces of knowledge and examples, or self-explanation prompts and modeling tools), techniques for supporting regulative processes (e.g., metacognitive and reflection prompts), and techniques for supporting collaborative activities (e.g., constructive controversy and other cooperative scripts for distributing roles for knowledge-generation and regulative processes).
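The stepwise fading of question-framing support can be sketched as a simple lookup. This Python sketch is a hypothetical illustration: the stage labels follow the prescribe–select–prompt sequence described above, but the idea of keying the stage to completed inquiry cycles is an assumption, not part of the cited procedure.

```python
# Illustrative sketch of fading scaffolds for learners' question framing.
# Stages follow the sequence described above: prescribe -> select -> prompt.
FADING_STAGES = [
    "prescribe: the teacher provides the question to investigate",
    "select: the learner chooses among alternative given questions",
    "prompt: the learner formulates an own question from a script or prompt",
]

def support_for(completed_cycles):
    """Return the scaffold for the current stage; support is faded
    stepwise as more inquiry cycles are completed (an assumed criterion)."""
    stage = min(completed_cycles, len(FADING_STAGES) - 1)
    return FADING_STAGES[stage]

print(support_for(0))  # full prescription for a novice
print(support_for(5))  # minimal prompting for an experienced learner
```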
Computer-based learning environments offer special advantages for proximally adapting techniques and scaffolds to online information about the frequency and quality of learners' activities (e.g., SimQuest as an example of designing discovery-learning environments with integrated guidance and support facilities).
Important Scientific Research and Open Questions
Bruner (1961) argued that discovery learning is more effective than what is called didactic teaching or nonconstructivist receptive learning. Positive effects should be attained for memorization of knowledge, for solving transfer problems, for general learning or self-regulation strategies, and for intrinsic motivation. Bruner derived these assumed effects from laboratory-based studies in cognitive psychology. However, the range of variables and their interactions is much less constrained in learning environments such as classrooms. This may explain the many discrepant findings about such effects of discovery learning in school-based investigations. Compared to other instructional approaches (e.g., direct instruction), attaining positive cognitive and motivational effects in discovery learning may depend on further conditions (of the learners and the instructional environment) and seems to require additional resources (e.g., more time, more teacher preparation, and other costs). The ongoing discussion about guided versus pure discovery learning is a consequence of the discrepant findings and indicates the need for further studies on what and how to guide and scaffold in all forms of discovery learning, in particular when it is conducted under less structured conditions. A second open question is whether the assumed positive effects of discovery learning have really been investigated with adequate research designs and measurement instruments.
What seems to be neglected are long-term effects, effects on general competencies, including learners' self-system development, and effects on the attainment of non-inert knowledge, including its structure. A third category of open questions concerns the context or environment of discovery learning. Only a few studies have focused on collaborative discovery learning and on how to adequately integrate discovery learning with and into other instructional approaches. Even though a considerable spectrum of scaffolding and support tools has already been developed and tested, further investigation seems warranted, in particular into how such tools relate to and interact with inquiry cycles and determine cognitive processes at microlevels of decomposition. Finally, further studies on teacher development and teacher education for discovery learning may help in implementing this instructional approach and in attaining the expected effects.
Cross-References
▶ Concept Formation: Characteristics and Functions ▶ Constructivist Learning ▶ Creative Inquiry ▶ Generative Learning ▶ Humanistic Approaches to Learning ▶ Inductive Reasoning ▶ Learning Cycles ▶ Learning from Questions ▶ Problem-Based Learning
References
Bruner, J. S. (1961). The act of discovery. Harvard Educational Review, 31, 21–32.
De Jong, T., Beishuizen, J., Hulshof, C., Prins, F., van Rijn, H., van Someren, M., Veenman, M., & Wilhelm, P. (2005). Determinants of discovery learning in a complex simulation environment. In P. Gärdenfors & P. Johansson (Eds.), Cognition, education, and communication technology (pp. 257–284). Mahwah: Lawrence Erlbaum.
Hmelo-Silver, C. E., Duncan, R. G., & Chinn, C. A. (2007). Scaffolding and achievement in problem-based and inquiry learning: A response to Kirschner, Sweller, and Clark (2006). Educational Psychologist, 42, 99–107.
Holland, J. H., Holyoak, K. J., Nisbett, R. E., & Thagard, P. R. (1986). Induction: Processes of inference, learning, and discovery.
Cambridge: MIT Press.
Joyce, B., & Weil, M. (2008). Models of teaching (8th rev. ed.). Boston: Allyn & Bacon.
Neber, H. (2010). Entdeckendes Lernen (Discovery learning). In D. H. Rost (Ed.), Handwörterbuch Pädagogische Psychologie (Dictionary of educational psychology) (pp. 124–132). Weinheim: Beltz.
Neber, H., & Anton, M. A. (2008). Promoting pre-experimental activities in high-school chemistry: Focusing on the role of students' epistemic questions. International Journal of Science Education, 30, 1801–1821.
Internet Sources
General description and illustration of discovery learning, live from the classroom: Discovery Learning Center. Retrieved from http://www.youtube.com/watch?v=bcuyHnVRJcE (for use, observe the YouTube terms; retrieved from http://www.youtube.com/t/terms).
Example of discovery learning by experimentation (scientific discovery): SimQuest as an authoring system for simulation-based discovery-learning environments. Retrieved from http://www.simquest.nl/simquest/index.htm (developed by T. De Jong et al., University of Twente, NL).
Example of discovery learning by examples: Concept attainment model of teaching [PowerPoint presentation]. Retrieved from http://imet.csus.edu/imet3/drbonnie/portfolio/conceptattain/conattainppt.pdf (designed by B. Drumright according to Joyce & Weil, 2008).

Discovery Learning Model
▶ Guided Discovery Learning

Discrimination
▶ Big Five Personality and Prejudice

Discrimination Learning Model
JONAS ROSE1, ROBERT SCHMIDT2
1 The Picower Institute for Learning and Memory, Massachusetts Institute of Technology, Cambridge, MA, USA
2 Department of Psychology, University of Michigan, Ann Arbor, MI, USA
Definition
The term "discrimination learning" refers to the formation of associations between different stimuli and corresponding outcomes or behaviors. It enables animals to choose different responses for different stimuli.
Models of discrimination learning address (a) the algorithmic principles underlying discrimination learning and (b) the related neurophysiological processes that enable its implementation in the brain.
Theoretical Background
Learning to discriminate objects, stimuli, situations, etc., is a fundamental ability of all animals, including humans. Distinguishing good things from bad things is essential for survival and for any directed behavior. Scientific research on the governing principles of discrimination learning has a long tradition in neuroscience. Examples include forms of classical and operant conditioning (▶ differential conditioning), where arbitrary sensory stimuli (so-called conditioned stimuli) and unconditioned stimuli are presented to the animal. Unconditioned stimuli can be rewarding (e.g., food) or aversive (e.g., an electric shock). In classical conditioning, the conditioned stimulus is always followed by the unconditioned stimulus. In operant conditioning, the animal is required to perform some action (e.g., a lever press) in order to receive the unconditioned stimulus (i.e., the reward). In differential conditioning, only some stimuli are followed by unconditioned stimuli, while others are not. For example, when the animal sees a red square appearing on a monitor, it always receives a food reward a short time later; when a green circle appears instead, it does not. In such paradigms, the animal thus learns to discriminate stimuli based on their contingencies with the unconditioned stimuli, so that it will respond differently to a rewarded stimulus than to an unrewarded one. In this example, the animal might increase saliva production only when the red square appears on the monitor. Most behavioral studies require some form of discrimination learning, and many behavioral paradigms have been developed to study the behavioral and neural properties of discrimination learning.
The most fundamental paradigm used for discrimination learning is the go/nogo procedure, a form of operant differential conditioning. In this paradigm, the subject is rewarded for responding to some stimuli (go) and not rewarded for responding to other stimuli (nogo). In some variants of this paradigm, not responding to nogo stimuli is also rewarded. Another common paradigm employing discrimination learning is choice discrimination, in which two types of stimuli (one paired with reward and one not) are presented simultaneously. The animal learns to choose stimuli associated with reward over the unrewarded stimuli. Countless variations of these paradigms exist, including probabilistic reward delivery (e.g., one stimulus yields a reward in 20% of the trials, the other in 80%), switching task contingencies (the rewarded stimulus becomes the unrewarded one and vice versa), and differential reward delays or magnitudes (one stimulus yields a small reward immediately, the other yields a larger reward after some time). In discrimination learning, subjects learn about the contingencies of specific entities. On its own, however, discrimination learning would leave animals inflexible and unable to cope with the sensory complexity of their environment. In order to behave efficiently, animals must be able to apply their knowledge to novel stimuli. They do this using ▶ stimulus generalization (see also Kehoe 2008) or ▶ categorization (Cook and Smith 2006). Only generalization and categorization allow them to respond efficiently to unfamiliar stimuli and to behave adequately in a complex, ever-changing environment.
Important Scientific Research and Open Questions
In general, discrimination learning is the subject of three (overlapping) lines of scientific research. The first line of research tries to identify the algorithm that underlies discrimination learning.
Here, models of discrimination learning implement different learning algorithms and compare them to the behavior of the animal. Different learning algorithms employ, for example, different learning strategies and make different predictions about animal behavior. Discrimination learning can easily be implemented with reinforcement-learning algorithms from machine learning, such as the prominent temporal-difference algorithm (Sutton and Barto 1998). This algorithm uses a reward prediction error as a teaching signal to detect which stimuli or actions are consistently followed by rewards. Thereby, the algorithm is able to predict future rewards and to direct decision-making toward the most rewarding actions. Recently, Rose et al. (2009) found that standard reinforcement-learning algorithms can account for the effect of reward magnitude on learning. In reinforcement-learning algorithms, the magnitude of the reward affects how fast a task is learned. Therefore, if animals use a similar algorithm, their learning speed should likewise depend on reward magnitude. In fact, Rose et al. (2009) showed that animals learned a task faster when they received a larger amount of food as a reward than when they received a smaller amount. Thus, the predictions of standard reinforcement-learning algorithms matched the animal behavior. The second line of research on discrimination learning tries to identify the neurophysiological processes that implement some form of reinforcement learning. In this context, models of discrimination learning hypothesize about the role of different neural structures. For example, the dopaminergic system is a good candidate for discriminating rewarding from neutral stimuli (see Phillmore 2008). Dopamine is a modulatory neurotransmitter that plays an important role in synaptic plasticity.
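The reward-magnitude effect reported by Rose et al. (2009) can be illustrated with a minimal delta-rule value update of the kind used in temporal-difference and Rescorla–Wagner models. The learning rate, response criterion, and reward values below are illustrative assumptions, not figures from the study.

```python
# Minimal sketch: delta-rule value update V <- V + alpha * (r - V).
# With a fixed learning rate, a larger reward magnitude produces larger
# prediction errors early on, so the value estimate crosses a fixed
# response criterion in fewer trials (i.e., faster learning).

def trials_to_criterion(reward, alpha=0.1, criterion=0.5, max_trials=1000):
    """Number of rewarded trials until the stimulus value exceeds an
    (illustrative) response criterion."""
    value = 0.0
    for trial in range(1, max_trials + 1):
        prediction_error = reward - value   # teaching signal
        value += alpha * prediction_error
        if value > criterion:
            return trial
    return max_trials

small = trials_to_criterion(reward=1.0)   # small food reward
large = trials_to_criterion(reward=4.0)   # larger food reward
print(small, large)  # the larger reward reaches criterion in fewer trials
```

The same qualitative prediction — faster acquisition with larger rewards — is what Rose et al. (2009) observed behaviorally.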
A series of studies supports the notion that the activity of dopamine neurons resembles a reinforcement-learning teaching signal (e.g., see Schultz et al. 1997). Effectively, dopamine release might tag an arbitrary sensory stimulus as rewarding and thereby play a crucial role in learning to discriminate rewarding from nonrewarding stimuli. While the exact role of dopamine in reward-related learning is still under debate, a corresponding teaching signal for aversive learning has not yet been found. The third line of research aims to reveal how discrimination learning is related to the more complex process of categorization. Mainly two models have been discussed in the past, assuming that categorization may be based on exemplars or on a prototype. Exemplar-based models assume that categorization is achieved through an extended form of discrimination learning: by learning a set of stimuli, or exemplars, that, taken together, represent the entire category. New stimuli can then be categorized by comparing them to the set of exemplars. Prototype-based models, on the other hand, assume that during learning of a new category one abstract prototype is generated that represents all of the defining features of the category. Recent research suggests that neither model alone can fully account for learning to categorize and that both processes are involved in categorization (Cook and Smith 2006). While early stages of learning are dominated by prototype-based categorization, later in learning exemplars are added, which might replace the prototype.
Cross-References
▶ Adaptive Learning Systems ▶ Anticipatory Learning ▶ Association Learning ▶ Categorical Learning ▶ Computational Models of Human Learning ▶ Formal Learning Theory ▶ Neural Networks of Classical Conditioning ▶ Reinforcement Learning
References
Cook, R. G., & Smith, J. D. (2006). Stages of abstraction and exemplar memorization in pigeon category learning. Psychological Science, 17, 1059–1067.
Kehoe, E. J. (2008). Discrimination and generalization. In J. H. Byrne & R. Menzel (Eds.), Learning and memory: A comprehensive reference (Vol. 1, pp. 123–150). Oxford: Elsevier.
Phillmore, L. S. (2008). Discrimination: From behaviour to brain. Behavioural Processes, 77(2), 285–297.
Rose, J., Schmidt, R., Grabemann, M., & Güntürkün, O. (2009). Theory meets pigeons: The influence of reward-magnitude on discrimination-learning. Behavioural Brain Research, 198, 125–129.
Schultz, W., Dayan, P., & Montague, P. R. (1997). A neural substrate of prediction and reward. Science, 275, 1593–1599.
Sutton, R. S., & Barto, A. G. (1998). Reinforcement learning: An introduction. Cambridge, MA: MIT Press.

Discriminative Law of Effect
▶ Matching

Discussion Group
▶ Group Dynamics and Learning

Dishabituation
▶ Habituation and Sensitization

Disinhibition
▶ Impulsivity and Reversal Learning

Disinterest
▶ Boredom in Learning

Disorder of Written Expression
▶ Language-Based Learning Disabilities

Disengaged Learners
▶ Apathy in Learning

Disposition
General, relatively stable inclination to approach new learning tasks and situations in a particular way.
Cross-References
▶ Attitudes – Formation and Change ▶ Personality Effects on Learning

Disengagement
▶ Boredom in Learning

Disequilibrium
Disequilibrium is a situation where internal and/or external forces prevent system equilibrium from being reached or cause the system to fall out of balance. This can be a short-term by-product of a change in variable factors or a result of long-term structural imbalances.
Cross-References
▶ Cognitive Conflict and Learning

Disposition to Understand
Disposition to understand is a form of thinking disposition identified in university education that suggests a continuing, strong form of a deep approach to learning.
It brings together ability and the willingness to use it, together with a sensitivity to context that allows a person to recognize opportunities to develop and use their current understanding.

Dispositional Interest
Dispositional interest is a particular way of conceptualizing personal or individual interests as a relatively stable characteristic of a person or a general orientation to specific actions. Dispositional interests are considered to be part of a child's self-concept that influences participation and learning in activities where the child has opportunities to initiate interactions with the social and nonsocial environment.

Dispositions for Learning
JEANNE ELLIS ORMROD
School of Psychological Sciences (Emerita), University of Northern Colorado, Greeley, CO, USA
Synonyms
Habits of mind
Definition
A ▶ disposition is a general, relatively stable inclination to approach new learning tasks and situations in a particular way. Researchers have identified a variety of dispositions that have an impact on learning and performance, sometimes for the better and sometimes for the worse. Typically, dispositions reflect an intermingling of cognitive processes, motivational factors, and personality characteristics. To date, the disposition construct has been used primarily within the context of human (rather than nonhuman animal) learning.
Theoretical Background
Psychologists have long observed individual differences in how people approach and thereby benefit from new activities and learning opportunities. Some of these individual-difference variables are relatively stable over time and across diverse contexts. Some examples of these variables are described and discussed below:
● Stimulation seeking: Some people are more inclined than others to actively seek out new information and learning experiences.
This disposition may reflect individual differences in the need for arousal and in the optimal level of arousal at which people feel most comfortable. For example, any given level of stimulation might feel boring to some individuals, comforting to others, and overwhelming to still others.
● Need for cognition: Just as people vary in their need for stimulation in general, they also vary in the extent to which they actively seek out and engage in challenging cognitive tasks. For example, a person with a strong need for cognition might avidly read books on a broad range of topics, eagerly seek out brainteaser puzzles, or voluntarily engage in debates about controversial issues. This disposition, too, may reflect varying levels of the need for arousal.
● Conscientiousness: Some people consistently tackle learning tasks and activities in a deliberative, careful, thorough manner. For example, they are apt to plan ahead, and they can be relied on to get a job done. Although conscientiousness does, in general, lead to enhanced learning, in its extreme form – perfectionism – it can lead to debilitating anxiety levels and overly harsh self-evaluations. Conscientiousness is one of the "Big Five" orthogonal personality traits that some personality theorists have described.
● Learned industriousness: People differ in the degree to which they persevere in attempting to master a new topic or skill even when they need to exert considerable effort or face obstacles to their success. People with a low level of learned industriousness give up quickly in the face of failure; those with a high level "try, try again." Quite possibly people acquire learned industriousness when past experiences have taught them that successes often come only with effort and effective strategies.
In contrast, people who have previously been accustomed to easy, effortless successes (e.g., as might be true for especially gifted individuals) may abandon an endeavor at the first stumbling block and may possibly develop learned helplessness about the domain involved.
● Open-mindedness: People also differ in the extent to which they can flexibly consider alternative, potentially contradictory perspectives and evidence. Open-minded individuals typically suspend judgment about topics until they have enough information to make an informed decision; in some cases they delay choosing among conflicting ideas indefinitely. In contrast to open-mindedness, people with a strong need for closure tend to jump very quickly to conclusions about which facts, perspectives, theories, and so on are accurate or reasonable and which are not, often with little or no consideration of supporting or contradictory evidence.
● Critical thinking: Critical thinking involves not only a set of fairly sophisticated cognitive skills but also a disposition to use those skills. People who have a disposition for critical thinking consistently evaluate new information and arguments in terms of their accuracy, logic, and credibility. In contrast, noncritical thinkers tend to accept new ideas at face value, with little or no reflection and analysis.
● Consensus seeking: People with a high need for consensus seeking strive to synthesize diverse perspectives into a more complex understanding of a topic or phenomenon than any single perspective could provide. People without this disposition are more likely to assume that diverse perspectives must necessarily be mutually exclusive and thus are apt to determine that only one perspective can have validity.
Important Scientific Research and Open Questions
Studies of the nature and effects of dispositions have been relatively small in number, but it has become increasingly clear that these cognition–motivation–personality blends can have a significant impact on learning. In fact, dispositions sometimes overrule intelligence in their impact on learning and achievement (Perkins and Ritchhart 2004). For example, people who show a consistent tendency to seek out physical and cognitive stimulation tend to learn more from what they read and to be higher achievers in instructional settings (Cacioppo et al. 1996; Raine et al. 2002). And people who are predisposed to critically evaluate new information are more likely to revise existing beliefs to be in line with scientifically validated theories (Southerland and Sinatra 2003). The origins of various dispositions remain largely unexplored. Possibly inherited temperamental differences play a role in such dispositions as stimulation seeking and need for cognition (Raine et al. 2002). It appears, too, that people's beliefs about the nature of particular academic disciplines and about knowledge in general – that is, people's epistemological beliefs (also known as epistemic beliefs) – predispose them to think critically and analytically about diverse explanations, on the one hand, or to "zero in" very quickly on just a single explanation, on the other (DeBacker and Crowson 2009). Culture, too, seems to play a role; for example, whereas some cultural groups encourage critical thinking, others emphasize that wisdom is best obtained from authority figures or religious teachings and that certain perspectives must never be questioned (Kuhn and Park 2005).
Cross-References
▶ Conditions of Learning ▶ Epistemological Beliefs and Learning ▶ Openness to Experience
References
Cacioppo, J. T., Petty, R. E., Feinstein, J. A., & Jarvis, W. B. G. (1996).
Dispositional differences in cognitive motivation: The life and times of individuals varying in need for cognition. Psychological Bulletin, 119, 197–253.
DeBacker, T. K., & Crowson, H. M. (2009). The influence of need for closure on learning and teaching. Educational Psychology Review, 21, 303–323.
Kuhn, D., & Park, S.-H. (2005). Epistemological understanding and the development of intellectual values. International Journal of Educational Research, 43, 111–124.
Perkins, D., & Ritchhart, R. (2004). When is good thinking? In D. Y. Dai & R. J. Sternberg (Eds.), Motivation, emotion, and cognition: Integrative perspectives on intellectual functioning and development (pp. 351–384). Mahwah: Erlbaum.
Raine, A., Reynolds, C., & Venables, P. H. (2002). Stimulation seeking and intelligence: A prospective longitudinal study. Journal of Personality and Social Psychology, 82, 663–674.
Southerland, S. A., & Sinatra, G. M. (2003). Learning about biological evolution: A special case of intentional conceptual change. In G. M. Sinatra & P. R. Pintrich (Eds.), Intentional conceptual change (pp. 317–345). Mahwah: Erlbaum.

Dissatisfaction
▶ Boredom in Learning

Dissociation
Impairment in one aspect of a normally integrated cognitive function, with preservation of other aspects of the function, as a result of a lesion in the brain. A study of lesion-induced dissociations (or fractionations) is helpful in suggesting whether the lesioned brain structure is a necessary substrate of the function in question.

Dissonance Reduction Theory
Cognitive dissonance is a communication theory developed by Leon Festinger (1957) which contrasts with behaviorist conditioning or reinforcement theories. The dissonance reduction theory considers individuals as purposeful decision makers who strive for harmony in their beliefs. If presented with decisions or information that evoke dissonance, individuals apply dissonance reduction strategies in order to regain a state of equilibrium, especially if the dissonance affects their self-esteem.
References
Festinger, L. (1957). A theory of cognitive dissonance. Stanford: Stanford University Press.

Distance Education
▶ Affective and Cognitive Learning in the Online Classroom ▶ Distance Learning ▶ Online Learning

Distance Learning
STEVE WHEELER
University of Plymouth, Plymouth, Devon, UK
Synonyms
Blended learning; Correspondence courses; Distance education; Distributed learning; Remote education
Definition
Distance learning is an outcome of distance education. Where learners and teachers are separated by geographical and/or temporal distance, a form of mediated learning can be achieved using a combination of technologies. Distance learning can be differentiated from e-learning, which may be undertaken at a distance or contiguously, or as a combination of both (blended learning). Moore and Kearsley define distance education as "planned learning that normally occurs in a different place from teaching and as a result requires special techniques of course design, special instructional techniques, special methods of communication by electronic and other technology, as well as special organizational and administrative arrangements" (Moore and Kearsley 1996, p. 2). Some of the earliest forms of distance education, known as correspondence courses, were achieved using a combination of printed material and the postal service. An early example of this was the Pitman shorthand course set up in Victorian Britain, but some argue that correspondence courses can be traced back as far as the instructional writings (epistles) of St Paul to the early Christian church. In current practice, distance learning may be supported through a combination of technologies that include broadcast TV and radio, printed materials, Web-based instruction, videoconferencing, and mobile communication. A number of new tools, notably social web services including wikis, blogs, podcasts, and social networking, are also being investigated (Wheeler et al. 2008; Selwyn and Grant 2009). In the most generally accepted approaches to distance education, the emphasis has shifted away from the institution and onto the learner. Learning opportunities are provided at convenient times and places for the learner, rather than for the institution. Keegan (1990) has identified four key elements of distance learning:
● The teacher is separated from the student by distance.
● The influence of an educational organization and the provision of student evaluation.
● The use of educational media to carry the course content.
● Provision of two-way communication between teacher and student and between student and student.
Theoretical Background
Some see distance learning as an extension of traditional provision, and some may even view it as a means to widen traditional student catchment areas. Often distance learning methods are employed as a way to improve access for previously disenfranchised individuals. However, such views may be an oversimplification. In the past two decades, distance education has become an important discipline in its own right and a significant educational movement, as the vast growth in the literature confirms (Bernath et al. 2009). Indeed, there are many specialist peer-reviewed academic publications in circulation, and the growth of interest in distance learning can be evidenced in the high attendance at international distance education and educational technology conferences around the world in recent years. Hoffman et al. (2000) argue that regardless of the time and place constraints imposed upon distance learners, the goal of distance education is still to bring about positive changes in student behavior, just as with traditional on-campus education. Distance education can and does promote positive changes.
Instruction becomes more learner centered, with students enjoying access to learning events and resources that can be adjusted to meet their individual learning needs and styles (Simonson et al. 2000). Distance education, when managed effectively, can provide remote learners with a quality of experience equivalent to that of learners studying in more traditional settings.

Important Scientific Research and Open Questions
Distance education is often characterized by the extent to which students can choose their mode, place, and pace of study. This kind of flexibility is almost unheard of in many traditional forms of education, and questions arise about how much autonomy students should be given by the institution, and to what extent student agency can be reconciled with the requirements and regulations of the institution. Distance learning can be conceptualized as an activity that is location and time independent, encouraging students to assume responsibility for their own learning. In many cases, the teacher is more remote and less accessible than the learning materials, acting as another learning resource rather than as a central component in the learning process. The extent to which the available learning materials are used depends upon how highly students value their usefulness. The learning materials may often be accessed and read in whatever order and depth the student chooses, particularly if they are presented in web-based formats. These attributes run counter to the traditional model, in which course material selected by the teacher is transmitted to the student sequentially, leaving little room for student agency (Valjataga and Laanpere 2010). The tension between the two models requires investigation. Those institutions that allow students complete autonomy within the learning process are few and far between.
Those that do offer complete freedom generally remove rules such as completion times and set marking periods, preferring to steer clear of any intervention that would constrain freedom in learning. Opposed to this approach are those institutions that insist upon a semblance of structure within the freedom that distance learning affords. Such course providers generally consider the imposition of some structure a means of preventing academic failure. It is well documented in the literature that attrition rates can be high amongst distance learning populations. Some have speculated that this is due to reduced social contact with tutors and peers, resulting in loss of motivation. Another theory is that students who study away from an institution may be disadvantaged through lack of access to the vital learning resources enjoyed by on-campus learners. Other issues under investigation center upon student experiences, where the transactional distance, or "instructional gap," may play a part in creating a psychological distancing between students and their teachers, leading to misunderstandings and perceptions of social isolation (Moore and Kearsley 1996). There is also the question of whether distance learning is more or less effective than traditional, campus-based learning. Several sources indicate that there is no significant difference between the two modes of learning (Merisotas and Phipps 1999). However, the recent emergence of Web 2.0-based tools and services (social networking, wikis, blogs, and podcasts) has prompted increased research activity into the use of social media and their effectiveness in distance education.
Furthermore, the growing number of user-generated content repositories such as Wikipedia has raised concerns over the accuracy and veracity of information found on the web, and drawn new attention to the blurring of boundaries between what it means to be a knowledgeable contributor and a credentialed expert. A number of tensions therefore exist between an idealistic approach to distance education, in which students are given complete control over their own study schedules and can select their own personal media and technologies, and a practical approach, in which learners are provided with a certain amount of structure designed to maintain the impetus of their studies and to motivate them toward completing their assignments within a set time. It is probable that many distance educators will opt for the middle ground and borrow from the best of both philosophies to provide the best balance of freedom and control.

Cross-References
▶ Distributed Technologies
▶ Tele-Learning

Distinction
▶ Simultaneous Discrimination Learning in Animals

Distributed Cognition
▶ Shared Cognition
▶ Social Construction of Learning

Distributed Intelligence/Cognition
A shared task between the learner and another person, tool, artifact, or resource.

References
Bernath, U., Szucs, A., Tait, A., & Vidal, M. (Eds.). (2009). Distance and e-learning in transition. London: Wiley.
Hoffman, S. Q., Martin, M. S., & Jackson, J. E. (2000). Using the theory of equivalency to bring onsite and online learning together. Quarterly Review of Distance Education, 1, 327–335.
Keegan, D. (1990). Foundations of distance education. London: Routledge.
Merisotas, J. P., & Phipps, R. A. (1999). What's the difference? Outcomes of distance v. traditional classroom-based learning. Change, 31, 13–17.
Moore, M. G., & Kearsley, G. (1996). Distance education: A systems view. Belmont: Wadsworth.
Selwyn, N., & Grant, L. (2009). Researching the realities of social software: An introduction.
Learning, Media and Technology, 34, 79–86.
Simonson, M., Smaldino, S., Albright, M., & Zvacek, S. (2000). Teaching and learning at a distance: Foundations of distance education. Upper Saddle River: Merrill.
Valjataga, T., & Laanpere, M. (2010). Learner control and personal learning environment: A challenge for instructional design. Interactive Learning Environments, 18, 277–292.
Wheeler, S., Yeomans, P., & Wheeler, D. (2008). The good, the bad and the wiki: Evaluating student generated content for collaborative learning. British Journal of Educational Technology, 39, 987–995.

Distant Associations
▶ Learning Nonadjacent Dependencies

Distributed Learning
JAMIE KIRKLEY
Information in Place Inc., Indiana University Research Park, Bloomington, Indiana, USA

Distributed learning, or spaced learning, is usually defined in opposition to massed learning. Distributed learning means that the material to be learned is distributed over a long period of time, so that the learner must integrate the various separated parts of the material into a unique entity. Massed learning, in contrast, means that the material to be learned is provided within a short period of time. Distributed learning is grounded in the assumption that long-term memory improves when there is more time between acquisition and retrieval of information. Accordingly, it has been argued (Litman and Davachi 2008) that, because of the spacing effect, it would be better for exams to be taken after a break than before, assuming there was a review before the exams. Closely related to distributed or spaced learning is the idea of technology-based distributed learning environments, which integrate the interactive capabilities of networking, computing, and multimedia with learner-centered collaboration and discovery learning.
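The contrast between spaced and massed study described above can be made concrete with a toy simulation. The sketch below is purely illustrative (the decay model, the `gain` parameter, and the time units are assumptions for this example, not an established memory model): each review strengthens a memory "stability" in proportion to how much had been forgotten, so spaced reviews, which allow more forgetting between sessions, build a more durable memory than the same number of massed reviews.

```python
import math

def final_retention(review_gaps, test_gap, stability=1.0, gain=4.0):
    """Toy consolidation model (illustrative assumption only): recall after a
    gap dt is exp(-dt / stability); each review increases stability in
    proportion to how much had been forgotten, then the clock restarts."""
    for dt in review_gaps:
        recall = math.exp(-dt / stability)
        stability *= 1.0 + gain * (1.0 - recall)
    return math.exp(-test_gap / stability)

massed = final_retention([0.1, 0.1, 0.1], test_gap=30)  # reviews crammed together
spaced = final_retention([5.0, 5.0, 5.0], test_gap=30)  # same count, spread out
```

Under these assumed parameters, the spaced schedule ends with far higher predicted retention at the same final test delay, mirroring the spacing effect the entry describes.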
It can be argued that distributed learning environments require not only spaced learning but also information aggregation, aiming to collect relevant information from multiple sources (Dutta et al. 2005).

Cross-References
▶ Distance Learning

References
Dutta, P. S., Jennings, N. R., & Moreau, L. (2005). Cooperative information sharing to improve distributed learning in multiagent systems. Journal of Artificial Intelligence Research, 24, 407–463.
Litman, L., & Davachi, L. (2008). Distributed learning enhances relational memory consolidation. Learning and Memory, 15(9), 711–716.

Distributed Learning Environments
▶ Interactive Learning Environments

Distributed Learning Model
▶ Advanced Distributed Learning

Distributed Practice
▶ Trial-Spacing Effect in Associative Learning

Distributed Scaffolding
Multiple forms of support in the form of tools, artifacts, resources, and people within the classroom.

Distributed Technologies
JOSEPH PSOTKA
Basic Research Unit, US Army Research Institute for the Behavioral and Social Sciences, Arlington, VA, USA

Synonyms
Adaptive intelligent web-based teaching and learning; Asynchronous learning; Blended learning; Distance learning

Definition
Distributed technologies take advantage of computational environments to bring students and teachers together, either synchronously or asynchronously, in ways that offer all the advantages of face-to-face collaboration along with the augmented capabilities of computer mediation, analysis, search of online resources, and faithful recording of all interactions. The Internet has encompassed older distributed technologies, such as video, print, movies, radio, and television, and integrated them thoroughly into one medium, so that Web 2.0 (or even 3.0) technologies now constitute the core of distributed learning, with blogging, videoconferencing, and e-mail all parts of distributed technologies for teaching and learning.
However, these technologies continue to transform at a rapid pace, with handheld, wireless devices creating the most profound opportunities and issues for the next few years.

Theoretical Background
Distributed technologies, particularly those on the Internet (Web 2.0 and 3.0), are among the hottest innovations in education and the source of new online industries, whether for full-fledged higher education universities or for tools and technologies to support classroom instruction. Sadly, many thousands of new web-based educational applications are little more than static web pages of hypertext that are widely giving computer-based learning a bad name. What is possible and what is actual are divided by a gulf of grand proportions. Prior to the easy distributed access provided by the Internet, computer-based education was dominated by single-user, standalone systems, where the computer had only to build a single user model, provide guidance and problem-solving assistance for a manageable set of errors and misconceptions, and guide the course of instruction through only one level and area of a curriculum, usually in one class or one schoolhouse. Now, with distributed technologies on the web, the possibilities and problems have multiplied manifold. The new critical research issues center on cooperative and collaborative learning, not of facts and carefully digested knowledge, but of creativity, investigation, connection, integration, and synthesis. Problem solving and cognitive flexibility have become the driving force of educational research using distributed technologies, but remain largely invisible in educational practice. As part of the leading edge of implementing these technologies in classrooms, learning management systems (LMS) have to integrate and manage students nationwide. Increasingly, LMS have to do more than simply tabulate grades and test results for teachers.
Instead, they must provide efficient communication among students, through blogs (web logs of written materials), wikis, and threaded discussion systems, as well as provide teaching and learning tools of increasing sophistication. Social networking aspects of instruction and learning have become paramount within LMS, with group projects, WebQuests, and the active online engagement that is the hallmark of constructivist learning activities. LMS continue to undergo technological improvements in order to support online collaboration and knowledge construction. Distributed technologies provide the opportunity to enrich the range and immediacy of resources for students in conjunction with interactive classroom lectures and activities. Naturally, this opportunity increases in scope with the increasing age and sophistication of students, from elementary through secondary to higher education. Distributed technology creates the unique and largely unexplored possibility of bringing the real world into the classroom and taking the classroom into the real world. The additional resources it brings range from interactivity based on multimedia and online semantic content to interactivity based on peers, other cultures, and world-class experts. Given the richness of these possibilities, it is little wonder that educational systems have explored only a small part of this vast space. One of the barriers to effective implementation is just this richness: the overload of information it offers, and the difficulty of making the right information, at the appropriate level of complexity and difficulty, available to students at the moment they need it. Peers can share their information in just this way, but their information may be too limited. Collaborative filtering is one common approach to making information available by a consensus of peers.
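As an illustration of this peer-consensus idea, the following sketch implements a minimal user-based collaborative filter. The rating data, names, and helper functions are invented for the example and are not drawn from any system cited here: each peer is weighted by the cosine similarity of the ratings they share with the target learner.

```python
import math

ratings = {  # hypothetical peer ratings of learning resources, on a 1-5 scale
    "ana":  {"wiki": 5, "podcast": 3, "webquest": 4},
    "ben":  {"wiki": 4, "podcast": 2, "webquest": 5, "blog": 4},
    "carl": {"wiki": 1, "podcast": 5, "blog": 2},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    nu = math.sqrt(sum(u[i] ** 2 for i in shared))
    nv = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (nu * nv)

def predict(user, item):
    """Similarity-weighted average of peers' ratings for an unseen item."""
    num = den = 0.0
    for peer, prefs in ratings.items():
        if peer == user or item not in prefs:
            continue
        w = cosine(ratings[user], prefs)
        num += w * prefs[item]
        den += w
    return num / den if den else None

score = predict("ana", "blog")
```

Because "ana" rates resources much like "ben" and unlike "carl", the prediction for the unseen "blog" item lands close to ben's rating, which is exactly the consensus-of-similar-peers behavior described above.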
As search and retrieval technologies become ever more sophisticated, moving beyond Boolean and keyword search toward broad natural language capabilities that cross international boundaries, or semantic search derived from the gist of entire text or video paragraphs or chapters, these resources will become more effectively integrated into distributed interactive learning environments. They will create new challenges for teachers and for the educational system and its leadership: how to make use of this knowledge outside the curriculum; how to generate challenging topics that foster collaboration among students; how to create novel assessments that accurately measure what students have learned; and how to measure growth in the accuracy of opinions and judgments, not just facts. One of the most widely adopted activities in K-12 classrooms using distributed technologies has been the WebQuest. Among the conclusions reached by diverse research studies of WebQuests, the most common findings are positive attitudes and perceptions among students, increases in their motivation, and improved collaboration skills. However, such studies have rarely found a significant impact on learning and achievement, since these critical attributes of motivation, attitudes, and collaboration are seldom measured as part of classroom skills and objectives. One of the most exciting areas of distributed technology development involves handheld, wireless, streaming augmented reality. Carrying cell phones or PDAs, students can learn in real time about the geographic areas and the specific locations, artifacts, and objects in their surroundings. Using techniques such as geospatial GPS systems, local infrared, radio frequency, or wireless signals, or short-range Bluetooth technology, students can engage in real situated learning, interact naturally with and manipulate physical objects, or enter shared spaces for collaborative transactions.
Creating these designed spaces for effective learning represents yet another challenge for educators. The widespread popularity of flash mobs and swarms of people engaged in fantasy skits suggests real possibilities for historical reenactment, informal science ecological environments, and distributed museums and zoos; but all this remains to be explored.

Important Scientific Research and Open Questions
The critical research issues are embedded throughout this entry, but the largest issue is how education will be transformed by these new technologies. Will schools and universities diversify to put education into homes and communities, with special social activities performed in group places? Will stratified, lock-step grades be replaced by individualized diplomas and certification of accomplishments on a domain-by-domain and level-by-level basis? Will education become more commercial and dominated by large corporations? Will it exploit these new social, distributed, and interactive games and simulations to transform education and make it more effective for all ranges of abilities? Will leadership and policy makers begin to understand the vast potential of distributed technologies, and encourage huge new investments in research that will be evaluated and moved rapidly into fundamental new structures and organizations for education?
Cross-References
▶ Advanced Distributed Learning
▶ Asynchronous Learning
▶ Asynchronous Learning Networks
▶ Classroom Management and Motivation
▶ Collaborative Knowledge Building
▶ Collaborative Learning
▶ Collaborative Learning and Critical Thinking
▶ Collaborative Learning Strategies
▶ Collaborative Learning Supported by Digital Media
▶ Collective Development and the Learning Paradox
▶ Collective Learning
▶ Community of Learners
▶ Computer-Supported Collaborative Learning
▶ Discourse in Asynchronous Learning Networks
▶ Distributed Learning
▶ Distributed Technologies
▶ Learning Management System
▶ Learning Object Evolutions in a Distributed Environment
▶ Learning with Collaborative Mobile Technologies
▶ Online Collaborative Learning
▶ Online Learning
▶ Online Learning with Monte Carlo Methods
▶ Ontology of Learning Objects Repository for Knowledge Sharing
▶ Open Learning
▶ Open Learning Environments
▶ Peer Learning and Assessment
▶ Rapid Collaborative Knowledge Improvement
▶ Zone of Proximal Development

References
Brusilovsky, P. (1999). Adaptive and intelligent technologies for web-based education. In C. Rollinger & C. Peylo (Eds.), Künstliche Intelligenz (4), Special Issue on Intelligent Systems and Teleteaching (pp. 19–25). http://www2.sis.pitt.edu/~peterb/papers/KI-review.html. Accessed 28 April 2011.
Fletcher, J. D. (2009). Education and training technology in the military. Science, 353(2), 72–79.
Herlocker, J. L., Konstan, J. A., & Riedl, J. (2000). Explaining collaborative filtering recommendations. Proceedings of the 2000 ACM conference on computer supported cooperative work (pp. 241–250). Philadelphia. doi:10.1145/358916.358995
Landauer, T. K., & Psotka, J. (2000). Simulating text understanding for educational applications with latent semantic analysis: Introduction to LSA. Interactive Learning Environments, 8, 73–76.
Mayer, R. E., Dow, G. T., & Mayer, S. (2003).
Multimedia learning in an interactive self-explaining environment: What works in the design of agent-based microworlds? Journal of Educational Psychology, 95(4), 806–813.
Reily, K., Ludford Finnerty, P., & Terveen, L. (2009). Two peers are better than one: Aggregating peer reviews for computing assignments is surprisingly accurate. Proceedings of the ACM 2009 international conference on supporting group work (pp. 10–13). Sanibel Island. doi:10.1145/1531674.1531692

Divergent Probabilistic Judgments Under Bayesian Learning with Nonadditive Beliefs
ALEXANDER ZIMPER
School of Economic and Business Sciences, University of the Witwatersrand, Johannesburg, Gauteng, South Africa

Synonyms
Attitude polarization; Irrational belief persistence; Myside bias

Definitions
– A nonadditive probability measure \(\nu\) defined on the measurable space \((\Omega, \mathcal{F})\) satisfies (1) normalization, i.e., \(\nu(\Omega) = 1\) and \(\nu(\emptyset) = 0\), as well as (2) monotonicity, i.e., \(A \subseteq B\) implies \(\nu(A) \le \nu(B)\) for all \(A, B \in \mathcal{F}\).
– The Choquet expected value of a bounded random variable \(Y : \Omega \to \mathbb{R}\) with respect to a nonadditive probability measure \(\nu\) is defined as the following Riemann integral:
\[
E[Y; \nu] = \int_{-\infty}^{0} \left( \nu(\{\omega \in \Omega \mid Y(\omega) \ge z\}) - 1 \right) dz + \int_{0}^{+\infty} \nu(\{\omega \in \Omega \mid Y(\omega) \ge z\}) \, dz.
\]

Theoretical Background
Standard Bayesian decision theory in the tradition of Ramsey, de Finetti, and Savage considers a decision maker whose uncertainty is comprehensively described by some additive probability space \((\Omega, \mathcal{F}, \mu)\). The concept of Bayesian learning refers to the specific situation where the state space \(\Omega\) is rich enough to include a data space, consisting of (possibly infinite) sequences of outcomes of an i.i.d. random process, as well as a parameter space that determines the distribution of this process.
New information in terms of new data then gives rise to probabilistic learning in the sense that the decision maker revises his prior estimate of the distribution parameter. The resulting posterior estimate is uniquely determined as the expected value of the random distribution parameter with respect to the probability measure \(\mu\) conditioned on the newly received information. As a consequence, models of Bayesian learning provide a decision-theoretically sound answer to the question of how an agent learns probabilistic judgments under the assumption that sample data are drawn from an i.i.d. process. Moreover, celebrated consistency results on Bayesian estimators (e.g., Doob 1949) ensure that these probabilistic judgments will almost certainly coincide with the true parameter values of standard i.i.d. processes if the learning process incorporates sufficiently many data observations. In spite of the elegance of standard Bayesian models of probabilistic learning, two critical remarks about the descriptive shortcomings of these models are in order. First, several studies in the psychological literature demonstrate that people's learning behavior may be prone to effects such as "myside bias" or "irrational belief persistence," which may give rise to "attitude polarization" (cf. Baron 2008). The learning behavior elicited in these experiments cannot be explained by standard models of Bayesian learning, according to which differences in agents' probabilistic judgments must decrease rather than increase whenever the agents receive identical information. Second, recent decision-theoretic developments suggest that additive probability measures describe subjective uncertainty in a rather unsatisfactory way, since they neglect ambiguity attitudes that are relevant to real-life probabilistic judgments. This entry introduces a closed-form model of Bayesian learning that addresses both descriptive shortcomings of standard models of Bayesian learning in a unified way.
Our formal approach is based on Choquet expected utility (CEU) theory, which considers nonadditive probability measures, i.e., capacities, in order to describe violations of Savage's (1954) sure-thing principle as elicited by paradoxes of the Ellsberg type (Schmeidler 1986). More specifically, the agent's estimate of the parameter value is given as the parameter's Choquet expected value with respect to a conditional neo-additive capacity in the sense of Chateauneuf et al. (2007), according to which an agent's nonadditive belief about the likelihood of an event is a weighted average of an ambiguous part and an additive part. The key to our model is the existence of several perceivable Bayesian update rules for nonadditive probability measures that may express different psychological attitudes toward the interpretation of new information. More precisely, we consider the so-called full Bayesian as well as the optimistic and the pessimistic update rules (Gilboa and Schmeidler 1993). As explained below, this "indeterminacy" of update rules for nonadditive probability measures is a direct consequence of the violation of Savage's sure-thing principle. Finally, using this Bayesian learning model with conditional neo-additive capacities, we analyze the behavior of the revised probabilistic judgments of two heterogeneous agents. Two main findings on the possibility of diverging judgments emerge from our formal model:
1. We may observe divergent probabilistic judgments for agents who have identical attitudes with respect to the interpretation of new information but different initial attitudes with respect to optimism, or pessimism, under ambiguity.
2. We may observe divergent probabilistic judgments in case the agents have identical initial attitudes with respect to optimism, or pessimism, under ambiguity but different attitudes with respect to the interpretation of new information.
Important Scientific Research and Open Questions

Bayesian Learning with Additive Probability Measures
We first present a specific closed-form model that will serve as our benchmark model of Bayesian learning with an additive probability measure. Suppose that an agent observes an arbitrary number of independent trials in which a specific outcome, say Heads (H), occurs with identical probability. Formally, we describe this situation by a probability space \((\Omega, \mathcal{F}, \mu)\), where \(p\) denotes the event in \(\mathcal{F}\) such that \(p \in [0, 1]\) is H's true probability, i.e.,
\[
p = \{\omega \in \Omega \mid \tilde{p}(\omega) = p\},
\]
where \(\tilde{p}\) denotes a random variable with range \([0, 1]\). We assume that the agent's prior over \(\tilde{p}\) is given as a Beta distribution with parameters \(a, b > 0\) so that, for all \(p \in [0, 1]\),
\[
\mu(p) = K_{a,b}\, p^{a-1} (1 - p)^{b-1},
\]
where \(K_{a,b}\) is a normalizing constant. While the agent will never observe any direct information about H's true probability, he receives in period \(n\) sample information \(I_n^k\), formally defined as the event in \(\mathcal{F}\) such that H has occurred \(k\) times in the first \(n\) trials, i.e.,
\[
I_n^k = \{\omega \in \Omega \mid I_n(\omega) = k\},
\]
whereby the random variable \(I_n\) counts the number of occurrences of the outcome H in the first \(n\) trials. By our i.i.d. assumption, \(I_n\) is, conditional on the parameter value \(p\), binomially distributed with probabilities
\[
\mu(I_n^k \mid p) = \binom{n}{k} p^k (1 - p)^{n-k} \quad \text{for } k \in \{0, \ldots, n\}.
\]
By Bayes' rule, we then obtain the following posterior probability that \(p\) is the true parameter value conditional on information \(I_n^k\):
\[
\mu(p \mid I_n^k) = \frac{\mu(p \cap I_n^k)}{\mu(I_n^k)} = \frac{\mu(I_n^k \mid p)\, \mu(p)}{\int_{[0,1]} \mu(I_n^k \mid p)\, \mu(p)\, dp} = K_{a+k,\, b+n-k}\, p^{a+k-1} (1 - p)^{b+n-k-1}.
\]
The agent's prior probabilistic judgment, i.e., his prior estimate of the true probability of outcome H, is defined as the expected value of \(\tilde{p}\) with respect to the prior Beta distribution, i.e., \(E[\tilde{p}; \mu] = \frac{a}{a+b}\).
Accordingly, the agent's revised probabilistic judgment in the light of information \(I_n^k\) is defined as the expected value of \(\tilde{p}\) with respect to the resulting posterior distribution, i.e., \(E[\tilde{p}; \mu(\cdot \mid I_n^k)]\). Since the agent's posterior \(\mu(\cdot \mid I_n^k)\) over \(\tilde{p}\) is itself a Beta distribution with parameters \(a + k\), \(b + n - k\), we have \(E[\tilde{p}; \mu(\cdot \mid I_n^k)] = \frac{a+k}{a+b+n}\) or, equivalently,
\[
E[\tilde{p}; \mu(\cdot \mid I_n^k)] = \frac{a+b}{a+b+n}\, E[\tilde{p}; \mu] + \frac{n}{a+b+n} \cdot \frac{k}{n}. \tag{1}
\]
Suppose now that there are two agents with different subjective probability measures \(\mu_1\) and \(\mu_2\), respectively. Since both agents' posterior estimates put increasing weight on the commonly observed sample mean \(\frac{k}{n}\), the following convergence result for the Bayesian learning model (1) follows immediately.

Proposition 1. Consider additive probability measures \(\mu_1\) and \(\mu_2\) such that \(E[\tilde{p}; \mu_1] > E[\tilde{p}; \mu_2]\). Then the difference in the agents' probabilistic judgments strictly decreases in the number of observations, i.e., for all \(n\),
\[
E[\tilde{p}; \mu_1(\cdot \mid I_{n+1}^k)] - E[\tilde{p}; \mu_2(\cdot \mid I_{n+1}^k)] < E[\tilde{p}; \mu_1(\cdot \mid I_n^k)] - E[\tilde{p}; \mu_2(\cdot \mid I_n^k)].
\]

Updating Nonadditive Probability Measures
As a generalization of an additive probability space, we now consider a nonadditive probability space \((\Omega, \mathcal{F}, \nu)\). Properties of nonadditive probabilities are used in the literature on Choquet expected utility (CEU) theory for formal definitions of, e.g., ambiguity and uncertainty attitudes, pessimism and optimism, as well as sensitivity to changes in likelihood. CEU theory has been developed in order to accommodate paradoxes of the Ellsberg type, which show that real-life decision makers violate Savage's sure-thing principle. The abandoning of the sure-thing principle implies that there exist several perceivable Bayesian update rules for nonadditive probability measures. To see this, define the Savage act \(f_B h : \Omega \to X\) such that
\[
f_B h(\omega) = \begin{cases} f(\omega) & \text{for } \omega \in B \\ h(\omega) & \text{for } \omega \in \neg B \end{cases}
\]
where \(B\) is some nonempty event.
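The additive benchmark of Eq. (1) and the shrinking gap of Proposition 1 are easy to check numerically. A minimal sketch (the prior parameters below are hypothetical, chosen only so that the two agents start from different prior means):

```python
# Posterior estimate of the Beta-binomial benchmark, Eq. (1):
# E[p | k successes in n trials] = (a + k) / (a + b + n) for a Beta(a, b) prior.
def posterior_estimate(a, b, n, k):
    return (a + k) / (a + b + n)

# Two agents with different priors observe the same sample mean k/n = 0.5.
gaps = []
for n in (10, 100, 1000):
    e1 = posterior_estimate(2, 1, n, n // 2)  # prior mean 2/3
    e2 = posterior_estimate(1, 2, n, n // 2)  # prior mean 1/3
    gaps.append(e1 - e2)
# Proposition 1: the difference between the two estimates shrinks as
# observations accumulate, since both shrink toward the shared sample mean.
```

The posterior estimate is exactly the prior-weighted average of Eq. (1): the prior mean weighted by \((a+b)/(a+b+n)\) plus the sample mean weighted by \(n/(a+b+n)\).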
That is, the act \(f_B h\) gives the same consequences as the act \(f\) in all states belonging to event \(B\), and the same consequences as the act \(h\) in all states outside of event \(B\). Recall that Savage's sure-thing principle states that, for all acts \(f, g, h, h'\) and all events \(B \in \mathcal{F}\),
\[
f_B h \succsim g_B h \text{ implies } f_B h' \succsim g_B h'. \tag{2}
\]
Let us now interpret event \(B\) as new information received by the agent. The sure-thing principle then implies a straightforward way of deriving ex post preferences \(\succsim_B\), conditional on the new information \(B\), from the agent's original preferences over Savage acts. Namely, we have
\[
f \succsim_B g \text{ if and only if } f_B h \succsim g_B h \text{ for any } h. \tag{3}
\]
For a subjective EU decision maker, (3) implies the familiar definition of a conditional additive probability measure, i.e., for all \(A, B \in \mathcal{F}\) such that \(\mu(B) > 0\),
\[
\mu(A \mid B) = \frac{\mu(A \cap B)}{\mu(B)}.
\]
In case the sure-thing principle does not hold, the specification of act \(h\) in (3) is no longer arbitrary. For CEU preferences, there therefore exist several possibilities of deriving ex post preferences from ex ante preferences. Let us first consider conditional CEU preferences satisfying, for all acts \(f, g\),
\[
f \succsim_B g \text{ if and only if } f_B h \succsim g_B h \tag{4}
\]
where \(h\) is the so-called conditional certainty equivalent of \(g\), i.e., given information \(B\) the agent is indifferent between the act \(g\) and the act \(h\) that gives in every state of \(B\) the same consequence. The corresponding Bayesian update rule for the nonadditive beliefs of a CEU decision maker is the so-called full Bayesian update rule, given by
\[
\nu^{FB}(A \mid B) = \frac{\nu(A \cap B)}{\nu(A \cap B) + 1 - \nu(A \cup \neg B)}, \tag{5}
\]
where \(\nu^{FB}(A \mid B)\) denotes the conditional capacity for event \(A \in \mathcal{F}\) given information \(B \in \mathcal{F}\). In addition to the full Bayesian update rule, we also consider the optimistic and the pessimistic update rules as introduced by Gilboa and Schmeidler (1993).
For the so-called optimistic update rule, \(h\) in (4) is the constant act that gives in every state the worst possible consequence, so that the impossible event \(\neg B\) becomes associated with the worst outcome possible. As the corresponding optimistic Bayesian update rule for conditional beliefs of CEU decision makers, we obtain
\[
\nu^{opt}(A \mid B) = \frac{\nu(A \cap B)}{\nu(B)}. \tag{6}
\]
For the pessimistic update rule, \(h\) is the constant act associating with the impossible event the best outcome possible. The corresponding pessimistic Bayesian update rule for CEU decision makers is
\[
\nu^{pess}(A \mid B) = \frac{\nu(A \cup \neg B) - \nu(\neg B)}{1 - \nu(\neg B)}. \tag{7}
\]

Bayesian Learning with Neo-additive Probabilities
Let us now formally link the updating of nonadditive probabilities to Bayesian learning behavior. Our approach thereby focuses on nonadditive probability measures that are defined as neo-additive capacities in the sense of Chateauneuf et al. (2007).

Definition (Neo-additive Capacities). Given the measurable space \((\Omega, \mathcal{F})\), the neo-additive capacity \(\nu\) is defined, for some \(\delta, \lambda \in [0, 1]\), by
\[
\nu(A) = \delta \lambda + (1 - \delta)\, \mu(A) \tag{8}
\]
for all \(A \in \mathcal{F}\) such that \(A \notin \{\emptyset, \Omega\}\). Neo-additive capacities can be interpreted as nonadditive beliefs that stand for deviations from additive beliefs such that the parameter \(\delta\), the degree of ambiguity, measures the lack of confidence the decision maker has in some subjective additive probability distribution \(\mu\). The second parameter \(\lambda\) is typically interpreted as the degree of optimism under ambiguity, whereby \(\lambda = 1\) and \(\lambda = 0\) correspond to extreme optimism and extreme pessimism, respectively.
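The three update rules can be made concrete on a small finite example. The sketch below (the state space, measure \(\mu\), and parameter values are illustrative) implements (5), (6), and (7) for a neo-additive capacity and checks numerically that the full-Bayesian update of a neo-additive capacity is again neo-additive, with ambiguity weight \(\delta / (\delta + (1 - \delta)\mu(B))\):

```python
# Neo-additive capacity v(A) = d*l + (1 - d)*mu(A) on a small finite state
# space, with the three conditional update rules (5)-(7). Illustrative values.
from fractions import Fraction as F

states = frozenset({"s1", "s2", "s3", "s4"})
mu = {"s1": F(1, 2), "s2": F(1, 4), "s3": F(1, 8), "s4": F(1, 8)}
d, l = F(1, 4), F(3, 5)  # degree of ambiguity, degree of optimism

def mu_of(A):
    return sum(mu[s] for s in A)

def v(A):
    A = frozenset(A)
    if not A:
        return F(0)
    if A == states:
        return F(1)
    return d * l + (1 - d) * mu_of(A)   # Eq. (8)

def comp(A):
    return states - frozenset(A)

def full_bayes(A, B):                   # Eq. (5)
    return v(A & B) / (v(A & B) + 1 - v(A | comp(B)))

def optimistic(A, B):                   # Eq. (6)
    return v(A & B) / v(B)

def pessimistic(A, B):                  # Eq. (7)
    return (v(A | comp(B)) - v(comp(B))) / (1 - v(comp(B)))

A = frozenset({"s1"})
B = frozenset({"s1", "s2"})
# Full-Bayesian updating preserves the neo-additive form, with
# posterior ambiguity weight d_fb = d / (d + (1 - d) * mu(B)):
d_fb = d / (d + (1 - d) * mu_of(B))
```

Exact rational arithmetic makes the neo-additive identity testable with equality rather than floating-point tolerance; the three rules also order as expected, with the pessimistic conditional below the full-Bayesian one and the optimistic one above it.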
The Choquet expected value of the random variable $\tilde p$ with respect to a neo-additive capacity gives the following prior (Choquet) estimate of the true parameter value:

$$E[\tilde p; \nu] = \delta \left( \lambda \max_{\omega \in \Omega} \tilde p(\omega) + (1 - \lambda) \min_{\omega \in \Omega} \tilde p(\omega) \right) + (1 - \delta)\, E[\tilde p; \mu] = \delta \lambda + (1 - \delta)\, E[\tilde p; \mu],$$

where the simplification uses $\max_{\omega} \tilde p(\omega) = 1$ and $\min_{\omega} \tilde p(\omega) = 0$. The following observation characterizes posterior (Choquet) estimates in the light of sample information $I_n^k$ for the different update rules discussed in the previous section.

Observation. Contingent on the applied update rule, the agent's posterior estimate conditional on information $I_n^k$ is given as follows.

1. Full Bayesian updating:
$$E\left[\tilde p; \nu \mid I_n^k\right] = \delta^{FB}_{I_n^k}\, \lambda + \left(1 - \delta^{FB}_{I_n^k}\right) E\left[\tilde p; \mu \mid I_n^k\right], \quad \text{where} \quad \delta^{FB}_{I_n^k} = \frac{\delta}{\delta + (1 - \delta)\, \mu(I_n^k)}.$$

2. Optimistic Bayesian updating:
$$E\left[\tilde p; \nu \mid I_n^k\right] = \delta^{opt}_{I_n^k} + \left(1 - \delta^{opt}_{I_n^k}\right) E\left[\tilde p; \mu \mid I_n^k\right], \quad \text{where} \quad \delta^{opt}_{I_n^k} = \frac{\delta \lambda}{\delta \lambda + (1 - \delta)\, \mu(I_n^k)}.$$

3. Pessimistic Bayesian updating:
$$E\left[\tilde p; \nu \mid I_n^k\right] = \left(1 - \delta^{pess}_{I_n^k}\right) E\left[\tilde p; \mu \mid I_n^k\right], \quad \text{where} \quad \delta^{pess}_{I_n^k} = \frac{\delta (1 - \lambda)}{\delta (1 - \lambda) + (1 - \delta)\, \mu(I_n^k)}.$$

Observe that the agent's posterior estimates are given as a weighted average of the additive benchmark estimator $E[\tilde p; \mu \mid I_n^k]$, as given by (1), and of the numbers $\lambda$ (for full Bayesian learning), 1 (for optimistic Bayesian learning), and 0 (for pessimistic Bayesian learning), respectively. Thus, in contrast to the benchmark case of the standard model of Bayesian learning, the above learning rules exhibit an additional bias that reflects the agent's ambiguity attitudes.

Divergent Probabilistic Judgments
We are now ready to state our main result, which identifies conditions such that Bayesian learning with neo-additive beliefs may result in divergent probabilistic judgments in the following sense.

Definition (Divergent Probabilistic Judgments).
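The Observation can be checked numerically. The sketch below is our own toy setup (the two candidate parameter values, the uniform prior, and the sample sizes are assumptions): the unknown success probability $\tilde p$ takes one of two values under a uniform additive prior $\mu$, the agent observes $k$ successes in $n$ trials, and the three posterior Choquet estimates are computed from the Observation's formulas.

```python
from math import comb

# Toy setup: p~ in {0.25, 0.75}, uniform benchmark prior mu, information
# I_n^k = "k successes in n trials". Posterior Choquet estimates follow
# the three formulas of the Observation.

P_VALUES = (0.25, 0.75)
PRIOR = (0.5, 0.5)  # assumed additive benchmark belief

def likelihoods(n, k):
    return [comb(n, k) * p**k * (1 - p)**(n - k) for p in P_VALUES]

def mu_of_info(n, k):
    """mu(I_n^k): prior predictive probability of the sample."""
    return sum(w * l for w, l in zip(PRIOR, likelihoods(n, k)))

def benchmark_estimate(n, k):
    """E[p~; mu | I_n^k]: the standard Bayesian posterior mean."""
    post = [w * l for w, l in zip(PRIOR, likelihoods(n, k))]
    z = sum(post)
    return sum(p * q / z for p, q in zip(P_VALUES, post))

def posterior_estimate(n, k, delta, lam, rule):
    """Posterior Choquet estimate for the given update rule."""
    m, e = mu_of_info(n, k), benchmark_estimate(n, k)
    if rule == "full":          # mixes the benchmark with lambda
        d = delta / (delta + (1 - delta) * m)
        return d * lam + (1 - d) * e
    if rule == "optimistic":    # mixes the benchmark with 1
        d = delta * lam / (delta * lam + (1 - delta) * m)
        return d + (1 - d) * e
    if rule == "pessimistic":   # mixes the benchmark with 0
        d = delta * (1 - lam) / (delta * (1 - lam) + (1 - delta) * m)
        return (1 - d) * e
    raise ValueError(rule)
```

With $\delta = 0$ every rule collapses to the additive benchmark; with $\delta > 0$ the optimistic estimate is biased upward and the pessimistic estimate downward relative to it.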
We say that the difference in the agents' probabilistic judgments strictly increases (in the number of observations) iff, for all $n$,

$$E\left[\tilde p; \nu_1 \mid I_{n+1}^k\right] - E\left[\tilde p; \nu_2 \mid I_{n+1}^k\right] > E\left[\tilde p; \nu_1 \mid I_n^k\right] - E\left[\tilde p; \nu_2 \mid I_n^k\right] \qquad (9)$$

whereby

$$E\left[\tilde p; \nu_1 \mid I_n^k\right] \geq E\left[\tilde p; \nu_2 \mid I_n^k\right]. \qquad (10)$$

For the sake of expositional clarity, we restrict attention to the case in which differences in the agents' initial beliefs can only be due to their respective optimism parameters $\lambda_i$, $i \in \{1, 2\}$, under ambiguity, whereby we assume that agent 1 is more optimistic than agent 2, ensuring (10).

Proposition 2. Let the neo-additive probability measures $\nu_1$ and $\nu_2$ satisfy $\delta_i = \delta > 0$ and $\mu_i = \mu$ for $i \in \{1, 2\}$, as well as $\lambda_1 \geq \lambda_2$.

1. Suppose that both agents use the full Bayesian update rule. Then the difference in the agents' probabilistic judgments strictly increases if and only if $\lambda_1 > \lambda_2$.

2. Suppose that agent 1 uses the optimistic whereas agent 2 uses the pessimistic Bayesian update rule. Then the difference in the agents' probabilistic judgments strictly increases if and only if $\lambda_1 \geq \lambda_2$.

By Proposition 2, our stylized model of Bayesian learning formally accommodates two alternative scenarios of diverging probabilistic judgments. In the first scenario, divergence arises because of different personal attitudes toward the resolution of ambiguity. In the second scenario, divergence corresponds to personal attitudes toward the interpretation of information. While existing psychological studies provide empirical evidence for the phenomenon of diverging probabilistic judgments, they do not differentiate between these two alternative explanations of the phenomenon. It would therefore be interesting to gather more empirical evidence on updating and learning with nonadditive beliefs.
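Part 1 of the proposition can be illustrated with a short simulation (our own construction; the parameter values and the use of the exact observed sequence as the conditioning event are assumptions). Two full-Bayesian-updating agents share $\delta$ and $\mu$ but differ in optimism; because the prior predictive probability of the observed sequence shrinks with every observation, the weight on $\lambda$ grows, and the gap $\delta^{FB}_{I_n}(\lambda_1 - \lambda_2)$ between their estimates widens.

```python
import random

# Two agents, full Bayesian updating, same delta and mu, lambda1 > lambda2.
# The conditioning event is the exact observed 0/1 sequence, so its prior
# predictive probability strictly shrinks with each observation.

P_VALUES, PRIOR = (0.25, 0.75), (0.5, 0.5)   # assumed toy benchmark prior

def mu_of_sequence(seq):
    k, n = sum(seq), len(seq)
    return sum(w * p**k * (1 - p)**(n - k) for w, p in zip(PRIOR, P_VALUES))

def benchmark_estimate(seq):
    k, n = sum(seq), len(seq)
    post = [w * p**k * (1 - p)**(n - k) for w, p in zip(PRIOR, P_VALUES)]
    z = sum(post)
    return sum(p * q / z for p, q in zip(P_VALUES, post))

def full_bayes_estimate(seq, delta, lam):
    d = delta / (delta + (1 - delta) * mu_of_sequence(seq))
    return d * lam + (1 - d) * benchmark_estimate(seq)

random.seed(1)
seq = [int(random.random() < 0.75) for _ in range(40)]   # data from p = 0.75
gaps = [full_bayes_estimate(seq[:n], 0.3, 0.9)           # agent 1: lambda = 0.9
        - full_bayes_estimate(seq[:n], 0.3, 0.1)         # agent 2: lambda = 0.1
        for n in range(1, 41)]
```

Both agents see the same data, yet the gap between their estimates grows monotonically toward $\lambda_1 - \lambda_2 = 0.8$: ambiguity attitudes, not evidence, drive them apart.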
Cross-References
▶ Attitudes – Formation and Change
▶ Bayesian Learning
▶ Belief Formation
▶ Belief-Based Learning Models
▶ Bounded Learning – Rational Learning
▶ Divergent Thinking and Learning

References
Baron, J. (2008). Thinking and deciding. New York/Melbourne/Madrid: Cambridge University Press.
Chateauneuf, A., Eichberger, J., & Grant, S. (2007). Choice under uncertainty with the best and worst in mind: Neo-additive capacities. Journal of Economic Theory, 137, 538–567.
Doob, J. L. (1949). Application of the theory of martingales. In Colloques Internationaux du Centre National de la Recherche Scientifique (Ed.), Le Calcul des Probabilités et ses Applications (Vol. 13, pp. 23–27). Paris: Author.
Gilboa, I., & Schmeidler, D. (1993). Updating ambiguous beliefs. Journal of Economic Theory, 59, 33–49.
Savage, L. J. (1954). The foundations of statistics. New York/London/Sydney: Wiley.
Schmeidler, D. (1986). Integral representation without additivity. Proceedings of the American Mathematical Society, 97, 255–261.

Divergent Thinking and Learning
OLGA M. RAZUMNIKOVA
Cognitive Physiology Lab, Department of Pedagogy and Psychology, Research Institute of Physiology SB RAMS, Novosibirsk State Technical University, Novosibirsk, Russia

Synonyms
Convergent thinking

Definition
Divergent thinking is a thought process used to generate diverse and numerous ideas on some mental task, implying that more than one solution may be correct. The term divergent thinking is used in the sciences of learning and cognition to designate a psychological construct that accounts for this specific form of human thinking. The goal of divergent thinking is to generate many different ideas about a topic in a short period of time. It involves breaking a topic down into its various component parts in order to gain insight into the various aspects of the topic.

Theoretical Background
The concept of divergent thinking was developed by psychologist J. P.
Guilford, who saw it as a major component of creativity and associated it with four main characteristics (Guilford 1967):
● Fluency, the ability to rapidly produce a large number of ideas for solving a problem
● Flexibility, the ability to generate multiple problem solutions from different semantic categories
● Originality, the ability to generate unique or unusual ideas
● Elaboration, the ability to develop the details of a solution and combine them into a final embodiment of the general idea
The elements that make up divergent thinking include lateral thinking, deductive and inductive reasoning, identifying, synthesizing, differentiating, critical thinking, and problem solving. The choice of the final solution can occur unconsciously, on the basis of insight. A heuristic strategy is one relevant method for effective divergent thinking; another strategy is to apply critical judgment to a set of spontaneously generated ideas. Along with divergent thinking, creative problem solving can therefore demand convergent thinking, intrinsic motivation, persistence, openness to experience, willingness to take risks, functional nonconformity, and other qualities. Many researchers therefore consider that divergent thinking alone is not sufficient for creative achievement and is not equivalent to creativity. Divergent thinking tests are nevertheless often used, though they really estimate only the potential for creative thought. Early versions of divergent thinking tasks focused on ideational processes of verbal and figural content. More recently, efforts have been made to design divergent thinking tests relevant to various domains, including art, science, management, and engineering, as well as complex real-world issues and problem-finding skills in children and students (e.g., Basadur et al. 2000; Scott et al. 2004).
A number of studies have reported more valid divergent thinking scores when examinees were given explicit instructions to be original or creative and when administration of the tests was untimed. There are positive and statistically significant relationships between various divergent thinking test scores and reasonably acceptable nontest indices of creative behavior and achievement. Both creativity and divergent thinking, as assessed through open-ended tests such as consequences, incomplete figures, and alternative uses, where responses are scored for fluency (number of responses), flexibility (category shifts in responses), originality (uniqueness of responses), and elaboration (refinement of responses), represent a distinct capacity contributing to many forms of creative performance. Based on these characteristics of divergent thinking, different paths of learning this kind of thinking can be distinguished. The first level is fluency training: training students' thinking speed so that they can propose more concepts and more answers in the shortest possible time. The second level is cognitive flexibility training, so that students can accommodate themselves to different changes. The third level is novelty: training students' ability to boldly break away from conventions and bravely develop their creative spirit. The capacity to apply ideas creatively in new contexts, referred to as the ability to "transfer" knowledge, requires that learners have opportunities to actively develop their own representations of information in order to convert it to a usable form.
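The scoring scheme described above can be sketched in a few lines. In this illustrative scorer (our own construction, not a standardized instrument), each examinee's alternative-uses responses come pre-coded with a semantic category; fluency is the response count, flexibility the number of distinct categories, and originality the number of responses that are unique within the whole sample.

```python
from collections import Counter

def score_divergent_thinking(responses_by_person):
    """responses_by_person maps person -> list of (response, category) pairs."""
    # pool all responses to detect which ones are unique in the sample
    pooled = Counter(resp for answers in responses_by_person.values()
                     for resp, _ in answers)
    scores = {}
    for person, answers in responses_by_person.items():
        scores[person] = {
            "fluency": len(answers),                          # how many ideas
            "flexibility": len({cat for _, cat in answers}),  # distinct categories
            "originality": sum(1 for resp, _ in answers       # unique in sample
                               if pooled[resp] == 1),
        }
    return scores

# Hypothetical alternative-uses responses for a brick
sample = {
    "A": [("paperweight", "weight"), ("doorstop", "weight"),
          ("sculpture base", "art")],
    "B": [("paperweight", "weight"), ("hammer", "tool")],
}
results = score_divergent_thinking(sample)
```

Elaboration scoring is omitted here because it requires a human judgment of how refined each response is, which does not reduce to counting.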
Since divergent thinking involves different cognitive processes, such as information gathering, comparing, observing, hypothesizing, associating, classifying, generalizing, interpreting, conceptual combination, synthesizing ideas, idea evaluation, and implementation planning, its development can proceed along different paths and include training programs designed to:
● Speed up one's mental operations and help utilize all of one's resources.
● Assist the assimilation of new information, develop one's susceptibility to new information, and encourage one's conscious curiosity, facilitating in this way the spotting of relationships among distant areas of interest and operation.
● Eliminate hindrances to the free functioning of mental operations and associative mechanisms.
● Extend the area of creative processes for the effective promotion of creative power.
With the identification of divergent thinking as a distinct capacity making a unique contribution to creative thought, scholars interested in the development of creativity began to apply divergent thinking tasks in the design of training. Divergent thinking training can begin with a discussion of the value of a task performance strategy, such as a heuristic strategy for identifying alternative uses, followed by practice in the application of this strategy. Divergent thinking tasks intended to elicit alternative uses, story completion, and question generation have provided the basis for training. Novelty of ideas is measured by different quality attributes, such as originality, effectiveness, and implementability. Many techniques make the generation of original ideas easier, among them random words and pictures, false rules, challenging facts, analogies, and wishful thinking. A specific tool designed to enhance divergent thinking in groups is brainstorming, developed by A.
Osborn as a method by which a group tries to find a solution to a specific problem by amassing a list of ideas spontaneously contributed by its members. Osborn's rules for brainstorming sessions are as follows:
● Judgment of ideas is not allowed (this comes later).
● Outlandish ideas are encouraged (these can be scaled back later).
● A large quantity of ideas is preferred (quantity leads to quality).
● Members should build on one another's ideas (members should suggest idea improvements).
Brainstorming is used extensively as a technique for group idea generation in marketing strategy, research and development procedures, written documents and articles, engineering components, government policies, management methods, and company structure and policy. Another method of divergent thinking training, which aids effective discussions and individual thinking by systematically promoting different states of mind, is the Six Thinking Hats tool developed by E. de Bono. By using different states of mind, de Bono's method gives a systematic way of considering a subject from different perspectives and thereby treating it more completely and effectively. Another development of brainstorming is Gordon's Synectics, which means a connection of different elements of the topic. Divergent thinking in engineering activity can be developed on the basis of G. S. Altshuller's TRIZ. Especially when a knowledge domain is complex and fraught with ill-structured information, active-learning strategies are demonstrably more effective than traditional linear teaching. Ideation can be trained by modeling divergent thinking for students and clearly reinforcing their successful divergent thinking.
Important Scientific Research and Open Questions
Using different neurophysiological measurement methods, such as EEG, fMRI, and PET, neuroscientific studies have yielded evidence of possible brain correlates underlying creative and divergent thinking (e.g., Fink et al. 2007; Howard-Jones et al. 2005; Razumnikova et al. 2007). Analysis of the coherence of the EEG, with its multichannel leads from different parts of the cortex, makes it possible to clarify the correspondence between originality in problem solving and the divergent nature of the interaction between different regions of the cortex. It was found that creative people are distinguished by a high ability to change the frequency-spatial organization of cortical activity, specific forms of which depend on sex, intelligence, and other individual characteristics, especially the person's emotional sensitivity (e.g., Razumnikova et al. 2007). Defining the personality profile of an individual makes it possible to predict his or her success in divergent thinking and creative problem solving, which require specific thought strategies in professional work. The interaction of the left frontal region with the parietal part of the attention system, which is more pronounced in women, may indicate that conscious control of such activity is more important for women than for men. Consequently, increasing the significance attributed to intellectual abilities in modern professional work, as well as the prestige of innovative forms of behavior, should be seen as a most significant step toward increasing the creativity of women. The specific nature of the interaction between the anterior and posterior cortical regions is just as necessary a condition for organizing different thinking strategies as is the interaction of the two hemispheres, and this interaction is related to the type of problem being solved and to individual distinctions in the regulation of functional brain activity (Fig. 1).
Divergent Thinking and Learning. Fig. 1 [Figure: Sex differences in arousal (reticular-thalamic-cortical interaction, 6–10 Hz rhythms), emotional activation (cortical-limbic interaction, 4–6 and 10–30 Hz rhythms), and the strategy of idea search (information selection; cortico-cortical interactions: hemispheric asymmetry and frontal-TPO interaction, 4–30 Hz rhythms), in relation to divergent thinking and the trait dimensions extraversion-introversion, sensation-intuition, feeling-thinking, neuroticism, and intelligence. TPO: temporo-parietal-occipital]

Clearly expressed intellectual and character traits are formed as the result of a specific individual mosaic of both activating and inhibitory interactions of cortical and subcortical structures, establishing conditions for future thinking strategies: divergent/convergent, rational/irrational, logical/intuitive, and verbal/figural. A great deal of evidence has also been developed for the weakening of functional asymmetry as the reserves of the right hemisphere are drawn upon for divergent thinking. Right-hemisphere dominance in creative activities is linked to the fact that its functions include not only visual-spatial but also verbal processes, such as the construction of metaphors or semantic operations that require a wide net of associations (Howard-Jones et al. 2005). An EEG study revealed that the generation of original ideas was associated with alpha synchronization in frontal brain regions and with a widespread pattern of alpha synchronization over parietal cortical regions (Fink et al. 2007). Attention to alpha oscillations (7–13 Hz) in studies of EEG correlates of creativity is based on the fact that the power of the alpha rhythm is an indicator of cortical activation, a low level of which, according to C. Martindale, defines the "defocused" state of attention required for effective creative thinking.
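As a methodological aside, alpha-band power of the kind referred to above can be estimated from a single EEG channel with a simple periodogram. The sketch below is a minimal illustration (the sampling rate, band limits, and synthetic signal are our assumptions); real EEG pipelines add artifact rejection and more robust spectral estimators such as Welch's method.

```python
import numpy as np

def band_power(signal, fs, band=(7.0, 13.0)):
    """Mean periodogram power of `signal` within `band` (Hz)."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()                                   # remove DC offset
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)  # one-sided periodogram
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].mean())

# Synthetic 4-second "EEG": a 10 Hz alpha component plus white noise
fs = 250.0                                             # assumed sampling rate, Hz
t = np.arange(0, 4.0, 1.0 / fs)
rng = np.random.default_rng(0)
eeg = np.sin(2 * np.pi * 10.0 * t) + 0.3 * rng.normal(size=t.size)
```

Comparing `band_power(eeg, fs)` against the power in a non-alpha band (e.g., 20–26 Hz) makes the dominance of the 10 Hz component obvious.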
Considering the functional significance of the separate brain structures in divergent thinking, it is worth mentioning that it is not a single necessary and sufficient area of the brain that ensures the final result, but rather a topographically extensive neural network, which dynamically changes its characteristics depending on the stage of the creative process or its character. Ideas about the interaction of the anterior and posterior cortical areas during a spontaneous or deliberative search for a creative solution to a problem seem promising for further study of the patterns of the neurobiological bases of creative abilities, including effective divergent thinking. The parietal system, in this case, represents the "searching" part of the creative process and provides the generation of multiple ideas through a variety of associations of visual, auditory, and symbolic representations. The frontal system performs a "critical/initiating" function and, according to individual goals and interests, selects ideas received from the parietal system, developing those that are acceptable and suppressing those that do not seem necessary.

Cross-References
▶ Brainstorming and Learning
▶ Creativity and Learning Resources
▶ Creativity, Problem Solving, and Learning
▶ Flexibility in Learning and Problem Solving
▶ Heuristics and Problem Solving
▶ Nature of Creativity

References
Basadur, M., Runco, M. A., & Vega, L. (2000). Understanding how creative thinking skills, attitudes and behaviors work together: A causal process model. The Journal of Creative Behavior, 34, 77–100.
Fink, A., Benedek, M., Grabner, R. H., Staudt, B., & Neubauer, A. C. (2007). Creativity meets neuroscience: Experimental tasks for the neuroscientific study of creative thinking. Methods, 42, 68–76.
Guilford, J. P. (1967). The nature of human intelligence. New York: McGraw-Hill.
Howard-Jones, P. A., Blakemore, S. J., Samuel, E. A., Summers, I. R., & Claxton, G. (2005).
Semantic divergence and creative story generation: An fMRI investigation. Cognitive Brain Research, 25, 240–250.
Razumnikova, O. M., Volf, N. V., & Tarasova, I. V. (2007). Gender differences in creativity: A psychophysiological study. In C. Martindale, V. Petrov, & L. Dorfman (Eds.), Aesthetics and innovation (pp. 445–468). Newcastle: Cambridge Scholars Press.
Scott, G., Leritz, L. E., & Mumford, M. D. (2004). The effectiveness of creativity training: A quantitative review. Creativity Research Journal, 16, 361–388.

Diversive Exploration
▶ Curiosity and Exploration
▶ Play, Exploration, and Learning

Divided Attention
▶ Dual-Task Performance in Motor Learning

Division of Labor
▶ Altruistic Behavior and Cognitive Specialization in Animal Communities

Dogmatism
ADAM BROWN
School of Education, Elementary Education, St. Bonaventure University, St. Bonaventure, NY, USA

Synonyms
Belief formation; Belief system; Open- and closed-mindedness

Definition
The word dogmatism comes from the Greek word "δόγμα" (dogma), which means that which seems to one, an opinion, or a belief. Dogmas are commonly described in religion, in which they are the fundamental tenets and beliefs of that religion. Dogmatism can also relate to how readily one receives novel information. Those who are open to new information are considered to be low in dogmatism, and those who are typically more closed-minded are higher in dogmatism. Dogmatism is defined by Rokeach as "a relatively closed cognitive organization of beliefs and disbeliefs about reality, organized around a central set of beliefs about absolute authority which, in turn, provides a framework for patterns of intolerance and qualified tolerance toward others" (Rokeach 1954, p. 195). It is further defined as "positiveness in assertion of opinion especially when unwarranted or arrogant" (Merriam-Webster Dictionary).

Theoretical Background
Milton Rokeach made a seminal contribution to the subject of dogmatism.
According to Rokeach (1960), people’s schemas or cognitive structures are separated into two well-organized belief systems: the belief system and disbelief system. A belief system is a working schema of all the beliefs an individual accepts as truthful. A disbelief system is constructed of a cluster of subsystems that an individual would deem to be false. Rokeach describes dogmatism as a rather narrow arrangement of beliefs and disbeliefs about reality, which provides a framework for acceptance or rejection of the beliefs of others. The separate systems of belief and disbelief work to provide individuals with a means to organize their interpretations of truth and falsehood. These systems can also be described as open or closed. “The closed nature of the belief systems of individuals high in dogmatism can be observed in their tendency to compartmentalize and isolate their beliefs and disbeliefs, whereas the more open belief systems of individuals low in dogmatism can be observed in their readiness to make connections between disparate beliefs” (Davies 1998, p. 456). Isolation occurs when two beliefs are not seen as interrelated and, hence, no communication between the two subsystems takes place. Important Scientific Research and Open Questions Dogmatism seems to be closely tied to environmental factors such as parental upbringing (Lesser 1985), moral reasoning (Nichols and Stults 1985), and personality characteristics such as tender mindedness (Rokeach and Hanley 1956). However, there also seem to be several cognitive aspects to dogmatism (e.g., Rokeach et al. 1955). Throughout the 1950s and 1960s, there was considerable investigation of these aspects of the construct (e.g., Long and Ziller 1965). Dogmatism is a topic that has regained interest as of late. This newfound popularity is due partially to its interrelatedness to contemporary areas of research and interest (e.g., conflict resolution, behavior problems in school, etc.). 
Dogmatism has implications for research in the area of cognitive functioning as well.

Cross-References
▶ Dogmatism and Learning

References
Davies, M. F. (1998). Dogmatism and belief formation: Output interference in the processing of supporting and contradictory cognitions. Journal of Personality and Social Psychology, 75, 456–466.
Lesser, H. (1985). The socialization of authoritarianism in children. The High School Journal, 68(3), 162–166.
Long, B., & Ziller, R. (1965). Dogmatism and predecisional information search. The Journal of Applied Psychology, 49, 376–378.
Nichols, D. P., & Stults, D. M. (1985). Moral reasoning: Defining issues in open and closed belief systems. The Journal of Social Psychology, 125, 535–536.
Rokeach, M. (1954). The nature and meaning of dogmatism. Psychological Review, 61, 194–204.
Rokeach, M. (1960). The open and closed mind. New York: Basic Books.
Rokeach, M., & Hanley, C. (1956). Care and carelessness in psychology. Psychological Bulletin, 53, 183–186.

Further Reading
Dogmatism (2010). In Wikipedia. Retrieved 20 Apr 2010, from http://en.wikipedia.org/wiki/Dogmatism
Rokeach, M., McGovney, W., & Denny, M. (1955). A distinction between dogmatic and rigid thinking. Journal of Abnormal and Social Psychology, 51, 87–93.

Dogmatism and Learning
ADAM BROWN, ANNA PRUDENTE
School of Education, Elementary Education, St. Bonaventure University, St. Bonaventure, NY, USA

Synonyms
Belief formation; Belief system; Open- and closed-mindedness

Definition
Dogmatism is defined by Rokeach (1954) as "a relatively closed cognitive organization of beliefs and disbeliefs about reality, organized around a central set of beliefs about absolute authority which, in turn, provide a framework for patterns of intolerance and qualified tolerance toward others" (p. 195). The more open-minded an individual is, the lower that individual is said to be in dogmatism. Conversely, more closed-minded individuals are higher in dogmatism.
Learning is "the process by which changes in behavior arise as a result of experience interacting with the world" (Gluck et al. 2008, p. 2). It is "moving information from working memory – where we actively think about it – to long-term memory, where information is stored indefinitely" (Martinez 2010, p. 62). We do not learn everything that we think about; rather, we learn only a small portion of the great bulk of information that is presented to us. To learn information, we must first perceive the incoming information, consciously attend to it, and move it from temporary working memory to enduring long-term memory.

Theoretical Background
According to Rokeach (1960), people's schemas or cognitive structures are separated into two well-organized systems: the belief system and the disbelief system. A belief system is a working schema of all the beliefs an individual accepts as truthful. A disbelief system is constructed of a cluster of subsystems that an individual would deem to be false. The degree of dogmatism individuals exhibit shapes their belief systems and subsequently determines their style and capability of learning. Highly dogmatic individuals tend to isolate their beliefs and disbeliefs, with little room for integrating new information, while individuals low in dogmatism are more open to the possibilities and potential of new information. Working memory is the system for the temporary maintenance and manipulation of information. One's working memory allows for the playing out of complex cognitive actions, such as comprehension, reasoning, and learning (Baddeley 1994). The capacity of working memory differs from individual to individual, with each of us having a restricted amount of attentional resources. This availability of working memory space is unique for each of us (Tirre and Pena 1992). It is with respect to working memory capacity that the relationship between dogmatism and learning comes into play.
Let us consider the phenomenon of belief formation, which is, in essence, synonymous with learning. It is contended that general rules exist which govern cognitive processing as it relates to dogmatism and belief formation. When an individual is presented with new information, that information is used to form a provisional belief. Whether or not that provisional belief is substantiated is determined by the incorporation of supporting and/or contradictory information. Highly dogmatic individuals tend to seek out supporting information at the expense of contradictory information, in an effort to strengthen their tentative belief, granting it, in their eyes, the support necessary to strengthen and/or confirm it (Davies 1998). When one has limited availability of the attentional resources necessary to integrate new contradictory information, that is, limited working memory space for considering alternative arguments, one tends to be more dogmatic. The more evidence one is presented with, the more confident that individual is in his or her belief (Kelly 2008). Strength of argument impacts belief formation and, subsequently, learning. Individuals low in dogmatism tend to be influenced by an argument regardless of the expertise displayed by the individual imparting the information. Low-dogmatic individuals are persuaded by the strength of the argument, not by the individual providing the information. These individuals are able to reason through the information and determine its value with minimal consideration granted to the source. Conversely, highly dogmatic individuals are inclined to believe an argument based on source expertise alone. That is, if the source appears to be a legitimate informant on the argument at hand, highly dogmatic individuals deem the validity of the argument thereby strengthened, regardless of the actual information imparted (DeBono and Klein 1993).
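The asymmetry described above can be made concrete with a toy log-odds model (entirely our own construction, not a model from the literature cited here): a dogmatism parameter d in [0, 1] discounts contradictory evidence, so a perfectly balanced evidence stream leaves an open-minded agent roughly where it started but drives a dogmatic agent toward certainty.

```python
from math import exp, log

def update(belief, supports, strength, dogmatism):
    """One log-odds update; contradictory evidence is attenuated by (1 - d)."""
    logit = log(belief / (1.0 - belief))
    if supports:
        logit += strength
    else:
        logit -= strength * (1.0 - dogmatism)  # discounted when dogmatic
    return 1.0 / (1.0 + exp(-logit))

def final_belief(dogmatism, evidence, start=0.6, strength=1.0):
    belief = start  # initial provisional belief
    for supports in evidence:
        belief = update(belief, supports, strength, dogmatism)
    return belief

mixed = [True, False] * 10  # perfectly balanced evidence stream
open_minded = final_belief(dogmatism=0.0, evidence=mixed)
dogmatic = final_belief(dogmatism=0.9, evidence=mixed)
```

The open-minded agent (d = 0) returns to its starting belief because supporting and contradicting updates cancel; the dogmatic agent (d = 0.9) gains net positive evidence from the same stream, illustrating how asymmetric processing alone can manufacture confidence.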
Important Scientific Research and Open Questions
Dogmatism is a topic that has regained interest as of late. This newfound popularity is due partially to its interrelatedness to contemporary areas of research and interest (e.g., conflict resolution, behavior problems in school, cognitive functioning, etc.). Many advocate the necessity of developing critical thinking skills, and the reasons for this promotion are extensive. Dogmatism is not typically incorporated into the idea of critical thinking, yet its relationship to learning warrants consideration. If, as a group, we simply accept what is told to us as truth, without considering the possibility that the information is potentially false, we run the risk of being both uninformed and easily manipulated (White 2006). In doing so, we perceive all new incoming information as fact without any justification for this belief other than the simple reality that it was told to us. Thus, belief could in essence turn into a mechanism of power and influence if individuals do not critically evaluate for themselves. If we are able to think critically about a situation or proposition, considering both sides, and replace fallacies with progressive truths, we are able to enhance our own learning as well as the social good (Harrigan 2008). Tolerance is advanced when one is willing to consider new information as possible truth (Leone 1989). If we are willing to remove ourselves from our own inflexible belief systems and consider the worth of assessing alternative views, we are exhibiting tolerance and opening ourselves up to the potential benefits of accepting new information. Essentially, if we are willing to open our minds and be less dogmatic, we become more accepting of the notion that other actions and beliefs are perhaps justifiable.
It is arguable that those who are very willing to readily accept new information as truths open the door to manipulation and may become the clay with which others can easily mold. On the other hand, however, it is erroneous to doubt simply for doubt’s sake alone. Doubt must be grounded in logic, facts, and evidence, and simply opting to disbelieve for the sake of disbelieving is in itself a form of belief – it becomes one’s mantra – and makes one more dogmatic. Therefore, it is debatable that a balance must be achieved in which the dichotomy of acceptance versus rejection of arguments is weighed. Cross-References ▶ Belief Formation ▶ Dogmatism and Belief Formation ▶ Learning About Learning ▶ Learning and Thinking DeBono, K. G., & Klein, C. (1993). Source expertise and persuasion: The moderating role of recipient dogmatism. Personality and Social Psychology Bulletin, 19(2), 167–173. Gluck, M. A., Mercado, E., & Myers, C. E. (2008). Learning and memory: From brain to behavior. New York: Worth. Harrigan, C. (2008). Against dogmatism: A continued defense of switch side debate. Contemporary Argumentation and Debate, 29, 37–66. Kelly, T. (2008). Disagreement, dogmatism, and belief polarization. The Journal of Philosophy, 105(10), 243–252. Leone, C. (1989). Self-generated attitude: Some effects of thought and dogmatism on attitude polarization. Personality and Individual Differences, 10, 243–252. Martinez, M. E. (2010). Learning and cognition: The design of the mind. New Jersey: Merrill. Rokeach, M. (1954). The nature and meaning of dogmatism. Psychological Review, 61, 194–204. Rokeach, M. (1960). The open and closed mind. New York: Basic Books. Tirre, W., & Pena, C. (1992). Investigation of functional working memory in the reading span test. Journal of Educational Psychology, 84, 462–472. White, R. (2006). Problems for dogmatism. Philosophical Studies, 131, 525–557. 
Baddeley, A. (1994). Working memory: The interface between memory and cognition. In D. L. Schacter & E. Tulving (Eds.), Memory systems 1994 (pp. 351–367). Cambridge, MA: MIT Press.
Davies, M. F. (1998). Dogmatism and belief formation: Output interference in the processing of supporting and contradictory cognitions. Journal of Personality and Social Psychology, 75(2), 456–466.

Domain of Movement
▶ Impaired Multidimensional Motor Sequence Learning

Donation
▶ Cognitive Aspects of Prosocial Behavior in Nonhuman Primates

Double-Blind Experiment
An experimental procedure in which neither the subjects of the experiment nor the persons administering it know the critical aspects of the experiment; a double-blind procedure is used to guard against experimenter bias, the "Clever Hans effect," and placebo effects.

Double-Loop Learning
MOHAMED AMINE CHATTI1, MATTHIAS JARKE1, ULRIK SCHROEDER2
1 Informatik 5, RWTH Aachen University, Aachen, Germany
2 Lehr- und Forschungsgebiet Informatik 9, RWTH Aachen University, Aachen, Germany

Definition
Double-loop learning is a core concept in organizational learning. It is defined by Argyris and Schön (1996, p. 21) as "learning that results in a change in the values of theory-in-use, as well as in its strategies and assumptions."

Theoretical Background
The concept of double-loop learning was introduced by Argyris and Schön (1978) within an organizational learning context. A starting point for Argyris and Schön's double-loop learning is the notion of theory of action. A theory of action includes "strategies of actions, the values that govern the choice of strategies and the assumptions on which they are based" (Argyris and Schön 1996, p. 13). The authors make a distinction between two types of theories of action: espoused theory and theory-in-use. They write: "Theory of action, whether it applies to organizations or individuals, may take two different forms.
By 'espoused theory' we mean the theory of action which is advanced to explain or justify a given pattern of activity. By 'theory-in-use' we mean the theory of action which is implicit in the performance of that pattern of activity. (p. 13)

An organization's or individual's theory-in-use includes norms for corporate or individual performance, strategies for achieving values, and assumptions that bind strategies and values together. Argyris and Schön (1978, 1996) draw upon the notion of theory-in-use to present their view of organizational learning. According to the authors, organizational learning is the process of detecting and correcting errors. It takes account of the interplay between the actions and interactions of individuals and higher-level organizational entities such as departments, divisions, or groups of managers. Each member of an organization constructs his or her own representation of the theory-in-use of the whole. Organizational learning then occurs when individuals within an organization experience a problem (error detection) and work on solving this problem (error correction). Error correction happens through a continuous process of organizational inquiry, in which everyone in the organizational environment can inquire, test, compare, and adjust his or her theory-in-use, which is a private image of the organizational theory-in-use. Effective organizational inquiry then leads to a reframing of one's theory-in-use, thereby changing the organizational theory-in-use. Argyris (1991) asserts that most people define learning too narrowly as mere "problem solving," so they focus on identifying and correcting errors in the external environment. This is what Argyris calls single-loop learning. But, in the words of Argyris: "If learning is to persist, managers and employees must also look inward. They need to reflect critically on their own behavior, identify the ways they often inadvertently contribute to the organization's problems, and then change how they act. (p.
99) This deeper form of learning is what Argyris terms double-loop learning. Argyris further claims that highly skilled professionals are frequently very good at single-loop learning: "After all, they have spent much of their lives acquiring academic credentials, mastering one or a number of intellectual disciplines, and applying those disciplines to solve real-world problems. But ironically, this very fact helps explain why professionals are often so bad at double-loop learning. Put simply, because many professionals are almost always successful at what they do, they rarely experience failure. And because they have rarely failed, they have never learned how to learn from failure. So whenever their single-loop learning strategies go wrong, they become defensive, screen out criticism, and put the 'blame' on anyone and everyone but themselves. In short, their ability to learn shuts down precisely at the moment they need it the most. (p. 99)

To put it simply, single-loop learning differs from double-loop learning in that the former aims at efficiency (i.e., doing things right) while the latter focuses on effectiveness (i.e., doing the right things). Argyris and Schön (1996) define single-loop learning as "learning that changes strategies of actions or assumptions underlying strategies in ways that leave the values of a theory of action unchanged" (p. 20), and double-loop learning as "learning that results in a change in the values of theory-in-use, as well as in its strategies and assumptions" (p. 21). As depicted in Fig. 1, single-loop learning refers to a single feedback loop that connects detected error to organizational strategies of action and their underlying assumptions. These strategies and assumptions are modified, but the norms and values of the theory-in-use remain unchanged. In double-loop learning, by contrast, correction of error requires inquiry through which organizational norms and values themselves are modified.
Double-loop learning refers to the two feedback loops that connect detected error not only to strategies and assumptions but also to the norms and values of the theory-in-use. Strategies and assumptions may then change concurrently with, or as a consequence of, change in the values of the theory-in-use (Argyris and Schön 1996). In other words, Argyris and Schön differentiate between learning that does not change the underlying mental models of the learners but merely revises their application scenarios (single-loop), and learning that does effect such changes (double-loop). Double-loop learning starts from a learner's mental model (i.e., theory-in-use), defined by base norms, values, strategies, and assumptions, and suggests critical reflection in order to challenge, invalidate, or confirm the theory-in-use. Double-loop learning also encourages genuine inquiry into and testing of one's actions and requires self-criticism, that is, the capacity for questioning one's theory-in-use and openness to changing it as a function of learning. The result of reflection, inquiry, testing, and self-criticism would then be a reframing of one's norms and values, and a restructuring of one's strategies and assumptions, according to the new settings. To explain the distinction between single- and double-loop learning, Argyris and Schön (1996) give the example of the behavior of a heating or cooling system governed by a thermostat: "In an analogy to single-loop learning, the system changes the values of certain variables (for example, the opening or closing of an air valve) in order to keep temperature within the limits of a setting. Double-loop learning is analogous to the process by which a change in the setting induces the system to maintain temperature within the range specified by a new setting. (p. 21)

Argyris and Schön (1996) stress that double-loop learning is essential for productive organizational learning within rapidly changing and uncertain settings.
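The thermostat analogy can be expressed as a small code sketch. This is an illustration only, not from Argyris and Schön: the class and method names are hypothetical. Single-loop learning adjusts the action strategy (the valve) while leaving the governing value (the setpoint) unchanged; double-loop learning revises the setpoint itself before acting.

```python
# Illustrative sketch of the thermostat analogy; all names are hypothetical.

class Thermostat:
    def __init__(self, setpoint):
        self.setpoint = setpoint    # governing value ("norms/values")
        self.valve_open = False     # action strategy

    def single_loop(self, temperature):
        """Adjust the strategy (valve) to satisfy the unchanged setpoint."""
        self.valve_open = temperature < self.setpoint
        return self.valve_open

    def double_loop(self, temperature, new_setpoint):
        """First question and revise the governing value itself,
        then let the single loop act under the new value."""
        self.setpoint = new_setpoint
        return self.single_loop(temperature)

t = Thermostat(setpoint=20)
t.single_loop(temperature=18)                   # heat on: strategy changes, setpoint does not
t.double_loop(temperature=18, new_setpoint=16)  # the setpoint itself is revised; heat stays off
```

The point of the sketch is that both loops end in the same kind of action, but only the second loop treats the governing value as something open to revision.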
As they put it, "long-term effectiveness depends on the possibility of double-loop learning" (p. 96) and "values that govern double-loop organizational inquiry are foundational to sustained productive organizational learning" (p. 246). Argyris and Schön (1996) further distinguish between Model I and Model II theories-in-use as two models of theories-in-use that either inhibit or enhance double-loop learning.

Double-Loop Learning. Fig. 1 Single- and double-loop learning (theory-in-use: norms/values; action strategies/assumptions; consequences/outcomes of action; error detection/correction)

The authors point out that when individuals deal with issues that are embarrassing or threatening, their reasoning and action conform to Model I theories-in-use. Theories-in-use consistent with Model I are often triggered by defensiveness, competitiveness, and mutual mistrust and are shaped by an implicit willingness to protect oneself, avoid embarrassment, maintain control, maximize winning, and minimize losing. Such Model I theories-in-use often lead to defensive reactions, which, in turn, reduce the likelihood that individuals will engage in the kind of organizational inquiry that leads to productive learning outcomes. Model I theories-in-use are thus said to inhibit double-loop learning. Model II theories-in-use, by contrast, value processes that foster double-loop learning, such as sharing control, participation in the design and implementation of action, and encouraging frequent organizational inquiry as well as public testing of actions and their underlying assumptions. The consequence of using Model II theories-in-use should then be "an enhancement of the conditions for double-loop learning in organizational inquiry where assumptions and norms central to organizational theory-in-use can be surfaced, publicly confronted, tested, and restructured" (Argyris and Schön 1996, p. 119).
Model I and Model II theories-in-use form the basis for a distinction between O-I and O-II organizational learning systems. According to Argyris and Schön (1996), Model I theories-in-use lead to an organization with an O-I learning system, which is "dominated by organizational defensive routines" (p. 106) and "highly unlikely to learn to alter its governing variables, norms, and assumptions, because this would require organizational inquiry into double-loop issues, and O-I systems militate against such inquiry" (p. 111). Model II theories-in-use, on the other hand, help organizations move toward O-II learning systems with a capacity for productive organizational inquiry and double-loop learning.

Cross-References
▶ Knowledge Management
▶ Learning Cycle
▶ Organizational Learning

References
Argyris, C. (1991). Teaching smart people how to learn. Harvard Business Review, 69(3), 99–110.
Argyris, C., & Schön, D. A. (1978). Organizational learning: A theory of action perspective. Reading: Addison-Wesley.
Argyris, C., & Schön, D. A. (1996). Organizational learning II: Theory, method and practice. Reading: Addison-Wesley.

Dreaming
▶ Dreaming: Memory Consolidation and Learning

Dreaming: Memory Consolidation and Learning
QI ZHANG
Sensor System Inc, Madison, WI, USA

Synonyms
Dreaming; Learning; Memory consolidation

Definition
Dreaming refers to the subjective conscious experience we have during sleep. The experience is often vivid, intense, bizarre, and emotional. Memory consolidation is considered a neural process by which hippocampus-dependent memory becomes independent and is consolidated into the general neocortex over a long period of time, from weeks to years. Learning, here, refers to the process of acquiring long-term memory for semantic knowledge and implicit knowledge. Numerous psychological and functional studies have concluded that the quality of learning depends on the quality of dream sleep.
The discovery of hippocampal reactivation supports the theory of memory consolidation, that is, that daily experience is reactivated and consolidated during dream sleep. The reactivated past experience is also believed to be the source from which the neocortex learns knowledge.

Theoretical Background
Mankind is not the sole species that is able to dream. Almost all mammals show rapid eye movement (REM) during sleep, which is considered the signature of dreaming. Why do we dream? Many functions of dreaming have long been proposed to answer this question. These functions were initially based on psychophysiological studies, which traditionally examine dream reports (dream recalls) and the impact of dream deprivation. The proposed functions include learning, memory consolidation, problem solving, creativity, psychological balance and growth, and the disguised attempt at wish fulfillment. For example, unprepared learning, which is difficult to master, is especially dependent on the quality of REM sleep; skill learning that requires significant concentration is followed by increased REM sleep; and semantic knowledge learned in the daytime is recalled better the next morning after a night of good sleep. However, some researchers conclude that dreams may not have any cognitive function because dreams are random thoughts or random impulses. For example, when studying dream reports, it is often found that daily experiences are replayed in the form of segments rather than entire episodes, which corresponds to the bizarreness of dreaming. The randomness of dreaming was first reported by Hobson and McCarley (1977), who concluded that dreams are caused by random signals arising from the pontine brainstem during REM sleep; the forebrain then synthesizes the dream and tries its best to make sense out of the nonsense it is presented with.
Although later studies suggest that the brainstem is not the key source of the signals that cause dreaming and that REM is not the only sleep state that generates dream reports, random activation is widely agreed to be the common characteristic of dreaming, and it has become a required mechanism of any proposed cognitive function of dreaming. Neuropsychological research, equipped with single-neuron recording and neuroimaging technologies, has revealed detailed neuronal and cortical activities that reinforce the view that dreaming may play an important part in memory consolidation. Firing of neural assemblies within the ▶ hippocampus that were activated in daily activities has been recorded and reported by Pavlides and Winson (1989) and many other researchers. The firings were recorded in humans and rats during both REM and non-REM (NREM) sleep. Synchronized activity in the hippocampus and neocortex during sleep has also been reported. In many of these studies, the hippocampal firing is explicitly attributed to the reactivation of episodic memory and considered the source of dreaming. Since dreaming is considered a cognitive process that can result in memory consolidation and learning, understanding these functions amounts to understanding the human memory system, its organization, and its learning processes. Human long-term memory can be fractionated into explicit (declarative) memory and implicit (nondeclarative) memory. Implicit memory encompasses priming, perceptual learning, and procedural skills. Explicit memory can be further divided into ▶ episodic memory and ▶ semantic memory (Tulving 1972). Episodic memory refers to memory for events (past experiences); one must retrieve the time and place of occurrence in order to retrieve the event. Semantic memory refers to relatively permanent and generic knowledge of the world, or factual knowledge.
The division has its neurobiological basis: episodic memory corresponds to the hippocampal complex (including the hippocampus and its surrounding areas), and semantic memory to the general neocortex. The hippocampus is extremely important to one's explicit memory. When a person's hippocampus is entirely damaged, the person will never be able to recall any experience that occurred after the onset of the damage, and it becomes almost impossible for him or her to learn new semantic knowledge. However, some past experiences that occurred before the onset may still be recalled, which is attributed to memory consolidation. There are two views of memory consolidation. In one view (e.g., Squire and Alvarez 1995), episodic memory is initially stored in the neocortex as discrete pieces of information, and the associations (traces) among the pieces are stored in the hippocampus. During consolidation, the discrete pieces of information slowly become associated under the influence of the activated traces. When the association is fully established, episodic memory becomes independent of the hippocampus. In the other view (e.g., McClelland et al. 1995; Zhang 2009), episodic memory is initially stored in the hippocampus. Memory consolidation is a training process in which the hippocampus slowly teaches the hippocampal representations (episodic memory) to the neocortex. In such a slow process of consolidation, episodic memory has to be revived constantly and repeatedly. Squire and Alvarez (1995) suggest that dreaming may be the best answer to the question of why the consolidation process does not regularly intrude into our waking consciousness (Fig. 1).

Dreaming: Memory Consolidation and Learning. Fig. 1 The neocortex and hippocampus. Both the neocortex and hippocampus are split into left and right hemispheres

Human semantic memory is considered meaning-based and is abstracted factual knowledge acquired from past experience.
The acquisition process involves meaning perception, abstraction, generalization, and organization. The mechanisms of these subprocesses are fundamental open questions of cognitive science. Since episodic memory is the prerequisite of semantic learning, semantic knowledge may only be acquired from the reactivations of episodic memory associated with or stored in the hippocampus. Thus, dreaming (as well as memory consolidation) is also considered the process of semantic learning (e.g., Zhang 2009). During semantic learning, reactivated information repeatedly accesses the neocortex and slowly causes long-lasting synaptic modification within associated neural assemblies, which is believed to be necessary and sufficient for the acquisition of long-term memory in the general view of neural learning mechanisms.

Important Scientific Research and Open Questions
As we go to sleep, we slowly sink down into NREM sleep. After an hour or two, the first REM period begins and lasts a few minutes. Then, we sink back into NREM sleep again. This cycle occurs about every 90 min. Toward the end of the night, the REM periods get longer. Almost all mammals show this NREM–REM cyclic alternation in sleep, which suggests not only a shared mechanism across species but also a universal functional significance. Dreams are reported in 70–95% of REM awakenings and in 5–10% of NREM awakenings. Thought-like recalls are reported in 43–50% of NREM awakenings. The proposed functions of dreaming require an activated brain during dream sleep, which functional imaging studies have shown to be the case. In REM sleep, the brain is almost as active as when awake, except for the primary sensory areas and motor output areas. Specifically, activity in the hippocampus, entorhinal cortex, and other parahippocampal regions is increased during REM sleep relative to both waking and NREM sleep.
The following areas are relatively less active in NREM than in REM sleep: the brainstem, midbrain, anterior hypothalamus, hippocampus, caudate, medial prefrontal, caudal orbital, anterior cingulate, parahippocampal, and inferior temporal cortices. The discovery of the reactivation of recent waking patterns of neuronal activity within the hippocampus (i.e., hippocampal firing) during dream sleep plays an extremely important role in reinforcing the proposed cognitive functions of dreaming and in identifying the source of dreaming. In such studies, the subjects (e.g., rats) are implanted with microelectrode arrays to record multiple single-cell activity in the hippocampus. The hippocampal firing patterns are recorded during specific training tasks, during performance of the tasks, and in REM and NREM sleep. The distinctive firing pattern corresponding to the specific task is replayed and recorded in REM and NREM sleep. Memory consolidation and knowledge learning in dreaming are closely tied to the dissociation between episodic memory and semantic memory. The most studied amnesic patient, H.M., has a special position in the understanding of this dissociation. After H.M.'s hippocampus was surgically removed because of intractable epilepsy, he became amnesic, and "every day is a living hell," as he described it. He could not recall any experience that occurred after the onset, but he could recall certain experiences that occurred before the onset (the earlier the experience relative to the onset, the better the recall), and he hardly learned any semantic knowledge thereafter. Several studies have simulated the memory consolidation process using connectionist networks (e.g., McClelland et al. 1995; Squire and Alvarez 1995) and a cognitive network (Zhang 2009). In the simulations, episodic memory is randomly activated, which coincides with the key characteristic of dreaming.
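The replay idea behind such simulations can be sketched in a few lines. This is a deliberately simplified illustration, not a reproduction of the cited models; the store contents, learning rate, and variable names are all invented for this example. Episodic input–target pairs held in a fast hippocampal store are reactivated at random, and each reactivation makes one small adjustment to a slow neocortex-like associator, so the association is acquired gradually over many replays.

```python
# Minimal illustrative sketch of random replay driving slow consolidation.
# All values here are hypothetical, not taken from the cited studies.
import random

hippocampus = [([1.0, 0.0], 1.0), ([0.0, 1.0], -1.0)]  # episodic input -> target pairs
weights = [0.0, 0.0]                                    # slow "neocortical" weights
rate = 0.05                                             # small step = gradual learning

random.seed(0)
for _ in range(2000):                          # many replays across "sleep"
    x, target = random.choice(hippocampus)     # random reactivation, as in dreaming
    out = sum(w * xi for w, xi in zip(weights, x))
    error = target - out
    # one tiny corrective update per reactivation
    weights = [w + rate * error * xi for w, xi in zip(weights, x)]

# After many replays, the slow store alone reproduces the episodic associations.
```

The design point the sketch captures is the one made in the text: each individual reactivation changes the slow store only slightly, so the association emerges only from repeated, randomly ordered replay.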
The only simulation of semantic learning in dreaming is reported by Zhang (2009), in which the source of dream learning is randomly activated segments of past experience in a hippocampus-like storage. Importantly, the learned conceptual knowledge is demonstrated to be flexible, and flexibility is the key criterion of semantic knowledge/memory. Researchers now have less doubt about the memory consolidation function of dreaming after the discovery of hippocampal replay of waking activation patterns. Semantic learning from randomly activated experience has been computationally demonstrated to be possible, but direct neurobiological evidence is yet to be discovered. The capacity of dreaming is a gift from billions of years of evolution, and its impact likely affects every aspect of human cognitive capacity. On the other hand, our knowledge about the human brain, one of the most complicated systems in the universe, is very limited; there is a long way to go before we can entirely understand dreaming, its functions, and the corresponding neuropsychological mechanisms.

Cross-References
▶ Amnesia and Learning
▶ Basal Ganglia Learning
▶ Computational Models of Human Learning
▶ Concept Learning
▶ Human Cognitive Architecture
▶ Linking Fear Learning to Memory Consolidation
▶ Memory Consolidation and Reconsolidation
▶ Neuropsychology of Learning
▶ Procedural Learning
▶ Song Learning and Sleep

References
Hobson, J. A., & McCarley, R. W. (1977). The brain as a dream-state generator: An activation-synthesis hypothesis of the dream process. The American Journal of Psychiatry, 134, 1335–1348.
McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory. Psychological Review, 102, 419–457.
Pavlides, C., & Winson, J. (1989).
Influences of hippocampal place cell firing in the awake state on the activity of these cells during subsequent sleep episodes. The Journal of Neuroscience, 9, 2907–2918.
Squire, L. R., & Alvarez, P. (1995). Retrograde amnesia and memory consolidation: A neurobiological perspective. Current Opinion in Neurobiology, 5, 169–177.
Tulving, E. (1972). Episodic and semantic memory. In E. Tulving & W. Donaldson (Eds.), Organization of memory (pp. 381–403). New York: Academic.
Zhang, Q. (2009). A computational account of dreaming: Learning and memory consolidation. Cognitive Systems Research, 10, 91–101.

Drill and Practice in Learning (and Beyond)
CHAP SAM LIM1, KEOW NGANG TANG1, LIEW KEE KOR2
1 School of Educational Studies, Universiti Sains Malaysia, Minden, Penang, Malaysia
2 Department of Mathematical Sciences, Universiti Teknologi MARA Malaysia, Merbok, Kedah Darulaman, Malaysia

Synonyms
Manipulative practice; Repetitive learning; Rote learning; Routine exercise; Systematic repetition

Definition
The term drill and practice is defined as a method of instruction characterized by the systematic repetition of concepts, examples, and practice problems. Drill and practice is a disciplined and repetitious exercise, used as a means of teaching and perfecting a skill or procedure. As an instructional strategy, it promotes the acquisition of knowledge or skill through systematic training involving multiple repetitions, rehearsal, and practice until the learner becomes proficient. Similar to memorization, drill and practice involves the repetition of specific skills, such as spelling or multiplication. To develop or maintain specific skills, the subskills built through drill and practice should become the building blocks for more meaningful learning.
Theoretical Background
The history of drill and practice dates back to the 1940s, when educational psychologists introduced instructional tools such as programmed instruction, which employed behaviorist learning theory. Edward L. Thorndike (1874–1949), John B. Watson (1878–1958), and B. F. Skinner (1904–1990) were among the distinguished advocates of behaviorist theory. Behaviorism regards learning as measurable, observable changes in a learner's behavior. In order to bring about the behavioral changes, the learner practices repetition of a new behavioral pattern until it becomes automatic and habitual. A popular example is the "Skinner box," which used operant conditioning to train rats to remember the way out of a maze. Operant conditioning rewards an act that approaches a new desired behavior through the use of positive and negative reinforcement. Drill and practice is rooted in the theory of behaviorism. It focuses on the repetition of stimulus–response practice, which strengthens habits and consequently facilitates mastery of content learning. In line with behaviorist ideology, drill and practice operates on the concept of reinforcement. It trains learners repeatedly to master new information or perform a new act through repetitive exercise and rewards them with immediate feedback. Regarding the mechanism of knowledge acquisition, cognitive psychologists believe that the fundamental units of knowledge or skill can often be broken down into smaller units or subskills. Drill and practice emphasizes repetition, remedial action, and feedback to reinforce the mastery of each subskill as well as to automate certain performance. Through drill and practice, learners are trained repeatedly to achieve automated performance in the lower-level subskills before they proceed to a higher level of complex skill.
A good example is the training method employed by the physical fitness trainer or the music teacher. In their training, they use systematic and repetitive practice to train learners from the basic to the advanced level until the learners can effortlessly execute the more complicated movements without much energy and hard thinking. Merrill and Salisbury (1984) refer to the attainment of this level as automaticity, whereby the ability to do things requires less and less attention and happens naturally without occupying the mind alongside other ongoing cognitive processes. In other words, automaticity is a response that has become automatic or habitual to a learner. On the relationship between the brain and learning, cognitive scientists using functional magnetic resonance imaging (fMRI) or MRI scans found that there is an actual shift in brain activation patterns when untrained facts are learned. This observation can be explained by the fact that when learners can automatically retrieve intermediate steps (such as an algorithm stored in working memory), they can probably solve a complex computation more quickly without much cognitive interference (Delazer et al. 2004). In other words, automatic retrieval of facts reduces the shift in brain activation patterns and helps to reduce the load on working memory (i.e., cognitive load). Furthermore, according to neuroscientists, repeated experience is necessary for forming connections, or synapses, between brain cells. Therefore, drill and practice is important because the brain will retain the learning experience longer when these connections are strengthened and reinforced through repetition. For instance, most mathematics teachers would encourage their students to memorize the multiplication table through drill and practice to improve retention. Consequently, students can easily recall the multiplication facts, since recall has become automated.
Drill and practice is a teaching technique conceivably practiced more frequently by teachers than many other pedagogic methods. It is commonly used by school teachers to drill students on past years' examination questions in pursuit of excellent academic examination results. This teaching technique is also effective in helping less able students to score in fact-based examinations. Effective drill and practice training takes the following measures:
● Organize and structure the lessons/activities so as to put emphasis on reinforcement of previously learned concepts.
● Differentiate the teaching materials according to levels of difficulty.
● Provide immediate feedback or answers to each problem solved.
● Set up a management system to keep track of learners' progress.
● Allow learners to master the learning materials at their own pace.
In addition, effective drill and practice training needs to recognize the type of skill to be developed and the appropriate strategies to develop these competencies. The training may focus on a specific subject area (such as reading or mathematics) or a part of a subject area (e.g., spelling or addition). Alternatively, it may expand to improve different skills involving several areas of the curriculum, for example, music, physical exercises, or swimming. On the whole, drill and practice is used by teachers as a reinforcement tool when they perceive that their students need more practice to reinforce basic skills. Drill and practice is commonly conducted by a variety of methods, including paper-and-pencil worksheets, tutorials, and the teacher's questioning technique. It is most commonly employed by mathematics and language teachers.
Important Scientific Research and Open Questions
With the recent advancement of computer technology, paper-based drill and practice has given way to computer-based drill and practice.
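Two of the measures listed above, immediate feedback and progress tracking, can be sketched as a minimal drill session. This is a hypothetical illustration; the questions, function, and variable names are invented for this example and do not describe any particular drill-and-practice software.

```python
# Hypothetical minimal sketch of a drill-and-practice session with
# immediate feedback and a simple progress-tracking record.
drills = [("3 x 4", "12"), ("7 x 8", "56"), ("6 x 9", "54")]
progress = {question: 0 for question, _ in drills}   # correct-answer counts

def run_session(answers):
    """Check each answer and return immediate feedback for every item."""
    feedback = []
    for (question, correct), given in zip(drills, answers):
        if given == correct:
            progress[question] += 1              # track mastery per item
            feedback.append(f"{question}: correct")
        else:
            feedback.append(f"{question}: incorrect, answer is {correct}")
    return feedback

report = run_session(["12", "54", "54"])   # one practice run with one error
```

Repeated sessions would accumulate counts in `progress`, which is the kind of record a management system could use to decide which items need further repetition.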
Computer-based drill and practice is gaining popularity because it offers several advantages over the conventional approach. These include its ability to respond to individual differences and to provide students with immediate corrective and instructional feedback. In addition, given the personalized and interactive nature of most software, this type of practice allows programmed and extended practice. Computer-based self-instruction uses drill and practice to teach and rewards learners for each correct response. It operates on structured reinforcement to increase learners' acquisition of basic skills. Drill and practice software packages normally offer question-and-answer activities supplemented with appropriate feedback. In such packages, learners are allowed to select an appropriate level of difficulty at which the specific content materials start. The software packages may also provide drill through games. The inclusion of a gaming scenario, as well as colorful and animated graphics, is intended to motivate learners to respond to the questions quickly and accurately. Thus, effective drill and practice software usually provides immediate feedback, correction of errors, explanation of how to get the correct answers, and a tracking management system for student progress. Nevertheless, to what extent can drill and practice of basic skills contribute to the achievement of literacy and higher-order thinking skills? Does it apply to all fields, or should academic and physical training be differentiated? Drill and practice has been one of the key features of school mathematics. Li (2006) argued that while mathematics educators in the West regard drill and practice as rote learning and merely imitative behavioral manipulation, China and other East Asian countries consider routine or manipulative practice an important learning style.
In fact, Li (2006) linked drill and practice to the cultural belief that "practice makes perfect." He cited a statement from the Confucian Analects (论语), 学而时习之，不亦悦乎, which can be translated as "it is a pleasure to learn and practice again and again." Hence, many teachers and students in China believe that "through imitation and practice again and again, people will become highly skilled" (Li 2006, p. 130). Furthermore, "practice makes perfect" (熟能生巧) can be interpreted at two levels. At the first level, students are merely doing repetitive manipulation or routine exercises. At the second level, students are given similar problems or exercises in various forms. Hence, the meaning of the word 熟 goes beyond "practice" or "do": it means both familiarity and proficiency. Practice is thus vital to becoming proficient, or perfect, in any skill, whether musical, physical, or cognitive. Yet there remains debate over whether it is worthwhile to spend the time and effort on drill and practice, as opposed to more conceptual, explorative, innovative, and creative activities. Repetitive and extensive practice may promote automaticity, which makes learning effortless and less cognitively demanding. However, will such automaticity reduce or prevent an individual's creativity?
Cross-References
▶ Behaviorism and Behaviorist Learning Theories
▶ Feedback in Instructional Contexts
▶ Repeated Learning and Cultural Evolution
▶ Repetition and Imitation: Opportunities for Learning
▶ Skinner, B. F. (1904–1990)
References
Delazer, M., Domahs, F., Bartha, L., Brenneis, C., Locky, A., & Trieb, T. (2004). The acquisition of arithmetic knowledge – an fMRI study. Cortex, 40, 166–167.
Li, S. (2006). Practice makes perfect: A key belief in China. In F. K. S. Leung, K.-D. Graf, & F. J. Lopez-Real (Eds.), Mathematics education in different cultural traditions – A comparative study of East Asia and the West (The 13th ICMI Study, Vol. 9, pp. 129–138).
Merrill, P. F., & Salisbury, D. (1984). Research on drill and practice strategies. Journal of Computer Based Instruction, 11(1), 19–21.

Drive
▶ Academic Motivation
▶ Motivation and Learning: Modern Theories

Drive-Reduction Theory
The idea that responses are reinforced when they reduce a drive, where drives are aversive states such as hunger or thirst that reflect an unmet need of the organism.

Dropout
▶ Learning Retention
▶ Research of Learning Support and Retention

Drug Conditioning
RICK A. BEVINS1, THOMAS J. GOULD2
1 Department of Psychology, University of Nebraska-Lincoln, Burnett, 19A, Lincoln, NE, USA
2 Temple University, Philadelphia, PA, USA
Synonyms
Conditioned sensitization; Conditioned tolerance; Pavlovian drug conditioning
Definition
Researchers in the field of drug conditioning investigate Pavlovian (classical) conditioning processes involving ingested or administered ligands (drugs). These ligands may be studied as reinforcers or unconditioned stimuli (USs) paired with conditional stimuli (CSs) such as a discrete environmental cue, situation, or context. Alternatively, the stimulus effects of drugs may be studied as interoceptive contexts or cues paired with other USs. These interoceptive drug stimuli may also disambiguate when some other CS will or will not be paired with a US, thus having a modulatory or occasion-setting role.
Theoretical Background
Ingested substances like cigarette smoke, coffee, an evening cocktail, or pain relievers contain molecules that can broadly affect nervous system processes, translating into measurable changes in physiology and behavior. If repeatedly ingested, some of these effects may enter into associations with other stimuli that are present, through Pavlovian conditioning processes. At the most fundamental level, Pavlovian (or classical) conditioning refers to the establishment of a relation between two stimuli – one termed the CS or conditional stimulus and the other termed the US or unconditioned stimulus.
Conditioning is said to occur when the CS comes to control or modify a response that it previously had not. For good conditioning, the CS and US must occur close together in time, with CS onset before US onset; closer spatial proximity between the stimuli will also typically facilitate conditioning. As a familiar example, in the experiments of Ivan Pavlov and his colleagues with dogs, the ringing of a buzzer (the famous "bell") or the click of a metronome was the CS, and food was the US. With repeated buzzer CS–food US pairings, the buzzer came to elicit salivation on its own (termed the conditioned response, or CR). Most often, drug conditioning refers to the drug as the US and the exteroceptive stimuli present at the time as the CSs. Take smoking as an example. From this perspective, the cigarette, lighter, smell and taste of smoke, throat irritation, fellow smokers, and smoking areas (car, living room, pub) are potential CSs that are repeatedly paired with the physiological effects of nicotine – the primary addictive constituent of tobacco. Through these pairings, the smoking-related CSs come to evoke CRs, often referred to as withdrawal, cravings, or urges, that lead to drug seeking and precipitate relapse. As will be detailed in the following paragraphs, these CRs can be an enhancement of approach and other drug-like responses to the CS (conditioned sensitization) or a CR that opposes a drug effect (conditioned tolerance). Researchers have spent much effort to understand the behavioral and physiological processes underlying drug conditioning in which environmental stimuli come to control approach and other drug-like conditioned responses.
The development of animal models to study such effects in more detail reflects the translational link to clinical and human laboratory observations that presenting drug-associated paraphernalia to an addict increases drug urges, cravings, and wanting, as well as producing changes in physiological measures (Little et al. 2005). Two widely used animal models are conditioned place preference and conditioned locomotor sensitization. In the place conditioning procedure, the animal (usually a rat or mouse) receives a drug paired with one environment (the context CS). On a separate day, the animal is exposed to a second, distinct environment without the drug. This procedure is typically repeated until the context CS has been paired several times with the drug US. In a later choice test, the animal is given unrestricted access to both environments. If the drug US has rewarding effects, then more time will be spent in the drug-paired environment. This increase in time spent in the paired environment is thought to reflect an anticipatory approach CR evoked by the environment CS and has been described as a model of drug seeking. In the conditioned locomotor sensitization task, an environment (the CS) in which activity can be measured is paired repeatedly with the drug US (note that there is no second environment as in the place conditioning task). Activity tends to increase across repeated administrations of stimulant drugs such as amphetamine, methamphetamine, cocaine, and nicotine (termed locomotor sensitization). Conditioning to the context CS is assessed on a drug-free test day and/or in a drug-challenge test. Rats that had the environment paired with drug are more active than controls that received equal exposure to drug in the home cage (termed conditioned locomotor sensitization or conditioned hyperactivity).
Research from these tasks, and others, has spawned a class of theories of addiction based on sensitization of behavioral and neural processes that leads to increased motivation for the drug and compulsive use, mediated in part by these drug conditioning processes (e.g., Robinson and Berridge 2008). Another theory of addiction, opponent process theory, may help to explain some drug conditioning effects. In the opponent process model, the effects of a drug change with repeated administration over time because the body produces an adaptive response to counter the effects of the drug and return the body to homeostasis (Koob et al. 1997). This process may in part explain why higher doses are needed to evoke a response similar to the initial effects of the drug, or in other words, why tolerance develops. In addition, in the absence of the drug, the body may be in a state below homeostasis, and this may contribute to withdrawal symptoms and/or drug cravings. Initially, the drug evokes this adaptive response, but the response can become conditioned to external stimuli such that the stimuli can then evoke a conditioned change in homeostasis in anticipation of drug administration. This effect can result in conditioned tolerance (Siegel et al. 2000). With conditioned tolerance, a higher dose of a drug can be tolerated when external stimuli associated with prior administration of the drug are present. If such stimuli are absent, administration of the drug may lead to an overdose. Conditioned tolerance can occur with discrete stimuli such as a syringe and with compound stimuli such as a context. The ability of these CSs to evoke an adaptive response counter to the anticipated effects of a drug may also explain in part why drug-associated stimuli can evoke drug craving and drug-seeking behavior when they are presented in the absence of the drug. Many drugs ingested by people also have perceptible stimulus effects.
Anecdotally, this is reported as feeling jittery after too much of a caffeinated beverage, or drowsy after taking some cold medications. Thus, drug conditioning can also refer to these interoceptive stimuli functioning as a CS. In this role, the CS becomes associated with other USs. This area of drug conditioning has received much less attention than the drug-as-US case. In the smoking example, the CS effects of nicotine may be paired with alcohol, socializing, work breaks, relief from stress, etc. As in the earlier example, the nicotine CS will come to evoke CRs that reflect this conditioning history. If these USs are hedonically positive, as in the examples just given, then this conditioning may increase the desire or motivation for nicotine, hence increasing the tenacity of the smoking habit. Drug stimuli may also serve as modulators or occasion setters. In this case, the drug state signals whether or not the CS will be followed by a US. As an example, the nicotine stimulus may occasion when the outside smoking area at work (CS) is associated with relief from job stress (US). Drug conditioning research on drugs as Pavlovian stimuli (CSs or occasion setters) in humans is limited to a few demonstration studies. Thus, what is known to date has relied on animal models, mostly using rats (Bevins and Murray 2011). This research has shown that drugs from different classes (stimulant, anxiolytic, hallucinogen, etc.) can serve as a CS and/or as an occasion setter. One of the most widely studied drug CSs is nicotine, studied with a discriminated goal-tracking task in rats. In that task, rats are given intermixed nicotine and saline sessions. There is intermittent access to liquid sucrose on sessions where nicotine is injected pre-session; no sucrose is available on saline sessions. Nicotine comes to differentially control an anticipatory approach and food-searching CR near the site where sucrose is delivered (i.e., goal-tracking).
This drug conditioning follows many of the rules established in more widely studied tasks using exteroceptive stimuli such as tones or lights as CSs. For example, the magnitude of the nicotine-evoked CR is affected by the quality and frequency of the US, the salience of the stimulus (i.e., nicotine dose), and the removal of the US (i.e., extinction). The occasion-setting version of this task merely adds presentations of a discrete CS, such as a light or white noise, to each session. If the drug is a positive feature, then on drug sessions the CS will be paired with the US; CS presentations will be non-reinforced on saline sessions. In this case, the discrete CS (e.g., 15 s of light illumination) will evoke a goal-tracking CR only when the drug has been administered. A drug may also serve as a negative feature. Here, the discrete CS is paired with the US only on no-drug (saline) sessions. Rats learn to withhold responding to the light on nicotine sessions. Other drug conditioning research in this area has used the conditioned taste aversion paradigm to study drugs as occasion setters. In this discriminated taste aversion task, a drug such as morphine will disambiguate when a novel taste (CS) will be paired with lithium-chloride-induced illness (US) in thirsty rats. Thus, drinking of the taste CS is withheld if the drug is the positive feature signaling that illness will follow consumption of the tastant. Alternatively, the rat will readily consume the tastant if morphine signals no illness.
Important Scientific Research and Open Questions
The bulk of drug conditioning research has been in the area of drug addiction. Basic research in this area has informed treatment approaches. For instance, cue-exposure therapies reflect the translation of research on extinction learning into clinical attempts to reduce reported cravings or urges evoked by drug-related CSs. Extinction refers to presentation of the CS without the US.
Non-reinforced presentation of the CS results in a decrease in the conditioned responding it evokes. This decrease reflects new learning that competes with or inhibits the earlier learning. Cue-exposure therapy attempts to create this competing learning by presenting the drug-related stimuli (cigarette, ashtray, guided imagery of a smoking place) without exposure to the drug US (nicotine). Such approaches have had limited success. Future research examining ways of increasing the effectiveness of cue-exposure therapy and identifying alternative approaches will be important (cf. Conklin and Tiffany 2002). Drug conditioning research in other important health areas is quite limited. There is some research suggesting that drug conditioning processes are involved in antipsychotic medication effectiveness, insulin treatment for diabetes, and adherence to chemotherapy schedules. In this latter example, it appears that hospital/clinic-related stimuli (the CS) become associated with the illness induced by chemotherapeutic agents (the US). These treatment-site cues (smell of the hospital, sight of the clinic, etc.) come to evoke anticipatory nausea (see the earlier discussion of conditioned taste aversion). This CS-evoked nausea can lead patients to skip critical treatment appointments. From a drug conditioning perspective, antiemetic medications are helpful because they blunt US magnitude (i.e., the degree of nausea), thus decreasing conditioning to the treatment CSs. There is little doubt about the importance of drug conditioning effects. However, it is still unclear how ubiquitous they may be and to what extent they impact a particular individual or health area. Finally, as detailed earlier, there is very little drug conditioning work in either human or nonhuman animals examining the mechanisms or import of drugs' interoceptive stimulus effects serving as CSs or occasion setters.
What work has been done suggests that increased research on and consideration of such effects will enhance our understanding of, and treatment approaches to, many health issues.
Cross-References
▶ Amphetamine, Arousal and Learning
▶ Associative Learning
▶ Conditioning
▶ Context Conditioning
▶ Directed Associations and (Incentive) Learning
▶ Extinction Learning, Reconsolidation and the Internal Reinforcement Hypotheses
▶ Habit Learning in Animals
▶ Pavlov, Ivan P. (1849–1936)
▶ Pavlovian Conditioning
▶ Place Preference Learning
References
Bevins, R. A., & Murray, J. E. (2011). Internal stimuli generated by abused substances: Role of Pavlovian conditioning and its implications for drug addiction. In T. Schachtman & S. Reilly (Eds.), Associative learning and conditioning: Human and animal applications. New York: Oxford University Press.
Conklin, C. A., & Tiffany, S. T. (2002). Applying extinction research and theory to cue-exposure addiction treatments. Addiction, 97, 155–167.
Koob, G. F., Caine, S. B., Parsons, L., Markou, A., & Weiss, F. (1997). Opponent process model and psychostimulant addiction. Pharmacology, Biochemistry and Behavior, 57, 513–521.
Little, H. J., Stephens, D. N., Ripley, T. L., Borlikova, G., Duka, T., Schubert, M., Albrecht, D., Becker, H. C., Lopez, M. F., Weiss, F., Drummond, C., Peoples, M., & Cunningham, C. (2005). Alcohol withdrawal and conditioning. Alcoholism, Clinical and Experimental Research, 29, 453–464.
Robinson, T. E., & Berridge, K. C. (2008). Review. The incentive sensitization theory of addiction: Some current issues. Philosophical Transactions of the Royal Society B: Biological Sciences, 363, 3137–3146.
Siegel, S., Baptista, M. A., Kim, J. A., McDonald, R. V., & Weise-Kelly, L. (2000). Pavlovian psychopharmacology: The associative basis of tolerance. Experimental and Clinical Psychopharmacology, 8, 276–293.

Dual Enrollment
▶ Interactive Skills and Dual Learning Processes

Dual-Process Models of Information Processing
HEATHER M.
CLAYPOOL1, JAMIE O'MALLY2, JAMIE DECOSTER3
1 Department of Psychology, Miami University, Oxford, OH, USA
2 Department of Psychology, University of Alabama, Tuscaloosa, AL, USA
3 Institute for Social Science Research, University of Alabama, Tuscaloosa, AL, USA
Synonyms
Two-process models; Dual-process theories
Definition
Dual-process models of information processing contend that humans use two different processing styles. One is a quick and automatic style that relies on well-learned information and heuristic cues. The other is a qualitatively different style that is slower and more deliberative and relies on rules and symbolic logic. Dual-process models have been developed to explain specific psychological phenomena, such as persuasion, person perception, attribution, and stereotyping. Other, more generalized models have been proposed to simultaneously explain processing across a variety of domains. The labels applied to the processing styles vary from theory to theory. The automatic style has been labeled "heuristic," "peripheral," "experiential," "impulsive," and "associative," whereas the more deliberative style has been labeled "systematic," "central," "rational," "reflective," and "rule-based." In the remainder of this entry, the former type of processing is referred to as "automatic" and the latter as "controlled."
Theoretical Background
Dual-process models of information processing are plentiful in psychology, especially in the areas of social and cognitive psychology. The theoretical underpinnings of these models can be traced back as far as William James and Sigmund Freud, who both postulated two types of reasoning, one associative and another analytical and rational. In the modern era, the building blocks of the various dual-process theories vary depending on the area in question.
In social psychology, dual-process models began in the area of attitude change and persuasion with the publication of two similar theories at approximately the same time: the Elaboration Likelihood Model (Petty and Cacioppo 1981) and the Heuristic-Systematic Model (Chaiken 1980). According to these perspectives, attitudes can change from exposure to a persuasive expert either because someone mindlessly relies on a simple heuristic, like "trust experts," or because someone carefully considers the specific arguments contained in a message. Fairly superficial processing occurs in the first case, whereas deep and careful processing occurs in the second. By the turn of the century, dual-process models had become so popular in social psychology that an entire book was published detailing them (Chaiken and Trope 1999). Though the many dual-process models use different terminology and disagree on some important issues, there is a great deal of conceptual similarity among them (Smith and DeCoster 2000). First, there is broad consensus regarding the characteristics of the two processing modes. Second, these models tend to agree that the automatic mode operates preconsciously, with only the outcome of processing available to conscious awareness, and that the controlled mode operates consciously, such that individuals are aware of the outcome of processing as well as the steps involved. Finally, several of the models suggest that whereas automatic processing can be performed under most circumstances, controlled processing typically occurs only when one has both the motivation and the ability to do so. These notions underscore the functional benefit of having two processing styles. Given the number of cognitive operations humans must regularly perform (such as categorizing stimuli, drawing causal inferences, and making decisions), it would be impossible for anyone to carefully and deliberately execute every action.
However, humans are sometimes motivated to thoroughly and consciously consider particular categorizations, inferences, and decisions, so it is important that people are capable of giving detailed attention to important issues. The existence of two processing styles allows humans to flexibly process automatically or deliberately, as necessary.
Important Scientific Research and Open Questions
Despite the commonalities among dual-process models, there are a number of debates in the literature. One question is whether the two processing modes can operate simultaneously. Some models argue that processing is either automatic or controlled, whereas others argue that both processing modes can occur in parallel. Many models assume that the automatic mode always operates because of its quick and easy nature. In these models, a person might additionally and simultaneously use controlled processing when the task is important enough and the individual has the ability to do so. A second issue is whether the two processes are linked to different cognitive systems. Smith and DeCoster (2000) argue that the two processing modes rely differentially on two memory systems with different properties that operate simultaneously. A "slow-learning" system picks up regularities in the environment over time and thus forms the basis of a knowledge store representing a vast array of experiences. A "fast-learning" system allows for the rapid binding of information into memory from a single occurrence. According to Smith and DeCoster, the automatic mode solely makes use of information from the slow-learning system, whereas the controlled mode makes use of information from both the fast- and slow-learning systems. Other dual-process models make no effort to link their processing modes to different cognitive or neurological systems.
Moreover, there is increasing skepticism regarding the presence of distinct cognitive/neurological systems (e.g., Keren and Schul 2009), which, by extension, renders the linkage of different processing styles to distinct systems dubious. Thus, some researchers believe that there may be two different types of information processing but do not subscribe to the notion that the two processing modes are executed by different cognitive systems. In addition, some researchers question whether there are even two distinct processing modes. These accounts suggest that the numerous dual-process models are unnecessary and that all processing can be explained by greater or lesser engagement of a single process. For example, the "unimodel" (Kruglanski et al. 1999) argues that all types of persuasion are qualitatively identical, involving "if-then" reasoning from "evidence," but that the amount of processing used to develop the evidence may vary from instance to instance. Such single-process accounts tout their ability to explain data in a more parsimonious fashion than their dual-process counterparts.
Cross-References
▶ Automatic Information Processing
▶ Controlled Information Processing
▶ Heuristics and Problem Solving
▶ Human Information Processing
References
Chaiken, S. (1980). Heuristic versus systematic information processing and the use of source versus message cues in persuasion. Journal of Personality and Social Psychology, 39, 752–766.
Chaiken, S., & Trope, Y. (Eds.). (1999). Dual-process theories in social psychology. New York: Guilford.
Keren, G., & Schul, Y. (2009). Two is not always better than one: A critical evaluation of two-system theories. Perspectives on Psychological Science, 4, 533–550.
Kruglanski, A. W., Thompson, E. P., & Spiegel, S. (1999). Separate or equal? Bimodal notions of persuasion and a single-process "unimodel". In S. Chaiken & Y. Trope (Eds.), Dual-process theories in social psychology (pp. 293–322). New York: Guilford.
Petty, R. E., & Cacioppo, J. T. (1981). Attitudes and persuasion: Classic and contemporary approaches. Dubuque: Brown.
Smith, E. R., & DeCoster, J. (2000). Dual-process models in social and cognitive psychology: Conceptual integration and links to underlying memory systems. Personality and Social Psychology Review, 4, 108–131.

Dual-Process Theories
▶ Dual-Process Models of Information Processing

Dual-Task Performance in Motor Learning
SARAH A. FRASER1, KAREN Z. H. LI2
1 Department of Psychology, Centre de recherche de l'institut universitaire de gériatrie de Montréal, Université du Québec à Montréal, Montréal, Québec, Canada
2 Department of Psychology, Center for Research in Human Development, Concordia University, Montréal, Québec, Canada
Synonyms
Divided attention
Definition
Dual-task performance requires an individual to perform two tasks (i.e., Task A and Task B) simultaneously. Typically, this type of performance is contrasted with single-task performance, in which the individual has to perform only one task at a time (Task A or B). Motor learning occurs when an individual demonstrates relatively enduring improvements in the capability to perform a motor task after practice.
Theoretical Background
Motor learning proceeds in stages. Historically, three stages of motor learning were proposed (Fitts and Posner 1967): the cognitive stage, the associative stage, and the autonomous stage. One of the reasons for naming the first stage the cognitive stage is that cognitive processes are highly involved in this stage of learning. In particular, attention to the instructions and to the demands of the motor task to be learned is crucial during this stage of learning.
In contrast, the autonomous (third) stage of learning is considered a relatively automatic stage in which the motor task can be performed with little or no attention. Finally, as would be expected, the attentional requirements of the second, associative stage lie somewhere between those of the first and third stages. A paradigm commonly used to assess and distinguish the learning stages with the highest and lowest attentional requirements (the first and third stages) is the dual-task paradigm. In cognitive dual-task research, experimenters typically ask participants to perform a motor task and an auditory task simultaneously (dual task) or separately (single task). Calculation of dual-task costs (dual- versus single-task performance) provides an indication of the extent to which the two tasks compete for common processes. An early model of attention assumes that there is a single, limited attentional capacity (Kahneman 1973). Therefore, when attention is divided between two tasks in the dual-task paradigm and both tasks draw on this limited capacity, there should be performance decrements in one or both tasks. Subsequent dual-task experiments revealed a mixed pattern of dual-task costs or interference, indicating that the degree of interference can vary as a function of competition for input (sensory), representational, or output (vocal, motoric) processes. Nevertheless, carefully chosen dual-task combinations with minimal levels of interference have been used to determine the degree of effortful cognitive processing associated with a given task or stage of processing. The basic premise of this approach is that tasks that are well practiced or automatized should not be vulnerable to interference from a concurrent task, whereas tasks requiring effortful processing should show reduced performance when paired with a concurrent task.
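A dual-task cost of the kind described above is simply a comparison of single- and dual-task scores; the exact normalization varies across laboratories. The following sketch uses one common proportional formulation (the function name and formula are this example's choices, not a standard prescribed by the entry):

```python
def dual_task_cost(single, dual, higher_is_better=True):
    """Proportional dual-task cost relative to the single-task baseline.

    Positive values indicate a dual-task decrement. For measures where
    higher is better (e.g., accuracy), cost = (single - dual) / single;
    for measures where lower is better (e.g., reaction time), the sign
    is flipped. This is one common formulation among several in use.
    """
    if higher_is_better:
        return (single - dual) / single
    return (dual - single) / single

# Motor accuracy falls from 0.90 (single task) to 0.72 (dual task):
accuracy_cost = dual_task_cost(0.90, 0.72)
# Reaction time rises from 500 ms (single) to 600 ms (dual):
rt_cost = dual_task_cost(500, 600, higher_is_better=False)
```

Both hypothetical examples yield a 20% cost; a cost near zero under dual-task conditions is the signature of the automatized, autonomous stage described above.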
In terms of motor learning, dual-task performance (auditory + motor) is compared to single-task performance (motor only) in order to assess how much performing the auditory task interferes with motor task performance. In the cognitive stage, where attention is important to motor learning, dividing attention between a motor and an auditory task will cause the greatest performance decrements (relative to the other stages). Later, in the autonomous stage of learning, when the motor task has been well practiced and is highly automatized, performance decrements in a dual-task situation are minimized. As such, high and low levels of interference measured with the dual-task paradigm are an index of the different learning stages.
Important Scientific Research and Open Questions
In addition to delineating the different stages of learning, the dual-task paradigm has also been useful in clarifying what processes are involved in motor learning. In particular, many researchers have attempted to demonstrate that implicit and explicit learning may require different attentional processes. Generally, it has been argued that implicit learning occurs without awareness and conscious control, whereas explicit learning is strategic and requires conscious processing. By this definition, explicit learning would require more attentional processes than implicit learning. In order to assess the attentional involvement in these two types of learning, many researchers have used the dual-task paradigm to evaluate whether dual-task interference is less when the motor task is presented implicitly versus explicitly. Typically, implicit and explicit versions of a motor sequence task or serial reaction time task are used. In this type of task, individuals are asked to reproduce a sequence of motor events (e.g., 1-4-2-3-4-1-3-2) presented visually on a piano-like keyboard.
In the implicit version, participants are not told there is a sequence to be learned (they simply tap along with the motor events as they appear), whereas in the explicit version participants are told to try to memorize the motor events and follow the specific sequence. While the level of attentional involvement in explicit and implicit motor learning is still under debate, results do indicate that dual-task interference may be less in the initial stages of learning when the task is presented implicitly rather than explicitly. However, even when the task is presented implicitly, learning is impaired when a second task is presented, suggesting that some attentional processes are also required when learning is implicit. Although motor learning research is based on behavioral outcomes (i.e., improvements in speed and accuracy during the performance of a motor task), recent work involving neuroimaging techniques has helped to extend the behavioral work and clarify the brain areas associated with motor learning. As in the behavioral research, neuroimaging research has used the dual-task paradigm to assess which brain areas are more involved in the cognitive stage versus the autonomous stage. Results from neuroimaging converge with the existing behavioral results, demonstrating a shift from strongly frontally mediated activation during early learning to more parietal and subcortical activations in later learning. Since the prefrontal cortex is strongly associated with attentional processes, these results align well with the proposal that attentional processes are more heavily recruited in the early stages of learning. In addition to the types of learning and the underlying brain areas involved in motor learning, an important question assessed with the dual-task paradigm is: how do different groups learn a motor task under divided-attention conditions?
For example, results comparing healthy controls to individuals with Parkinson’s disease (PD) have revealed that dual-task interference is greater in PD, but that providing cues during learning can significantly diminish the interference and improve motor learning. These behavioral findings, together with neuroimaging findings in specific populations (children, older adults, individuals with brain damage), can help to better inform what processes are involved in motor learning, what processes might be most affected by disease and injury, and what processes are most amenable to rehabilitation strategies. Cross-References ▶ Aging Effects on Motor Learning ▶ Attention and Implicit Learning ▶ Explicit Versus Implicit Learning ▶ Implicit Sequence Learning ▶ Motor Learning References Fitts, P. M., & Posner, M. I. (1967). Human performance. Belmont: Brooks/Cole. Kahneman, D. (1973). Attention and effort. Englewood Cliffs: Prentice-Hall. Duncker, Karl (1903–1940) NORBERT M. SEEL Department of Education, University of Freiburg, Freiburg, Germany Life Dates Karl Duncker was born in Leipzig, Germany, on February 2, 1903. Duncker was one of the most prominent Gestalt psychologists. He was a student and coworker of Wertheimer, Koffka, and Köhler in Berlin. His parents were the communist politicians Hermann and Käte Duncker. As a consequence, Duncker’s thesis for habilitation was rejected in 1934 but was then published in 1935. In the same year, he found political asylum in Cambridge, England, where he worked with Frederic Bartlett. Finally, Duncker accepted an offer from Wolfgang Köhler and emigrated to the United States of America, where he worked at Swarthmore College in Pennsylvania until his death shortly after his 37th birthday. He committed suicide in 1940 after a period of depression (Schnall 1999).
Theoretical Background Duncker stands alongside the famous trio of Gestalt psychologists (Wertheimer, Koffka, and Köhler) and contributed essential and influential research on productive thinking and creative problem solving. Furthermore, he was highly interested in the psychology of ethics and in research on the phenomenology of feelings and sensual perception. Duncker coined the term functional fixedness, which refers to difficulties in visual perception and in problem solving due to the fact that one element of a task or situation has a fixed function which inhibits the restructuring of the task or situation necessary to find the solution to the problem. Contributions to the Field of Learning In his seminal research on problem solving, Duncker worked with relatively simple laboratory tasks that appeared novel to the participants, such as the X-ray problem. This problem confronts subjects with the task of how they would destroy an inoperable stomach tumor with X-rays without destroying the surrounding healthy tissues. The subjects’ verbalizations led Duncker to infer that their problem analysis was based both on conflict analysis (i.e., spatial coincidence of X-rays and healthy tissue) and on the analysis of functional fixedness, which hindered them from finding the optimal solution. In the 1980s, the X-ray problem was often used within the realm of research on analogical problem solving (see, e.g., Gick and Holyoak 1983). Another popular example from Duncker’s research is the candle problem, which is described in nearly every textbook on problem solving. The difficulty of this problem arises from the functional fixedness of the candle box. It is a container in the problem situation but must be used as a shelf in the solution situation. 
Further problems Duncker used to illustrate the necessity of overcoming functional fixedness were a task in which an electromagnet had to be used as part of a pendulum, one in which a tree branch had to be used as a tool, and one in which a brick needed to be used as a paperweight. Duncker’s seminal work on problem solving and functional fixedness fell into oblivion for some decades. Then it exerted great influence on Simon and Newell (Newell 1985; Simon 1999) in their development of the General Problem Solver. Simon (1999) has described in detail how strongly Duncker’s theories and research influenced the emergence of cognitive science. However, Duncker experienced a renaissance not only in the United States but also in Germany, where Dietrich Dörner (1976) – probably the most prominent German researcher in the field of problem solving – referred to Duncker and rescued this great Gestalt psychologist from obscurity. Dynamic Assessment ▶ Dynamic Testing and Assessment Dynamic Decision Making ▶ Complex Problem Solving Dynamic Mapping ▶ Initial State Learning Cross-References ▶ Complex Problem Solving ▶ Gestalt Psychology of Learning ▶ Problem Solving ▶ Problems: Definition, Types, and Evidences References Dörner, D. (1976). Problemlösen als Informationsverarbeitung. Stuttgart: Kohlhammer. Duncker, K. (1926). A qualitative (experimental and theoretical) study of productive thinking (solving of comprehensible problems). Pedagogical Seminary and Journal of Genetic Psychology, 33, 642–708. Duncker, K. (1935). Zur Psychologie des produktiven Denkens. Berlin: Springer. Gick, M., & Holyoak, K. J. (1983). Schema induction and analogical transfer. Cognitive Psychology, 15, 1–38. Newell, A. (1985). Duncker on thinking: An inquiry into process in cognition. In S. Koch & D. E. Leary (Eds.), A century of psychology as science (pp. 392–419). New York: McGraw-Hill. Schnall, S. (1999). Life as the problem: Karl Duncker’s context.
From past to future: The drama of Karl Duncker. Papers on the History of Psychology, 1(2), 13–28. Simon, H. A. (1999). Karl Duncker and cognitive science. From past to future: The drama of Karl Duncker. Papers on the History of Psychology, 1(2), 1–11. Duration Discrimination ▶ Temporal Learning in Humans and Other Animals Dynamic Modeling and Analogies NORBERT M. SEEL Department of Education, University of Freiburg, Freiburg, Germany Synonyms Analogy-based modeling; Dynamical analogies; Simulation model of analogy-making; Simulation of dynamic systems; Systems dynamics Definition Dynamic modeling describes the behavior of a distributed parameter system in terms of how one qualitative state can turn into another. Systems can be deterministic or stochastic (depending on the types of elements that exist in the system), discrete or continuous (depending on the nature of time and how the system state changes in relation to time), and static or dynamic (depending on whether or not the system changes over time at all). This categorization of systems affects the type of modeling: Models, like the systems they represent, can be static or dynamic, discrete or continuous, and deterministic or stochastic. Accordingly, dynamic models contain at least two variables at two different times that are causally related: Y_t = f(Y_{t-1}, ..., Y_{t-k}). That means that at time t the vector Y functionally depends on the states of Y at the earlier times t-1, ..., t-k. The system’s dynamics consists in the fact that states or interventions at earlier times affect the state of Y at time t. Dynamic models have historically been based on analogies, which can be defined as likeness or similarity in some respects between things that are otherwise dissimilar.
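The state equation Y_t = f(Y_{t-1}, ..., Y_{t-k}) can be illustrated with a minimal sketch in Python; the particular update rule f and its lag weights below are hypothetical choices for illustration only, not part of the entry:

```python
# Minimal sketch of a discrete dynamic model Y_t = f(Y_{t-1}, ..., Y_{t-k}).
# The update rule f (a damped weighted sum of the last k states) is a
# hypothetical choice; any function of the lagged states would do.

def simulate(f, history, steps):
    """Iterate Y_t = f(lagged states) from an initial history of length k."""
    y = list(history)
    k = len(history)
    for _ in range(steps):
        y.append(f(y[-k:]))  # new state depends only on the last k states
    return y

# Example f with k = 2: each new state depends on the two previous states.
f = lambda lags: 0.5 * lags[-1] + 0.3 * lags[-2]

trajectory = simulate(f, history=[1.0, 1.0], steps=5)
```

Changing the initial history (an early "intervention") propagates forward through every later state, which is the sense in which states at earlier times affect Y at time t.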
In logic, analogies constitute a form of reasoning in which a similarity between two or more things in certain respects is inferred from a known similarity between them in other respects. For instance, dynamic models of continuous manufacturing systems were often based on analogies with electrical systems, and macroeconomic systems were modeled by analogy with circuit models. This corresponds with the conception of analogical problem solving as a process of transferring knowledge from a well-understood base domain to a new target problem area. Theoretical Background Dynamic modeling presupposes functional intentionality in constructing and using models: First, models may serve envisioning (i.e., making visible the invisible) and the simplifying of an investigation to particular and relevant phenomena in a closed domain. Second, and more relevant for dynamic modeling, is the use of models for simulating transformations of states of a complex system. Such simulation models allow a learner to explore a dynamic system in a controlled way to understand how the system’s components interact and how alternate decisions can affect desired outcomes. Finally, models are often constructed as analogies in order to map a well-known explanation (e.g., Rutherford’s atomic model) onto a phenomenon to be explained (e.g., quantum phenomena). Such models are called analogy models, which are heuristic hypotheses about structural similarities of different domains (Holland et al. 1986). The cognitive psychology literature speaks of analogy models when the conditions in an unknown domain are inferred and mapped in analogy to known conditions in the same or another domain (e.g., viewing mechanical phenomena in an electrodynamic model or memory processes in analogy to computers). Analogy models may thus be understood as heuristic hypotheses of a structural similarity between different domains. From an epistemological point of view, there are two general conceptions concerning the nature of dynamic modeling.
The first conception emphasizes the representational character of modeling – the model represents reality; it is a model of something. The other conception considers the model as a cognitive artifact which is constructed intentionally in order to create subjective plausibility with regard to the original (Wartofsky 1979). In the related literature, this kind of model is called a mental (or internal) model. The idea that learners create, store, and manipulate mental models of dynamic systems with which they interact has been central to the theory and practice of system dynamics since its inception. According to this view, people can understand and manage dynamic systems by constructing a mental model that helps to simulate the likely outcomes of the systems’ behavior. Learning occurs by comparing the expected results of operations on a system with the observed consequences of transformations. In the case of gaps between expectations and observations, the outcomes are used to update or revise the mental model. Dynamic modeling requires the identification of the system’s components and their interactions. Therefore, a general plan for the development of the model’s overall structure (and functions) is necessary. Mental models provide a rationale for the development of such a master plan to operate effectively with the complexity of dynamic systems. Given the central role of mental models in operating successfully with dynamic systems, it is not surprising that one of the primary goals of system dynamics interventions is to change mental models to make them more accurate, more complex, and more dynamic. In the practice of dynamic modeling, various tools and methods can be used to create analogical models. For instance, a mechanical device can be used to represent mathematical calculations, or the flow of water can be used to model economic systems; electronic circuits can be used to represent both physiological and ecological systems.
Since the emergence of ▶ system dynamics, this goal has been accomplished by using particular computer-based tools for model building, such as STELLA, Powersim, and Model-it. These tools allow their users to construct their own dynamic models of complex systems (Clariana and Strobel 2008). Basically, these tools can be variable-based or agent-based. STELLA, Powersim, and Model-it are popular examples of variable-based modeling tools and operate with a particular formalized language for qualitative modeling (see in more detail: Hannon and Ruth 2001). The resulting models are called system dynamics models; their variables have no attributes but change dynamically based on the mathematical models and equations that define the relationships within the system. Model-it, for instance, allows the learner to construct qualitative models of cause-and-effect relationships. Through this technology, the user creates objects with which he or she associates measurable, variable quantities called factors and then defines relationships between those factors to show the effects of one factor upon another. Model-it provides facilities for testing a model and a “Factor Map” for visualizing it as a whole. Students define objects, factors, and the relationships between factors’ qualities. The student is supported in this modeling process by a variety of scaffolds, including (a) features which allow for multiple linked representations, (b) options which hide additional complexity, (c) learner guidance through subtasks, and (d) prompting for explanations for constructed relationships. The other broad class of model-building tools puts agent-based modeling at the center of development and research.
An agent-based model consists of a collection of autonomous agents which interact among themselves according to some simple rules that are modeled from the observation or analysis of real-world entities in order to simulate their more complex characteristics. Agents can be considered computational entities that can conceptually incorporate in a natural way some human mechanisms such as perception, action selection, and autonomy. Clariana and Strobel (2008) have pointed out that in agent-based modeling tools the agents possess different attributes, and their behavior changes in accordance with a predefined set of rules in response to the situational demands of a simulated environment. The virtual or simulated environment with its spatial dimensions provides the space where the agents move, interact, and pursue their goals. Thereby, the agents mimic the behavior of real people in a real environment. Agent-based simulation tools have been widely used in geography (da Silva et al. 2004), social sciences (Tobias and Hofmann 2004), and construction management (Watkins and Mukherjee 2008). Both variable-based and agent-based dynamic modeling often use analogies to understand novel situations in order to “bootstrap” new knowledge based on previously learned knowledge. Indeed, the formation of analogies is an effective way of dealing with complexity (cf. Hannon and Ruth 2001). The essence of analogy formation is to identify the structure of one system and compare it with the structure of another system, whereby the similarities and differences between the two systems are identified. Of course, the similarities between the systems generate a particular set of insights into the systems’ structural commonalities but, at the same time, the dissimilarities provoke a complementary view on both systems and show the bounds of the analogy.
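The agent-based style described above can be sketched minimally in Python; the agents, attributes, and movement rule here are invented for illustration and correspond to no particular tool:

```python
# Hypothetical sketch of an agent-based model: each agent carries its own
# attributes and a simple behavioral rule; the environment is a
# one-dimensional space with wrap-around that the agents move through.

class Agent:
    def __init__(self, position, speed):
        self.position = position  # individual attribute
        self.speed = speed        # individual attribute

    def step(self, world_size):
        # Simple rule: advance by own speed, wrapping around the environment.
        self.position = (self.position + self.speed) % world_size

def run(agents, world_size, steps):
    for _ in range(steps):
        for agent in agents:
            agent.step(world_size)
    return [a.position for a in agents]

# Four agents with differing attributes share one simulated environment.
agents = [Agent(position=i, speed=1 + i % 2) for i in range(4)]
positions = run(agents, world_size=10, steps=3)
```

Real agent-based tools add perception, interaction between agents, and goal pursuit; the point of the sketch is only the division of the model into autonomous agents with individual attributes and rules, as opposed to the global variables of a system dynamics model.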
Inextricably linked with the formation of analogies and analogy models are conclusions by analogy, for which Dörner (1976) has defined the following steps: (1) abstraction of selected attributes of the phenomenon in question (especially as regards content), (2) the search for a model which constitutes a concretization of the abstract phenomenon, (3) transfer of (structural) attributes of the model back to the original phenomenon, and (4) a test of whether the hypothesized attributes are actually present in the original phenomenon. This procedure makes it possible to recognize which abstractions are important for the formation of the analogy – and which substructures of both the original and the model can be disregarded. Important Scientific Research and Open Questions Dynamic modeling provides a new perspective called learning by system modeling and an extension to approaches of simulations: Students build their own models and engage at a much deeper conceptual level of understanding of the content, processes, and problem solving of the domain. Research from the area of mindtools suggests that students’ conceptual understanding and application of knowledge is much deeper and more advanced when they are involved in learning by modeling (Jonassen 1999). In consequence, variable-based dynamic modeling is increasingly considered a way to create innovative learning environments which are consistent with how people learn: Variables can be limited to a manageable level, structure and direction for learning can be provided, real-world problems can be addressed, and students can take control of and responsibility for their own learning progress. Dynamic modeling motivates students by keeping them actively engaged in the learning process through requiring that problem-solving and decision-making skills be used to make the simulation run.
As the simulation runs, it models a dynamic system in which the learner is involved and plays an active role as decision maker and problem solver. Dynamic modeling enables students to engage in systems thinking and problem solving through exploration and analogical reasoning. However, this places high cognitive and metacognitive demands on students and requires the construction of powerful mental models (Doyle and Ford 1998; Isaacson and Fujita 2006). However, both practical experience in the field of system dynamics and controlled laboratory experiments on dynamic decision making have shown that mental models of complex systems are typically subject to a variety of flaws and limitations. For example, mental models often omit feedback loops, time delays, and nonlinear relationships that are important determinants of system behavior. In addition, the limited capacity of working memory makes it impossible for people to mentally simulate the dynamic implications of all but the simplest mental models. According to the system dynamics view, only by adopting the feedback perspective and modeling discipline of system dynamics and taking advantage of the computer’s ability to calculate the dynamic consequences of mental models can these flaws and limitations be overcome. Cross-References ▶ Analogical Models ▶ Analogical Reasoning ▶ Calibration ▶ Complex Problem Solving ▶ Computer Simulation Model ▶ Mental Models ▶ Model-Based Reasoning ▶ Modeling and Simulation ▶ Simulation-Based Learning References Clariana, R. B., & Strobel, J. (2008). Modeling technologies. In J. M. Spector, M. D. Merrill, J. van Merriënboer, & M. P. Driscoll (Eds.), Handbook of research on educational communications and technology (3rd ed., pp. 329–344). New York: Lawrence Erlbaum. da Silva, C. A., de Beauclair Seixas, R., & Monteiro de Farias, O. L. (2004). Geographical information systems and dynamic modeling via agent-based systems. In ACM-GIS’05, 13th ACM international symposium on advances in geographical information systems, 4–5 Nov 2005, Bremen, Germany. Dörner, D. (1976). Problemlösen als Informationsverarbeitung [Problem solving as information processing]. Stuttgart: Kohlhammer. Doyle, J. K., & Ford, D. N. (1998). Mental model concepts for system dynamics research. System Dynamics Review, 14(1), 3–29. Hannon, B., & Ruth, M. (2001). Dynamic modelling (2nd ed.). New York: Springer. Holland, J., Holyoak, K., Nisbett, R., & Thagard, P. (1986). Induction: Processes of inference, learning, and discovery. Cambridge, MA/London: MIT Press. Isaacson, R., & Fujita, F. (2006). Metacognitive knowledge monitoring and self-regulated learning: Academic success and reflections on learning. Journal of the Scholarship of Teaching and Learning, 6(1), 39–55. Jonassen, D. H. (1999). Designing constructivist learning environments. In C. M. Reigeluth (Ed.), Instructional design theories and models: Their current state of the art (2nd ed.). Mahwah: Lawrence Erlbaum. Tobias, R., & Hofmann, C. (2004). Evaluation of free Java-libraries for social-scientific agent-based simulation. Journal of Artificial Societies and Social Simulation, 7(1). http://jasss.soc.surrey.ac.uk/7/1/6.html Wartofsky, M. W. (1979). Models: Representation and the scientific understanding. Dordrecht: Reidel. Watkins, M., & Mukherjee, A. (2008). Using adaptive simulations to develop cognitive situational models of human decision-making. Technology, Instruction, Cognition and Learning, 6(3–4), 177–192. Dynamic Network Analysis (DNA) ▶ Social Networks Analysis and the Learning Sciences Dynamic Network Formation ▶ Networks, Learning Cognition, and Economics Dynamic Selection ▶ Multistrategy Learning Dynamic Testing ▶ Dynamic Testing and Assessment Dynamic Testing and Assessment WILMA C. M. RESING1, JULIAN G. ELLIOTT2, ELENA L.
GRIGORENKO3 1 Department of Psychology, Department of Developmental and Educational Psychology, Leiden University, Leiden, The Netherlands 2 School of Education, Durham University, Durham, UK 3 Department of Psychology, Department of Epidemiology & Public Health, Child Study Center, Yale University, New Haven, USA Synonyms Dynamic assessment; Dynamic testing; Learning potential tests; Learning tests; Testing-the-limits Definition Dynamic testing and dynamic assessment (DT/A) are two overlapping umbrella concepts for forms of testing and assessment in which instruction or intervention are integral aspects of the testing procedure. For some, dynamic assessment is a construct that can be used to describe a general approach. However, in line with more recent thinking (e.g., Sternberg and Grigorenko 2002), we differentiate between dynamic testing, whereby the whole test procedure is transparent, objective, and repeatable, and a wider conception of dynamic assessment in which testing is paired with clinical and individualized intervention. Compared with conventional, static testing and assessment, DT/A provides information yielding greater insight into the nature of the individual’s cognitive functioning. This should result in more valid assessment outcomes that can both aid prediction and point to the most appropriate means of cognitive intervention. Theoretical Background Inspired by Lev Vygotsky in the Soviet Union and Reuven Feuerstein in Israel, DT/A is a conglomerate of very different testing formats and assessment aims (e.g., Grigorenko 2009; Haywood and Lidz 2007; Sternberg and Grigorenko 2002). The nature of the different procedures is closely related to the differing aims of testing. These include: ● Predicting those who are most likely to make educational progress given appropriate forms of intervention. ● Aiming to develop, improve, or modify the individual’s cognitive skills or functions by providing more enduring support or mediation.
● Examining the effects of graduated prompt techniques on changes in the testee’s strategy use. Despite their different aims and orientations, researchers in this field are united in their criticisms of conventional static testing and assessment. It is argued that static measures of cognitive functioning often: ● Reflect knowledge and skills acquired in the past. This may lead to underestimates of an individual’s potential, as not everyone has had equal opportunities to acquire optimal levels of knowledge and skills. ● Do not provide information that can be easily drawn upon when advising on appropriate instructional interventions. ● Overemphasize the products of learning rather than learning processes. Because of this, scores from such measures typically shed little light upon how individuals learn, or fail to learn. ● Are seen as less optimal predictors of future educational performance; dynamic scores considerably add to this prediction. Although Alfred Binet, the father of intelligence testing, was convinced by 1916 that intelligence test scores (including those from his own test) did not give an appropriate picture of the ability to learn, the theoretical underpinnings for DT/A were provided many years later by, among others, Vygotsky and Feuerstein. Using the concept of the zone of proximal development (ZPD), Vygotsky distinguished the level of actual development, at which a person is able to function unassisted, from the level of potential development, which concerns problem solving that benefits from guidance from parents, teachers, or peers. Individuals might show a discrepancy between their actual and potential level of development; such a discrepancy might be evident in multiple domains. The zone of proximal development is the area that defines the difference between an individual’s independent and guided performance on any activity; assistance aims at closing this discrepancy.
The more that individuals can raise their performance, having received appropriate assistance, the wider the ZPD is considered to be. Vygotsky considered the assessment of one’s potential, by means of this zone of proximal development, to be the principal focus of testing and education. According to Feuerstein, conventional test scores can underestimate the cognitive capabilities of those who have had impoverished learning experiences. To assess the individual’s cognitive potential, Feuerstein and his colleagues developed the Learning Potential Assessment Device (LPAD) (1979). The training within this assessment procedure emphasizes individually tailored intervention for a variety of cognitive tasks but also the development of general, metacognitive skills, such as working systematically and controlling one’s own impulses. The LPAD was designed to make lasting changes to the existing cognitive structure of the individual, a point of view that other researchers in DT/A dispute. Depending on the theories behind particular formats of DT/A, the form of feedback is either fixed, the same for all those tested, or individualized, applied in a tailored fashion contingent upon each individual’s ongoing performance in the test situation. With this latter approach, the nature and amount of assistance provided depend upon individual differences manifested within the testing context, including those elements of the test that the examiner decides to prioritize and act upon. Although the large body of work by Feuerstein has had a seminal influence upon many psychologists and educationalists, his approach has failed to gain acceptance by most researchers and has not led to widespread usage by clinicians. Many DT/A researchers consider that there is a lack of conceptual clarity in the model, which makes operationalization of parts of the model difficult and validation problematic (Elliott et al. 2010).
They point to the absence of a standardized approach, the weak test-retest and inter-rater reliabilities, and the poor quality of empirical studies that have examined this approach. In response, Feuerstein contends that an assessment procedure that seeks to conform to all test requirements will lack clinical sensitivity, will fail to bring out the best from the testee, and is unlikely to provide a rich understanding of a person’s cognitive functioning. DT/A procedures that seek a more scientific approach typically examine a person’s potential to profit from training and/or determine on-the-spot learning processes that may change during testing. Here the form of assistance is standardized but can also be adaptive, as the nature of the help provided will reflect individual differences in the testee’s responses. The aims here are not to engineer permanent progression or an enduring change in cognitive abilities but, rather, to aid detection of: ● Progression over time as a consequence of instruction ● Individual differences in progression ● The way in which differences emerge ● Variability or constancy in the use of strategies ● Differences in the ways children verbalize their solving processes ● Changes in the posttest score order of groups of children (as a measure of potential), as a consequence of the assessment period. Most DT/A procedures include a pretest, a training phase, and a posttest. During the training phase, the focus is on the process of solving the task, the feedback or prompting given, and, importantly, the behavior of the child. It is also possible to include the training phase in the pretest, or to replace it with a much longer instructional phase in the classroom, but in this latter case researchers prefer to describe this as cognitive training or a form of response to intervention (RTI). Within a fixed, noncontingent training format, people can be tested either individually or in a group.
Individualized, contingent instruction, however, should always be given individually. While some researchers define the difference between pre- and posttests as a measure of learning potential, it is generally accepted that these “gain scores” pose a number of significant measurement difficulties, in particular poor reliability. For this reason, it is preferable to use posttest scores in isolation or advanced statistical procedures to overcome these problems. Another test format, sometimes called testing-the-limits, provides contingent feedback within the administration of the static test. Here, the testee receives assistance as soon as a significant difficulty is encountered during one single testing period. Again, intervention can be based upon a structured approach, with a predetermined hierarchy of hints, or individualized according to the particular understandings and perceptions of the tester. Typically, there is no baseline measurement with this approach. Other structured approaches provide a series of prompts or hints in response to testee errors. An example of this is the graduated prompts approach, which involves the ongoing provision of the minimal amount of assistance necessary until the child can solve the relevant test items independently. The graduated prompts approach differs somewhat from that typically used by other dynamic test users, as the final unaided performance of the testee is not the principal concern. Rather, the approach seeks to highlight the amount of assistance necessary to achieve prespecified outcomes and the capacity of the testee to transfer learned principles to novel situations. Thus the child’s learning potential is not defined on the basis of their best task performance but, rather, is represented by the inverse of the minimal number of hints necessary to reach a specified amount of learning.
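The graduated prompts logic described above can be sketched as a simple loop; the hint texts, scoring rule, and function names below are hypothetical illustrations, not any published instrument:

```python
# Hypothetical sketch of a graduated-prompts item: hints are ordered from
# most general to most specific, and the indicator for the item is the
# inverse of the number of hints needed before an independent solution.

def administer_item(solves, hints):
    """solves(hint_count) -> bool: does the child solve the item after
    having received hint_count hints? Returns (hints used, score)."""
    for used in range(len(hints) + 1):
        if solves(used):
            # Fewer hints needed -> higher indicator of learning potential.
            return used, len(hints) + 1 - used
    return len(hints), 0  # item not solved even with all hints given

hints = [
    "general prompt: look again at the whole pattern",       # most general
    "focused prompt: compare the first and the last figure", # more specific
    "specific prompt: the rule is rotation by 90 degrees",   # most specific
]

# A child who solves independently only after two hints:
used, score = administer_item(lambda n: n >= 2, hints)  # used == 2, score == 2
```

Because hint sequences are hierarchically ordered and administration is fully standardized, exactly this kind of loop is what lends the approach to computerized testing.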
These DT/A procedures are fully standardized and based on detailed task and process analyses, and the hint (or “prompt”) sequences are hierarchically ordered. However, there is variability to the extent that some approaches are adaptive, that is, the hints provided are differentially contingent upon the individual’s responses. Testers who prefer these approaches will typically prioritize the examination of changes in problem-solving processes and the use of strategy patterns during testing. The complexity of such approaches is such that they lend themselves to computerized forms of testing. Research Outcomes and Future Questions A number of overviews and meta-analyses on the effectiveness of DT/A have been conducted (Grigorenko 2009; Sternberg and Grigorenko 2002). Swanson and Lussier (2001) concluded that DT/A substantially improved testing performance when compared with static test conditions. However, many DT/A studies (approximately 60%) had to be excluded from their analyses because of poor methodological and/or psychometric quality. As far as the predictive validity of DT/A was concerned, results were mixed and often dependent on the format of dynamic test used: the more standardized the testing procedure, the better the predictive validity (e.g., Sternberg and Grigorenko 2002). In relation to the use of dynamic measures for educational and clinical practice, researchers continue to search for approaches that can yield valuable data to inform intervention. Future research could profitably examine why a potentially very valuable form of testing continues to be used infrequently. Elliott (2003) concluded that many DT/A researchers have focused too exclusively on the use of the approach for purposes of classification and prediction. Diagnosing the nature of the child’s underlying difficulties and using this to advise upon intervention should, he argued, be the principal concern of those working in education settings.
Research might also further examine the relatedness of the concept of DT/A to that of RTI (e.g., Grigorenko 2009). RTI originated in attempts to find the best way to educate children with learning difficulties by intervening as early as possible, examining their responses, and following up with evidence-based (group and individualized) forms of instruction. RTI differs from DT/A in its aim: while DT/A focuses upon a child's potential for learning, RTI focuses on prevention and identification: the sooner evidence-based intervention strategies are implemented, the better. Although RTI and DT/A originated from different backgrounds, a future research question regarding the interaction or integration of the two points of view seems justified, because for both the underlying aim is to find ways to realize children's potential. Both involve examining children's solution processes and adapting instructional or interventional procedures to the signs of learning or change a child shows during testing. It is hoped that dynamic measures can provide educators with insights into the way in which individual differences in progression emerge; into the variability or constancy in children's use of, and change in, cognitive and metacognitive strategies; into differences in the ways they verbalize their solution processes; into their reactions to failure and success; and into how they react to the provision of assistance. An important question for future research is how such information can be used to guide the work of teachers and other professionals. To describe or prescribe the strong and weak aspects of a child's functioning requires not only high-quality, standardized tests with sound norms but also an understanding of, and sensitivity to, the unique problem-solving behaviors of children during testing. Particularly important is the design and deployment of increasingly sophisticated adaptive, scaffolded feedback procedures.
High-quality tests can be seen as instruments which enable us to measure, objectively, reliably, and transparently, an individual's behavior. Findings from these should lead to descriptions of individual functioning and associated recommendations for intervention that can be understood by all who are seeking to help the testee. There is still a long way to go, and achieving successful approaches that can be employed across multiple learning contexts will require ongoing collaboration between researchers, clinicians, and practitioners.

Cross-References
▶ Assessment of Learning
▶ Diagnosis of Learning
▶ Feedback and Learning
▶ Intelligence and Learning

References
Elliott, J. G. (2003). Dynamic assessment in educational settings: Realising potential. Educational Review, 55(1), 15–32.
Elliott, J. G., Grigorenko, E. L., & Resing, W. C. M. (2010). Dynamic assessment. In B. McGaw, P. Peterson, & E. Baker (Eds.), The international encyclopedia of education (3rd ed., Vol. 3, pp. 220–225). Amsterdam: Elsevier.
Grigorenko, E. L. (2009). Dynamic assessment and response to intervention: Two sides of one coin. Journal of Learning Disabilities, 42, 111–132.
Haywood, H. C., & Lidz, C. S. (2007). Dynamic assessment in practice: Clinical and educational applications. Cambridge: Cambridge University Press.
Sternberg, R. J., & Grigorenko, E. L. (2002). Dynamic testing: The nature and measurement of learning potential. New York: Cambridge University Press.
Swanson, H. L., & Lussier, C. M. (2001). A selective synthesis of the experimental literature on dynamic assessment. Review of Educational Research, 71, 321–363.

Dynamical Analogies
▶ Dynamic Modeling and Analogies

Dynamical System
▶ Approximate Learning of Dynamic Models/Systems

Dynamically Capable Organization
▶ Learning Organization
Dynamic Visualization
▶ Animation and Learning

Dynamics of Exploration and Exploitation
▶ Learning Adjustment Speeds and the Cycle of Discovery

Dynamics of Memory: Context-Dependent Updating
ALMUT HUPBACH
Department of Psychology, Lehigh University, Bethlehem, PA, USA

Synonyms
Memory modification; Memory reconsolidation

Definition
Memory updating is a phenomenon whereby, under certain conditions, new information can be incorporated into preexisting memories. Various aspects of the phenomenon have been described in cognitive psychology, but no unifying theoretical account exists that explains memory updating and its triggers. In neuroscience, it has been shown that the modification of a specific memory is dependent upon its reactivation. It is assumed that through reactivation, a particular memory is transferred from a passive and stable state to an active but fragile state, at which time it can be modified. In order for the memory to be maintained, it needs to undergo a restabilization (reconsolidation) process (cf. Hardt and Nader 2009). Spatial context, i.e., the environmental surroundings in which learning takes place, has been identified as a crucial factor moderating memory updating effects.

Theoretical Background
Memory is not a static repository of learned facts and experienced events. Rather, memory is dynamic and reconstructive in nature. While relatively stable and fixed core knowledge seems essential for successful everyday functioning and for high-level concepts such as self-identity, the ability to forget outdated information and to update memories in the light of new relevant information is equally important.
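The reactivation–restabilization cycle described above can be caricatured as a toy state model. The class and method names below are hypothetical, purely for illustration: a stored memory is stable and closed to change; a reminder shifts it into a labile state in which new content can be merged; reconsolidation restabilizes it.

```python
# Toy state model of reconsolidation-dependent updating (illustrative only):
# a stored memory is stable/inactive; a reminder makes it active and labile;
# only a labile memory can incorporate new content; restabilization
# (reconsolidation) returns it to the stable state.

class EpisodicMemory:
    def __init__(self, content):
        self.content = set(content)
        self.labile = False  # stable, inactive by default

    def reactivate(self):          # e.g., re-encountering the spatial context
        self.labile = True

    def update(self, new_content):
        if not self.labile:
            raise RuntimeError("stable memory: updating requires reactivation")
        self.content |= set(new_content)

    def reconsolidate(self):       # restabilization after the labile window
        self.labile = False

m = EpisodicMemory({"objects of Set 1"})
m.reactivate()                  # a reminder reopens the memory
m.update({"objects of Set 2"})  # new information is incorporated
m.reconsolidate()
print(sorted(m.content))        # both sets now belong to the same episode
```

Calling `update()` on a stable memory raises an error, mirroring the claim that modification of a specific memory depends on its prior reactivation.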
The study of memory modification and updating has a long history in cognitive psychology, and includes such phenomena as the influence of story schemes on the recall of folk tales, false memory creation, the susceptibility of memories to misinformation, and the misremembering of one's own previous answers when provided with the correct solutions (hindsight bias). The ability of memory to change seems not confined to a short time period after initial learning (as traditionally assumed by the consolidation account), but can be evident long after the memory was acquired. Findings in neuroscience demonstrate that, in order to change old memories, they must first be reactivated, i.e., brought from a passive, inactive state to an active state. The actual triggers responsible for reactivation and subsequent updating have rarely been studied in humans. In contrast, there has been great interest in the processes of memory reactivation and the subsequent need for restabilization (reconsolidation) in the animal neuroscience literature. Recently, researchers have attempted to bridge the gap between cognitive psychology and neuroscience, highlighting parallel findings in both fields. Hardt et al. (2010) propose that reconsolidation processes could underlie many of the memory modification effects reported in cognitive psychology. Memory, however, is not a unitary system. According to one view, it can be divided into two broad categories: explicit and implicit memories. Explicit memories are consciously accessible, whereas implicit memories can be expressed without conscious awareness. Explicit memories include semantic and episodic memories. Semantic memories contain learned facts and world knowledge for which one commonly cannot recall the specific circumstances accompanying their acquisition. In contrast, episodic memory concerns the ability to remember events that can be traced back in time and space. Thus, episodic memories have a spatio-temporal signature.
While the molecular mechanisms accompanying updating are most likely similar across the different memory subsystems, the circumstances that trigger memory reactivation are probably quite different. For instance, in fear conditioning in rats (a form of implicit memory), a tone that was concurrently presented with a foot shock during training later serves as a reminder, reactivating the fear memory. The tone is intrinsically linked to the particular fear-conditioning paradigm; for other forms of tasks and memory subsystems, other reminders will be effective. The present chapter is concerned with episodic memories, i.e., with an instance of explicit memory. As stated above, episodic memories include information about the place where the episode occurred. Some argue that spatial context is a superordinate memory cue: it provides a stable scaffold that allows for the integration of the various elements of an experience into a coherent event. That role makes spatial context an excellent candidate for an effective reminder. Returning to a place will most likely reactivate the specific experience associated with that place. However, analogous to the fan effect or cue overload principle, context should lose its predictive and reactivating value when it is associated with many different episodes. Thus, a highly familiar context might not trigger the reactivation and updating of memories.

Important Scientific Research and Open Questions
Although there is extensive literature on the context dependency of episodic memories, i.e., the effect that memory performance benefits from contextual stability in comparison to context change, until recently it was unknown whether spatial context in itself would be sufficient to return memories associated with that context to an active state, allowing for updating. Hupbach et al.
(2007) developed a paradigm to study memory updating in humans that is based on the animal reconsolidation work. In their paradigm, participants learn a set of objects. Forty-eight hours later, reminders are provided to some participants, but not to others, and a second set of unrelated objects is studied. Again 48 hours later, memory for Set 1 is tested. The results show that although a reminder does not affect the number of Set 1 objects correctly recalled, participants who receive a reminder incorrectly intermix objects from the second set when recalling Set 1, demonstrating that a reminder can reopen a memory and allow new information to be incorporated. It is important to note that the updating effect is unidirectional, i.e., it affects only Set 1. When asked to recall Set 2 in Session 3, intrusions from Set 1 into Set 2 are rare. Extending previous reconsolidation findings in the animal literature, the study shows that updating can be a constructive process, one that supports the incorporation of new information into old memories. But what exactly triggers memory updating? The reminder used by Hupbach et al. (2007) was multifaceted, consisting of the following three components:
1. Spatial context. Participants in the reminder group learned the second list in Session 2 in the same room in which they had learned Set 1 in Session 1. Thus, the spatial context could have served as a reminder reactivating Set 1. Participants in the no-reminder group learned List 2 in a different room.
2. Experimenter. For participants in the reminder group, the experimenter was the same in Sessions 1 and 2. Therefore, the experimenter could have served as a reminder. For participants in the no-reminder group, a different experimenter administered the procedure on Day 2.
3. Reminder question. Participants in the reminder group were asked to describe the experimental procedure of learning Set 1 right before learning the second list.
For participants in the no-reminder group, the experimenter did not ask what had happened in Session 1. In order to disentangle these three components, Hupbach et al. (2008) manipulated them independently: one group studied Sets 1 and 2 in the same spatial context, another group worked with the same experimenter in Sessions 1 and 2, and a third group was asked a reminder question in Session 2 before learning Set 2. Interestingly, only the spatial context triggered updating: when Set 1 and Set 2 were learned in the same room, Set 2 objects intruded into Set 1. The other two groups showed very few intrusions (see Fig. 1). This shows that memories associated with a spatial context are automatically reactivated when participants return to this context. Memories can then be updated by incorporating new information. When brought to a novel context, new learning is attached to a different scaffold, i.e., an entirely different episode is created.

Dynamics of Memory: Context-Dependent Updating. Fig. 1 Mean percentage of objects recalled in the different reminder groups (error bars in the original represent standard errors of the means). Subjects were asked to recall objects from Set 1; objects falsely recalled from Set 2 are labeled as intrusions. The plotted values:

Reminder group | Recall (Set 1) | Intrusions (Set 2)
Context        | 39.6           | 20.6
Experimenter   | 34.6           | 4.5
Question       | 31.8           | 6.0

This research raises several important questions. For example, what aspect of the spatial context triggers memory reactivation and permits subsequent updating? It seems to be the actual presence in the spatial context, since neither briefly revisiting the spatial context, nor recalling it in a new environment, is sufficient for inducing this form of memory malleability.
Future studies will have to reveal whether the context effect is tied to the repetition of the perceiver's perspective on the context (egocentric coding, in which locations are represented with respect to the particular viewpoint of a perceiver), or whether the perceiver-independent overall layout of the context is sufficient (allocentric coding, in which locations are represented in reference to other locations and are thus independent of the perceiver). Given the importance of the hippocampus for both the allocentric coding system and episodic memories, one can hypothesize that allocentric coding carries the effect. Are there boundaries on the ability of spatial context to act as an agent for episodic memory change? Familiarity of the spatial context constitutes one such boundary. In both studies mentioned above, participants were tested in a room they had never visited before; hence the context was unfamiliar. Familiar contexts are associated with a variety of different experiences. Given the risk of overall modification, it would be counterproductive to reactivate all experiences associated with a particular context each time it is revisited. Hupbach et al. (in press) explored the role of context familiarity for memory updating in a sample of 5-year-old children. Using the paradigm outlined above, they tested children either in an unfamiliar setting, or they visited the children at home in their familiar environment. When the context was unfamiliar, it triggered memory updating, thus replicating the findings with adults. Critically, when tested at home, the spatial context did not cause memory updating, supporting the hypothesis that a familiar context does not serve as a reminder for a specific episode. Interestingly, the experimenter and the reminder question – the two reminder components that did not contribute to the reminder effect in an unfamiliar context – became effective in a familiar context.
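The cue-overload logic invoked above (a context loses its reminding value as more episodes become attached to it) can be made concrete with a toy diagnosticity index; all numbers and names below are illustrative assumptions, not data from the cited studies:

```python
# Toy illustration of the cue-overload principle: a retrieval cue's
# diagnostic value for any one episode falls as more episodes become
# associated with that cue. All numbers here are purely illustrative.

def cue_diagnosticity(n_associated_episodes):
    """Probability-style index: 1/n if the cue points equally at each
    of its n associated episodes."""
    return 1.0 / n_associated_episodes

novel_lab_room = cue_diagnosticity(1)     # one distinct episode
familiar_kitchen = cue_diagnosticity(50)  # many overlapping episodes
print(novel_lab_room, familiar_kitchen)   # prints 1.0 0.02
```

On this caricature, a never-visited lab room points unambiguously at the single episode it hosted, while a familiar home context is too overloaded to single out any one episode, consistent with the familiarity boundary reported above.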
One can conclude that memory updating following reactivation is a relatively ubiquitous phenomenon, but that reactivation is initiated by different cues depending upon the specific situation. While an unfamiliar spatial context appears to "overshadow" other cues, those cues can initiate memory updating in familiar spatial contexts. Determining which reminders are effective in which situations will not only be of theoretical importance but will also have important practical implications. In some situations, such as eyewitness testimony, the goal is to prevent a modification of the memory for the witnessed event. In educational contexts, however, memory updating is desirable, and defining the optimal conditions under which new information is incorporated into previous knowledge could be of great value to educators.

Cross-References
▶ Adaptive Memory and Learning
▶ Memory Codes (and Neural Plasticity in Learning)
▶ Memory Consolidation and Reconsolidation
▶ Memory Persistence

References
Hardt, O., & Nader, K. (2009). A single standard for memory: The case for reconsolidation. Nature Reviews Neuroscience, 10, 224–234.
Hardt, O., Einarsson, E., & Nader, K. (2010). A bridge over troubled water: Reconsolidation as a link between cognitive and neuroscientific memory research traditions. Annual Review of Psychology, 61, 141–167.
Hupbach, A., Gomez, R., Hardt, O., & Nadel, L. (2007). Reconsolidation of episodic memories: A subtle reminder triggers integration of new information. Learning & Memory, 14, 47–53.
Hupbach, A., Hardt, O., Gomez, R., & Nadel, L. (2008). The dynamics of memory: Context-dependent updating. Learning & Memory, 15, 574–579.
Hupbach, A., Gomez, R., & Nadel, L. (in press). Episodic memory updating: The role of context familiarity. Psychonomic Bulletin & Review.
Dyscalculia
▶ Mathematics Learning Disability
▶ Socioemotional and Academic Adjustment Among Children with Learning Disorders

Dyscalculia in Young Children: Cognitive and Neurological Bases
ROBERT A. REEVE, JUDI HUMBERSTONE
Developmental Psychology, Psychological Sciences, University of Melbourne, Melbourne, VIC, Australia

Synonyms
Arithmetic learning difficulties; Developmental dyscalculia; Math learning disabilities

Definition
Dyscalculia (usually referred to as Developmental Dyscalculia – DD) is a specific learning deficit associated with difficulties understanding numerical and arithmetic concepts. DSM-IV suggests prevalence rates of 2% for DD; however, more recent estimates suggest 6.5% or above (Butterworth 2010). Children with DD have difficulty acquiring number concepts, exhibit confusion over math symbols, lack an intuitive grasp of numbers, and have problems learning and remembering number facts. DD should be distinguished from acalculia, which is acquired later in life, often as a result of neurological insult; however, DD is also thought to reflect neurological dysfunction. It may or may not be comorbid with other specific or general intellectual difficulties (e.g., dyslexia, IQ). Rubinsten and Henik (2008) suggest at least two DD subtypes can be defined – pure DD and a comorbid DD/dyslexia form – both of which may have different etiologies.

With some exceptions, diagnosis of DD, and ipso facto its definition, depends on computation test performance, which means that a formal diagnosis is delayed until after the beginning of formal education. Moreover, the fact that a diagnosis is frequently based on an arbitrary cut-point on standardized test performance (e.g., below the tenth percentile) is of definitional concern. Little is known of DD's origins or its manifestation in the infancy or preschool periods. There is much speculation about whether pre-symbolic abilities are early indices of DD. Some researchers claim that DD's origins lie in core number deficits (i.e., deficits in non-symbolic approximate magnitude and/or small quantity representations). These abilities are of interest because they do not depend on formal education and can be assessed relatively early in life. Core number abilities are thought to be innate and scaffold other aspects of numerical cognition. The non-symbolic magnitude representation system is often characterized in terms of a mental or a spatial number line (MNL). An intact MNL representation is hypothesized to support the development of cardinal and ordinal number concepts, and deficiencies in it will likely affect the development of the latter concepts (Butterworth 2005). A defensible definition of DD awaits a better understanding of the long-term significance of individual differences in core number competences.

Theoretical Background
The term "developmental dyscalculia (DD)" was first invoked by Kosc in the 1970s to characterize a range of children's arithmetic difficulties, but not their underlying cause. In the 1980s and 1990s, DD was conceptualized in terms of limitations in general cognitive functions (working memory, semantic associations, etc.); however, since the mid-1990s DD researchers have tended to appeal to domain-specific, neurological explanations for differences in numerical cognition (Nieder and Dehaene 2009). Some researchers question domain-specific claims, and continue to argue instead that DD reflects general processing deficits. Nevertheless, linking neurological areas activated by behavioral measures has resulted in theoretical advances. The possibility that arithmetic difficulties have an organic basis was first suggested by Gerstmann in the 1930s, who claimed that acalculia, finger agnosia, left–right disorientation, and agraphia reflect a common neurological insult (brain lesions in the angular and supramarginal gyri near the temporal and parietal lobe junction). Although the status of Developmental Gerstmann's syndrome is controversial, the same difficulties may be associated with DD.

Functional neuroimaging confirms that specific brain areas are activated for processing numerosities and continuous quantity (Butterworth 2010). These areas are neuroanatomically distinct from regions serving general executive functions (Nieder and Dehaene 2009). The horizontal intra-parietal sulcus (HIPS) is activated by non-verbal quantity representations, analogous to a spatial map or MNL (Wilson and Dehaene 2007). The bilateral IPS is involved in symbolic and non-symbolic magnitude comparison, and areas of the rTPJ are activated in small, precise number enumeration (subitizing). The rIPS is activated by simple calculation tasks; however, a much larger neural network is activated for more complex calculation, involving the frontal lobe, especially on the left, and the left angular gyrus. On the basis of twin studies, DD appears heritable. Analyses of atypical genetic groups suggest a possible locus on a part of the X chromosome, though this does not mean that all cases of dyscalculia are necessarily inherited or associated with the X chromosome.

Important Scientific Research and Open Questions

Research Related to Behavioral Outcomes
Behavioral indices of DD in young children include poor (1) number sense, (2) visuospatial abilities, (3) arithmetic fact retrieval, and (4) computation ability. Young children with a poor number sense tend to be unable to apprehend small numbers of dots (n < 3) without counting (i.e., are unable to subitize), and their non-symbolic and symbolic magnitude comparison RT signatures often differ from those of non-DD children. Both of these abilities are thought to be associated with the acquisition of counting and computation skills. Landerl et al.
(2004), for example, found that compared to non-DD children, those with DD (1) exhibited atypical dot enumeration and number magnitude comparison abilities, (2) had similar working memory, language, and IQ, and (3) were substantially less accurate on simple addition and subtraction tests. Whether non-symbolic magnitude representations scaffold symbolic magnitude representations is a matter of theoretical importance. Rousselle and Noël (2007) compared the symbolic and non-symbolic magnitude judgment abilities of DD children with and without comorbid reading disabilities, and non-DD children. The DD children performed more poorly on the symbolic magnitude judgment task, but not the non-symbolic magnitude task. However, Price et al. (2007) found that DD children have an impaired non-symbolic magnitude system. DD and non-DD children judged which of two simultaneously presented sets of squares was larger. The comparison sets were either similar (one to three squares different) or dissimilar (five to eight squares different). Compared to non-DD children, those with DD were less accurate and had slower RTs and steeper distance effects. Significantly, Price et al. found an association between non-symbolic magnitude judgment and neuroanatomical activation (fMRI). They also found that non-symbolic magnitude abilities predicted arithmetic abilities (see De Smedt et al. 2009 for similar findings). These findings suggest that, compared to non-DD children, those with DD exhibit different non-symbolic magnitude judgment RT signatures and neurological activation patterns, and poorer arithmetic abilities. Relatively little is known about the emergence or development of non-symbolic magnitude representations. We know that magnitude judgment abilities are evident early in life and become more refined in infancy. Izard et al.
(2009), for example, showed that 2-day-olds respond systematically to quantity differences across different modalities and formats (sequential vs. simultaneous). And magnitude judgment ability becomes more refined in the first year of life (Mussolin et al. 2010). However, the long-term significance of differences in early non-symbolic magnitude representation ability is unknown. Insofar as the symbolic number system (Arabic symbols) is supported cognitively by the non-symbolic magnitude system, deficits in the latter are likely to produce deficits in the former and, ipso facto, in the development of numerical cognition. Magnitude representation ability may be manifest in different ways. The observation that similar brain areas are activated in calculation and manual hand tasks (Butterworth 2010; Gracia-Bafalluy and Noël 2008), and that finger agnosia is associated with DD, has led to the claim that finger representations may serve as a link between non-symbolic and symbolic magnitude representation (Fayol and Seron 2005). Interest in core number abilities has tended to overshadow other equally important work on the development of children's numerical cognition. Contemporary interest in the origins and development of number owes much to Gelman and Gallistel's (1978) work. They suggested that a preverbal counting mechanism guides the acquisition of verbal counting and that learning to count involves, in part, mapping preverbal numerical magnitudes onto verbal and written number symbols, and mapping these symbols back onto preverbal magnitudes. The relevant capacities for arithmetic must be able to represent the numerosities of sets independently of the properties of the objects in the set. Capacities for number must also be able to establish the numerical equivalence of two sets through one-to-one correspondence, and to distinguish transformations that do and do not affect numerosity.
These requirements constitute benchmarks against which to evaluate theoretical accounts of the foundational capacities supporting the development of arithmetic (Butterworth 2010). Recently, Hannula and Lehtinen (2005) have found individual differences in preschoolers' tendency to spontaneously focus on number events (SFON), and that these differences predicted counting development. What underlies this tendency is currently unknown; however, the possibility that differences in the propensity to orient to number events might be an index of DD is of practical and theoretical significance.

Research Investigating Brain Dysfunction
Price et al.'s (2007) findings suggest that neural differences exist in DD children. Reduced gray matter has been observed in the right and left IPS in DD children (Butterworth 2010). Activation differences in non-symbolic number comparison in children have also been observed in the right IPS, and symbolic abnormalities in the left IPS. The reason for these apparently conflicting findings is not clear. One possible reason is that the organization of numerical activity changes with age, shifting from right dominance to left dominance as representations of numerosity become associated with language. Further, it is likely that residual specialization in the two parietal lobes – with the right specializing in subitizing and estimation, and the left in symbolic processing and calculation – undergoes development (Butterworth 2010). Longitudinal studies that combine neuroimaging with careful tests of basic numerical capacities may reveal different developmental trajectories depending on the locus of the neural abnormality (Ansari 2010).

Open Questions/Issues
Research into the origins and development of DD is in its infancy, and many issues have yet to be resolved. Here we list four interrelated issues. First, what is the relationship between differences in non-symbolic magnitude representation and the acquisition of symbolic magnitude representation?
Insofar as precise symbolic number and computation abilities are scaffolded by a preexisting, approximate, non-symbolic magnitude system, it is important to understand how difficulties in the latter might produce difficulties in the former. Second, what is the nature of the changing developmental relationship between neurological mechanisms and numerical cognition? Third, do different developmental pathways underlie the acquisition of mathematical competence? Fourth, what is the relationship between a diagnosis of DD and remedial or intervention practices? Enormous progress has been made in identifying early indices of DD; however, we know next to nothing about effective, theoretically defensible intervention practices.

Cross-References
▶ Development and Learning
▶ Developmental Cognitive Neuroscience and Learning
▶ Individual Differences in Learning
▶ Mathematical Learning Disability

References
Ansari, D. (2010). Neurocognitive approaches to developmental disorders of numerical and mathematical cognition: The perils of neglecting the role of development. Learning and Individual Differences, 20, 123–129.
Butterworth, B. (2005). Developmental dyscalculia. In J. I. D. Campbell (Ed.), Handbook of mathematical cognition (pp. 455–467). New York: Psychology Press.
Butterworth, B. (2010). Foundational numerical capacities and the origins of dyscalculia. Trends in Cognitive Sciences, 14, 534–541.
De Smedt, B., Verschaffel, L., & Ghesquière, P. (2009). The predictive value of numerical magnitude comparison for individual differences in mathematics achievement. Journal of Experimental Child Psychology, 103, 469–479.
Fayol, M., & Seron, X. (2005). About numerical representations: Insights from neuropsychological, experimental, and developmental studies. In J. I. D. Campbell (Ed.), Handbook of mathematical cognition. New York: Psychology Press.
Gelman, R., & Gallistel, C. R. (1978). The child's understanding of number. Cambridge: Harvard University Press.
Gracia-Bafalluy, M., & Noël, M.-P. (2008). Does finger training increase young children's numerical performance? Cortex, 44, 368–375.
Hannula, M. M., & Lehtinen, E. (2005). Spontaneous focusing on numerosity and mathematical skills in young children. Learning and Instruction, 15, 237–256.
Izard, V., Sann, C., Spelke, E. S., & Streri, A. (2009). Newborn infants perceive abstract numbers. PNAS, 106, 10382–10385.
Landerl, K., Bevan, A., & Butterworth, B. (2004). Developmental dyscalculia and basic numerical capacities: A study of 8–9-year-old students. Cognition, 93, 99–125.
Mussolin, C., Mejias, S., & Noël, M.-P. (2010). Symbolic and nonsymbolic number comparison in children with and without dyscalculia. Cognition, 115, 10–25.
Nieder, A., & Dehaene, S. (2009). Representation of number in the brain. Annual Review of Neuroscience, 32, 185–208.
Price, G. R., Holloway, I., Räsänen, P., Vesterinen, M., & Ansari, D. (2007). Impaired parietal magnitude processing in developmental dyscalculia. Current Biology, 17, R1042–R1043.
Rousselle, L., & Noël, M.-P. (2007). Basic numerical skills in children with mathematics learning disabilities: A comparison of symbolic vs. non-symbolic number magnitude processing. Cognition, 102, 361–395.
Rubinsten, O., & Henik, A. (2008). Developmental dyscalculia: Heterogeneity may not mean different mechanisms. Trends in Cognitive Sciences, 13(2), 92–99.
Wilson, A. J., & Dehaene, S. (2007). Number sense and developmental dyscalculia. In D. Coch, G. Dawson, & K. Fischer (Eds.), Human behavior, learning and the developing brain: Atypical development. New York: Guilford Press.

Dysgraphia
▶ Language-Based Learning Disabilities
▶ Socioemotional and Academic Adjustment Among Children with Learning Disorders

Dyslexia
▶ Language-Based Learning Disabilities
▶ Socioemotional and Academic Adjustment Among Children with Learning Disorders