The Need to Integrate Technology, Behavior-Based Instructional Design, and Contingency Management: An Opportunity for Behavior Analysis

Douglas A. Johnson

Western Michigan University


Abstract

Behavior analysis has a long history of relying on technology for its advances and, in turn, behavior analysis provides a framework for approaching technological implementations in a pragmatic manner. One of the field's earliest technological innovations was teaching machines, a development that is well-represented by today's use of computer-based instruction in both education and training. However, effective technology such as computer-based instruction requires an alignment between an analysis of behavior and an analysis of instructional content. This paper will cover the early historical development of technology in behavior analysis and the need for behavior-based instructional design in coordinating technology with the science of behavior. The paper will also provide a brief overview of the behavior-based instructional design process proposed by designers such as Susan Meyer Markle.

Keywords: teaching machines, computer-assisted instruction, behavior-based instructional design


Technology has long played an important role in the development of behavior analysis, and the field's relationship with technology runs in multiple directions. Technology can be viewed as the application of science to solve practical concerns. In this sense, the entirety of applied behavior analysis is a testament to how the field has successfully driven, and continues to drive, many important technologies. For the purposes of this article, however, technology is restricted to the narrower usage that refers to machinery and equipment. Even in this narrower sense, the relationship is often reciprocal, with machinery and equipment influencing behavior analysis and vice versa.

Technology and the Early Days of Behavior Analysis

A demonstration of this relationship can be traced back to B. F. Skinner himself. Skinner played many roles throughout his career: a thoughtful philosopher, a bold revolutionary, a pragmatic psychologist, a comprehensive scientist, a cultural commentator, and, most relevant to this paper, a person who loved technology. In both his personal and professional life, Skinner loved to tinker with and invent gadgets to solve the problems he saw around him (Vargas, 2004). His paper titled “A Case History in Scientific Method” is notable foremost for its rejection of the standardized methods of the psychological community and its questioning of the wisdom of popular formalized practices in general. However, it is also notable for how much technology molded his emerging science of behavior and led him to new discoveries (Skinner, 1956). In that paper, he articulated how he tinkered with research equipment to make his research activities easier. He crafted a new device that allowed rats to deliver their own reinforcers, rather than have the experimenter deliver these individually. This automated apparatus was a circular device that rotated as it delivered food to the rats. Skinner tinkered further by adding a pen and a string to the device. As food was delivered, the device would rotate, which in turn lowered both the string and pen slightly. This downward movement was recorded on a moving strip of paper. Although the procedure was somewhat complicated, the record it produced was not. Periods of reinforced activity over time could easily be seen in the form of curves, and, as Skinner said somewhat in jest, “science made great use of curves” (Skinner, 1956, p. 225). The first cumulative record was born, and rate of responding suddenly became a practical and sensitive dependent variable. Soon the cumulative recorder became a mainstay of early behavioral research, and Skinner frequently championed rate of responding as a fundamentally better measure for any science that concerned itself with behavior (Skinner, 1950; 1953). This new piece of technology helped shape the early directions of the experimental analysis of behavior, and Skinner eventually lamented the loss of such a sensitive machine and measure in a later publication (Skinner, 1976). Other authors have even gone as far as to suggest that this piece of technology and the measurement it permitted were among Skinner's greatest achievements (Binder, 1993; Lindsley, 1991).

Teaching Machines: The Forerunner of Computer-Based Instruction

Not only was technology critical in the early stages of the experimental analysis of behavior, it also played a central role in some of the first solutions applied by behavior analysts. During the 1930s and 1940s, Skinner appeared to be waiting until his science of behavior was more complete before applying his findings to societal concerns. Although his utopian novel Walden Two suggested some future goals for the science (Skinner, 1948), Skinner seemed largely content to investigate basic phenomena in his laboratory during this period. Even in regards to educating his own daughters, Skinner stated that “I had never felt that I should interfere in their schooling” (Skinner, 1983, p. 64). That position appeared to change with a Father's Day visit to his daughter's classroom in 1953. The easily observed inefficiencies and ineffectiveness of the classroom added a new urgency to his research agenda, and he began tackling applied issues more directly and frequently. Not surprisingly, Skinner saw the problems of education in terms of contingency management (i.e., providing effective antecedents and consequences for the targeted behavior). In regards to antecedents, instructional material was not systematically presented to guarantee a high probability of correct responding. In regards to consequences, reinforcement for correct responding was far too infrequent in the classroom, through no fault of the teacher, who is vastly outnumbered by his or her students. Once again, technology played a major role, with a piece of equipment becoming Skinner's first solution to this applied problem. Specifically, Skinner invented a teaching machine that he hoped would ease the burden of teachers while also improving the educational process. Skinner recognized that in almost any other field where there is an increased demand for production, mechanical aids tend to be developed to meet that demand (Skinner, 1958). In order to meet the demand for an increased production of educated people needed for skilled labor, he thought teaching machines should be developed to educate individuals more efficiently and effectively. Furthermore, Skinner thought teaching machines could improve upon many shortcomings of traditional instructional methods (Skinner, 1954; 1958). Although Skinner was not the first to suggest improving education through technology, his approach was one of the most comprehensive early attempts at automating parts of the educational process. For example, Sidney Pressey's self-scoring machines preceded Skinner's work by several decades (Pressey, 1926; 1927; 1932). However, Pressey's testing machines were designed primarily to assess student comprehension of material taught outside of the machine, whereas Skinner's teaching machines were designed to function independently during the instructional process (Skinner, 1958). In his discussions of teaching machines, Skinner listed several criteria essential for effective instruction that must be considered when designing automated instruction (Skinner, 1954; 1958):

  • The teaching device must induce active engagement with the material.
  • The instructional process should be learner paced, so that the machine proceeds only when the pupil is ready to proceed.
  • Mastery criteria must be used to prevent the learner from progressing to subsequent material without first understanding the prerequisite material.
  • Composing a response, rather than selecting one from a multiple-choice format, is the preferable response requirement.
  • Effective instruction should provide immediate and individualized feedback to strengthen or correct responding, as appropriate.
  • Instructional materials and response requests should proceed in steps large enough to efficiently advance a learner's repertoire but small enough that no learner should fail at any given step.

Meeting all of the above criteria on a large scale would likely prove incredibly difficult, impractical, and cost prohibitive without technological intervention. Skinner's teaching machines addressed the educational contingency flaws noted earlier by ensuring that the automated prompts would likely evoke the correct student response and that programmed consequences consistently followed student behavior. Despite his influence, Skinner's approach to improving instruction never seemed to take root. Teaching machines failed to achieve mass adoption and were never fully implemented in classrooms. It is critical to note that the demise of teaching machines had nothing to do with their educational effectiveness. Indeed, trial runs of teaching machines resulted in excited learners who readily learned new material very efficiently. All the tests of teaching machines were overwhelmingly positive, yet they failed to influence typical educational practices. Skinner placed much of the blame on traditional philosophies of education that suppose that improved teaching methods are not possible. However, his autobiography reveals another part of the answer: nearly ten years of frustrations and difficulties involving mass production as teaching machines were shopped around to various companies (Skinner, 1983). Overly cautious companies that repeatedly broke production promises or produced defective machines eventually diminished Skinner's time and enthusiasm. Although other individuals found moderately more success than Skinner in distributing teaching machines (see Escobar & Lattal, 2011, for a detailed review), this success was short-lived. These machines were also plagued by issues such as pricing (too expensive), portability (too heavy for salespeople), philosophy (perceived dehumanization), and the primitive status of the technology (limited innovation could occur with paper and turning knobs). Although crude by current standards, these teaching machines could be considered the 1950s equivalent of modern computer-based instruction (CBI). It is worth pointing out that the production obstacles and technological limitations that Skinner and others experienced with teaching machines are far less relevant to today's CBI.
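Skinner's criteria translate readily into modern software terms. The following minimal sketch (in Python, with hypothetical frame content and a simple two-consecutive-correct mastery rule chosen purely for illustration; it is not a reconstruction of any actual teaching machine program) shows how a frame-based lesson loop could satisfy active responding, learner pacing, constructed responses, immediate feedback, and a mastery criterion:

    # A minimal sketch of a frame-based teaching loop embodying Skinner's
    # criteria. The frames and mastery rule are illustrative assumptions.
    FRAMES = [
        {"prompt": "A stimulus that increases the future frequency of the "
                   "behavior it follows is called a _____.",
         "answer": "reinforcer"},
        {"prompt": "Withholding reinforcement for a previously reinforced "
                   "behavior is called _____.",
         "answer": "extinction"},
    ]

    MASTERY_CRITERION = 2  # consecutive correct responses needed to advance

    def run_program(frames):
        for frame in frames:
            consecutive_correct = 0
            while consecutive_correct < MASTERY_CRITERION:
                # Learner paced: the program waits for a composed response
                # (active engagement, constructed rather than multiple choice).
                response = input(frame["prompt"] + " ")
                if response.strip().lower() == frame["answer"]:
                    consecutive_correct += 1
                    print("Correct!")  # immediate, individualized feedback
                else:
                    consecutive_correct = 0
                    print("Not quite. The answer is '%s'." % frame["answer"])

    if __name__ == "__main__":
        run_program(FRAMES)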

The Potential of Modern Computer-Based Instruction

CBI has the potential to meet all of the criteria that Skinner outlined for teaching machines, and technologically enhanced instruction is much more accepted today than it was in Skinner's time. CBI and other technological solutions are increasingly commonplace in K-12, higher education, and employee training (Blackwell, Lauricella, & Wartella, 2014; Renes & Strange, 2011; Saghafian, 2011). Behavior analysts are well suited to the task of refining the use of such technology, and examples of behavior analysts doing such work can already be found in both education (Layng, Twyman, & Stikeleather, 2004) and business (Johnson & Dickinson, 2012). CBI has the potential to better facilitate contingencies since it can be programmed to continually monitor the behavior of learners and precisely deliver differential consequences. In workplace settings, it has been suggested that performance monitoring is one of the most important elements for optimizing employee behavior (Komaki, 1986). Although human monitors can tire or become inattentive, a computer program can remain continually on task and responsive. CBI is potentially more cost effective once it is developed (although up-front development costs should not be underestimated), since the instruction can be reused indefinitely at minimal cost (unlike human teachers and trainers). CBI can relieve teachers and trainers from the more tedious aspects of instruction, freeing their time for other important work tasks. The use of CBI also provides the opportunity for a finer level of data collection, since automated recording can potentially collect data on an unlimited number of users and behaviors. In turn, this wealth of data can be used to refine subsequent implementations of CBI (Twyman, Layng, Stikeleather, & Hobbins, 2004). In their article summarizing 12 years of comparative research, Johnson and Rubin (2011) examined interactive computer-based instruction with adult learners and found that CBI was as effective as, if not better than, alternatives (e.g., textbooks, classroom lectures) in over 95% of comparisons. Their review also highlighted a wealth of variables, well suited to a behavior analytic sensibility, to be explored in future studies.
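The automated recording described above need not be elaborate. As one possible illustration (a sketch only; the event fields and file format are assumptions, not a published standard), each learner response can be logged with enough detail to reveal which frames produce errors and therefore which parts of a program need redesign:

    # A minimal sketch of automated response recording for CBI.
    import csv
    import os
    from dataclasses import dataclass, asdict, fields

    @dataclass
    class ResponseEvent:
        learner_id: str
        frame_id: str
        response: str
        correct: bool
        latency_s: float  # seconds from prompt onset to response

    def log_events(events, path="response_log.csv"):
        """Append response events to a CSV file for later analysis."""
        names = [f.name for f in fields(ResponseEvent)]
        new_file = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=names)
            if new_file:
                writer.writeheader()
            writer.writerows(asdict(e) for e in events)

    def error_rate_by_frame(events):
        """Proportion of errors per frame, flagging items needing redesign."""
        totals, errors = {}, {}
        for e in events:
            totals[e.frame_id] = totals.get(e.frame_id, 0) + 1
            errors[e.frame_id] = errors.get(e.frame_id, 0) + (not e.correct)
        return {fid: errors[fid] / n for fid, n in totals.items()}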

The Failure of Modern Computer-Based Instruction

Although the potential for computer-based instruction to revolutionize education and business is tremendous, actual implementations have fallen short. Part of the reason for this failure may be that most computer-based instruction is designed to replicate traditional instruction (i.e., presentation of material to a relatively passive audience) rather than to capitalize on CBI's unique potential (Engelmann, 1992; Skinner, 1963). Note that even though the results of the Johnson and Rubin (2011) review tended to be positive for CBI, a very large volume of CBI publications was omitted due to the exclusionary criteria used, primarily a simple restriction to include only interactive forms of CBI (in keeping with Skinner's guidelines). Unfortunately, most forms of CBI are passive presentations of information. Even within the realm of interactive CBI, the quality and types of these interactions vary widely. No form of technology is a magical solution. Teaching machines, computer-based instruction, and any other technology all have the potential to make circumstances worse. Even a well-designed system for delivering consequences will fail in the absence of carefully considered instructional design. What matters is the enforcement of contingencies along with the quality of the instructional design of the program inside the machinery.

The Need for Behavior-Based Instructional Design

Instructional design is a broad term that is often used loosely. In practice, it is often used to describe the use of the ADDIE model (analyze, design, develop, implement, evaluate) or the development of learning objectives (Branch, 2009; Rothwell & Kazanas, 2008; Seels & Glasgow, 1998). Much of instructional design is dominated by cognitive and constructivist assumptions rather than behavior analysis. In this article, the phrase behavior-based instructional design will be used to distinguish a behavior analytic approach from other approaches to instructional design. Behavior-based instructional design is a systematic approach to identifying the critical variables that can be manipulated to produce efficient learning, together with the continual refinement of these instructional variables to improve environmental contingencies. Put differently, it is how designers can best establish different types of stimulus control for different types of performance outcomes. In contrast to other forms of instructional design, it is not rooted in learner traits or cognitions beyond the designer's influence, and it does not postulate unnecessary hypothetical processes or inferred structures as the primary explanatory models. Of course, philosophical assumptions and theories are not what the learner actually encounters in a learning situation, but differences in philosophy may lead to different solutions being proposed by the designers of those learning situations. When the burden of change is placed upon an internal cognitive process or structure, the proposed solutions tend to revolve around hoping that the internal processes self-correct, if possible (e.g., waiting until the learner is ready or has a mental breakthrough; simply accepting that some people are not well suited for certain demands). When the burden of change is placed upon the observable sequencing of environmental stimuli, the proposed solutions tend to revolve around active intervention (e.g., finding more effective prompts; designing better consequence delivery). An illustration of the importance of behavior-based instructional design for behavior analysis can be seen in the data from Project Follow Through, a nationwide federal study comparing different instructional approaches for children that remains one of the most ambitious educational experiments ever conducted. The study is often used to demonstrate the clear improvements produced by the behavioral method known as Direct Instruction (DI), which was sponsored in the experiment by the University of Oregon (Binder & Watkins, 1990; 2013). However, another lesson about behavior analysis can be taken from Project Follow Through, because Direct Instruction was not the only behavioral model tested: there was also the Behavior Analysis model sponsored by the University of Kansas. For the sake of simplicity, this article will refer to these two behavioral approaches as the Oregon and Kansas models. Although they were both behavioral models, they had very different effects. The Oregon model (DI) produced sizable gains in basic skills, conceptual skills, and affective skills, whereas the Kansas model produced sizable gains in affective skills, small gains in basic skills, and negative gains in conceptual skills (Watkins, 1988). What accounts for the differences in outcomes for models with the same theoretical orientation? Although there were several differences, behavior-based instructional design may have been a key ingredient.
Both models utilized differential consequences for performance, frequent monitoring of progress, and an explicit philosophical stance that the environment drives behavior (Engelmann, 2007; Watkins, 1997). However, unlike the Kansas model, the sponsors of the Oregon model used behavior-based instructional design to develop their own instructional materials. Put differently, the differences in outcomes could be attributed to the difference between consequence management alone versus consequence management combined with behavior-based instructional design. Behavior-based instructional design has long remained one of the most important elements of Direct Instruction (Engelmann, 1992). Although management by consequences alone may produce some desirable gains, a complete analysis of the contingencies requires the identification of the most effective antecedents, in addition to effective consequences, for the targeted behavior.

The Forerunner of Behavior-Based Instructional Design

One of the earliest behavior analysts to fully recognize the importance of good instructional design for technological solutions was Susan Meyer Markle. Markle worked with Skinner during the late 1950s and helped edit his book on verbal behavior (Skinner, 1957; credited as “Susan R. Meyer” in the preface). She also wrote programs for and tested some of the teaching machines Skinner had been developing (Skinner, 1983; Zemke & Armstrong, 1997). As such, she was well versed in examining different types of controlling stimuli in verbal relations and in analyzing content for automated instruction. She spent the rest of her career writing extensively about behavior-based instructional design and the proper use of technology in instruction (Markle, 1967; 1969; 1987; 1990). She often emphasized that technology required analyses of both instructional content and learner behavior to be effective. Markle, along with her husband Philip Tiemann, outlined processes to systematically teach different repertoire types in the most efficient way possible (Tiemann & Markle, 1990). They identified 10 basic learning outcomes and explicitly detailed the process for creating and testing for appropriate stimulus control (see Sota, Leon, & Layng, 2011 for an updated refinement of Tiemann and Markle's learning types). These learning types included simple motor responses, complex chains of responding, basic verbal discriminations, conceptual responding, novel behavior, and more.

Extended Example Illustrating the Process behind Behavior-Based Instructional Design

To illustrate what behavior-based instructional design might look like, an extended example showing how Tiemann and Markle (1990) systematically approached concept learning is documented below. Concepts are often taught in an insufficient or haphazard manner: definitions are simply provided, or examples are selected at random. As a starting point, it is difficult to get people to even agree upon what a concept is. Tiemann and Markle spoke of a concept as a set of “things” that belong together. To be more explicit, a concept is equivalent to a stimulus class, that is, a group of stimuli that are treated as belonging to the same group (they all evoke a common response) due to some common property (Keller & Schoenfeld, 1950). Thus, conceptual learning can be viewed as a type of generalization and discrimination learning with stimulus classes rather than individual stimuli. To say an individual has a concept is to say he or she responds similarly within a stimulus class and responds differentially between stimulus classes. Isolating the variables that determine whether or not something belongs within a stimulus class is often very difficult, even when interacting with individuals who are very accurate in their classifications. For example, people usually have no difficulty in deciding whether or not a stimulus is “human”. However, if asked for the criteria (i.e., the variables controlling their verbal behavior) used when deciding whether or not a particular stimulus was “human”, people have difficulty verbalizing all the relevant discriminative stimuli. Application of the rules for any given concept can be well learned, even if the articulation of the rules is not. Conceptual learning involves more than simply providing definitions, even though it is commonplace to find instructional attempts to teach concepts through definitions alone (a cursory reading of most college textbooks and computer-based instruction can confirm this). For example, one could memorize that a response is “the neurally innervated action of an organism’s effector” (definition borrowed from Michael, 1993, p. 5). Even if an individual provides this definition word for word on an exam, he or she still has not demonstrated conceptual stimulus control (although rote memorization of definitions may not be a bad starting place, especially if the definition functions like a checklist or job aid). As Tiemann and Markle (1990) point out, conceptual learning requires the classification of multiple examples and nonexamples using both novel and familiar stimuli. One could randomly choose a set of examples and nonexamples for teaching a new concept. To teach the concept of Beethoven-like classical music, one could utilize multiple Beethoven compositions as well as pieces from jazz, rock, and hip-hop. One problem remains: How does one decide whether the best examples and nonexamples have been selected for demonstrating accurate conceptual stimulus control? A nonexample drawn from the classical music of Mozart may prove to be a finer-grained test of conceptual understanding, since the sounds of jazz, rock, and hip-hop are very distinct from Beethoven’s music. As such, Mozart’s music would be a better nonexample than Miles Davis’ music for demonstrating that one understands the difference between Beethoven’s music and non-Beethoven’s music. Some nonexamples provide better evidence of discrimination than others.
Also, not all examples are equally efficient in demonstrating accurate generalization within a concept. To address this problem, Tiemann and Markle (1990) recommend using what they called the Prototype Approach to efficiently select examples and nonexamples. The Prototype Approach begins with identifying a prototype. For example, if one wanted to analyze the concept of “bicycle”, he or she would first imagine the most generic, prototypical bicycle possible. Next, the analyst would examine the various features of the bicycle and try to remove or alter those features to discover what is and is not essential for evoking the response “bicycle”. Remove one wheel and the stimulus becomes a unicycle. Add a third wheel and it becomes a tricycle instead (sidestepping the issue of training wheels). Having two wheels appears to be an essential element of the concept of bicycle. It is what Tiemann and Markle (1990) would call a critical attribute. To be more precise, a critical attribute is a feature that every stimulus within the conceptual class has and that, if altered, causes the stimulus to be excluded from the conceptual class. Thus, every single stimulus labelled as “bicycle” will have two wheels. As soon as one changes the number of wheels, the stimulus no longer qualifies as a “bicycle” and becomes something else instead. Most concepts will have multiple critical attributes. Motorcycles and mopeds are nonexamples of bicycles, thus illustrating the importance of bicycles being human-powered (non-gas, non-electrical). Without foot pedals, the bicycle becomes a scooter, thus highlighting foot pedals as a critical attribute. Every single stimulus the verbal community labels as a bicycle has these critical attributes, and any stimulus lacking one of those attributes is not labelled as such. As an aside, Tiemann and Markle’s process for understanding concepts in terms of analyzing the critical attributes used by the verbal community is very similar to Skinner’s proposal for understanding semantic relations (Skinner, 1945). Of course, there are some dimensions of a bicycle that can be changed without excluding it from the conceptual class known as “bicycle”. For example, the color of the bicycle can be changed while still retaining the verbal label of “bicycle”. Thus, a specific color is a nonessential feature of a bicycle. More specifically, it is what Tiemann and Markle would call a variable attribute: a feature of an example that, when changed, results in a new example that also belongs to the same conceptual stimulus class. If the prototypical bicycle had been red and the color was then changed to blue, what remains would be a new example of a bicycle. Many other variable attributes can be identified. Both small bicycles for children and large bicycles for adults qualify as bicycles. Although most bicycles have one seat, some will have more than one (i.e., tandem bicycles). Bicycles can have a typical saddle-type seat or a reclined seat such as that seen on recumbent bicycles. Although not typically seen, it is easy to imagine a bicycle without a seat (it probably would not be comfortable, but most people would still label it as a bicycle). Handlebars can be straight or curved without changing the “bicycle-ness” of the stimulus. Many recumbent bicycles feature handlebars at the sides of the seat but are still considered bicycles. Potentially, one could identify an almost unlimited number of variable attributes.
For example, the number of spokes, the typical storage locations, the length of the chain, and so on can all vary. However, Tiemann and Markle (1990) did not recommend listing every single possible variable attribute, but instead recommended picking the attributes with the most salient properties. How does the analyst determine which attributes are the most salient? It is recommended to initially use one's best judgment, followed by field testing the instruction with learners from the intended audience. If the learners are classifying examples as nonexamples due to some irrelevant feature, this suggests that the list of variable attributes needs to be expanded to directly address that irrelevant feature. Markle (1990) liked to state that it was impossible to tell whether an instructional program would work just by looking at it. One must repeatedly test the program with the full range of learners from the intended audience, and this lesson applies to computer-based instruction and all other forms of instruction alike. Returning to the bicycle analysis, the following critical attributes could be listed:

  • Two wheels
  • Completely person-powered
  • Foot pedals

The following variable attributes might be identified (which would be refined later based on learner errors):

  • Color (red, blue, yellow, green)
  • Size (small, large)
  • Type of seat (saddle, recline)
  • Type of handlebar (straight, curved, side)

With the critical and variable attributes defined, close-in nonexamples can now be developed for instruction. A close-in nonexample is a nonexample that lacks one and only one critical attribute. Tiemann and Markle (1990) argued that close-in nonexamples are the best stimuli to choose when trying to teach discriminations. For example, a moped is a close-in nonexample for the concept of bicycle (two wheels and pedals, but missing person-powered). The “closeness” of a moped is what makes it a much better nonexample for teaching and testing the concept of bicycle than a car, apple, fish, or some other randomly or intuitively selected stimulus. This is also why Mozart was a much better nonexample than Miles Davis for teaching the concept of Beethoven’s music. For each and every critical attribute, one should develop a close-in nonexample. This is the idea behind Tiemann and Markle’s Minimum Rational Set of Close-In Nonexamples: the analyst needs one close-in nonexample for each critical attribute. Since the concept of bicycle has three critical attributes, the Minimum Rational Set of Close-In Nonexamples would require a set of three stimuli (a programmatic sketch of constructing such a set follows the list below):

  • Stimulus with two wheels, foot pedals, but not person-powered (e.g., moped)
  • Stimulus with two wheels, person-powered, but no foot pedals (e.g., handcycle or scooter)
  • Stimulus with foot pedals, person-powered, but not two-wheeled (e.g., unicycle or tricycle)
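The derivation of this set is mechanical enough to express in code. The following minimal sketch (in Python; the data structures are illustrative assumptions, not Tiemann and Markle's own notation) represents the concept by its critical and variable attributes and generates one close-in nonexample specification per critical attribute:

    # A concept represented by its critical and variable attributes.
    CRITICAL = ["two wheels", "person-powered", "foot pedals"]
    VARIABLE = {
        "color": ["red", "blue", "yellow", "green"],
        "size": ["small", "large"],
        "seat": ["saddle", "recline"],
        "handlebar": ["straight", "curved", "side"],
    }

    def close_in_nonexamples(critical):
        """One nonexample per critical attribute: every attribute present
        except the single attribute being violated."""
        return [{"has": [a for a in critical if a != violated],
                 "lacks": violated}
                for violated in critical]

    for spec in close_in_nonexamples(CRITICAL):
        print("Has %s; lacks '%s'" % (spec["has"], spec["lacks"]))
    # Has ['person-powered', 'foot pedals']; lacks 'two wheels'   -> unicycle
    # Has ['two wheels', 'foot pedals']; lacks 'person-powered'   -> moped
    # Has ['two wheels', 'person-powered']; lacks 'foot pedals'   -> scooter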

The set of close-in nonexamples helps teach discriminations so that learners can correctly identify nonexamples. However, it is also necessary for learners' responding to generalize, so that the full range of examples is correctly labeled. The Minimum Rational Set of Examples (note: this is different from the set of nonexamples referred to above) addresses this by varying every dimension of the variable attributes selected. The variable attribute with the most dimensions determines the number of stimuli the Minimum Rational Set of Examples should have. In the list above, the variable attribute of color has the most dimensions (four), so this particular Minimum Rational Set of Examples would have four examples. In constructing this set, it is important to have every variable attribute vary as much as possible. The set of examples might look like this (a sketch of generating it follows the list):

  • A bicycle that is red, small, saddle seated, with straight handlebars
  • A bicycle that is blue, large, recline seated, with curved handlebars
  • A bicycle that is yellow, small, saddle seated, with side handlebars
  • A bicycle that is green, large, recline seated, with straight handlebars
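Because each attribute simply cycles through its dimensions until the largest attribute is exhausted, this set can also be generated mechanically. Here is a minimal sketch (Python, reusing the VARIABLE mapping from the previous sketch; the cycling rule is one straightforward reading of "vary every attribute as much as possible", not the authors' prescribed algorithm):

    # Generate the Minimum Rational Set of Examples by cycling each
    # variable attribute through its dimensions.
    from itertools import cycle

    VARIABLE = {
        "color": ["red", "blue", "yellow", "green"],
        "size": ["small", "large"],
        "seat": ["saddle", "recline"],
        "handlebar": ["straight", "curved", "side"],
    }

    def minimum_rational_examples(variable):
        n = max(len(dims) for dims in variable.values())  # set size = 4 here
        cyclers = {attr: cycle(dims) for attr, dims in variable.items()}
        return [{attr: next(c) for attr, c in cyclers.items()}
                for _ in range(n)]

    for example in minimum_rational_examples(VARIABLE):
        print(example)
    # {'color': 'red', 'size': 'small', 'seat': 'saddle', 'handlebar': 'straight'}
    # {'color': 'blue', 'size': 'large', 'seat': 'recline', 'handlebar': 'curved'}
    # {'color': 'yellow', 'size': 'small', 'seat': 'saddle', 'handlebar': 'side'}
    # {'color': 'green', 'size': 'large', 'seat': 'recline', 'handlebar': 'straight'}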

Notice that every single variable attribute has been varied. Additional dimensions could be developed, such as more dimensions for the variable of color (black, white, pink, etc.), but it is not necessary to show every single possible dimension for a variable attribute. In fact, the list would be infinite if one tried to vary every possibility for certain attributes, such as size. How variable the dimensions need to be will depend on how sophisticated the learners are. Only through empirical testing can it be discovered whether there are too many or too few dimensions.

All of the preceding information has been devoted to teaching concepts. The testing of concepts is another issue for consideration. Tiemann and Markle point out that one can only test for a concept using novel stimuli: examples and nonexamples that were not used during the teaching process. Otherwise, it is unclear whether the learner's behavior shows evidence of conceptual stimulus control or whether the learner simply responded correctly due to rote memorization. Test stimuli with too much formal similarity would not be acceptable either (e.g., teaching the concept of conditioned response using the item “salivating at the sight of cake” and then testing with the item “salivating at the sight of cookies”). If learners correctly identify novel examples, they are demonstrating appropriate generalization. If learners correctly identify novel nonexamples, they are demonstrating appropriate discrimination. If learners incorrectly identify novel examples (by calling them “nonexamples”), they are demonstrating undergeneralization. If learners incorrectly identify novel nonexamples (by calling them “examples”), they are demonstrating overgeneralization. If it is discovered that learners often make overgeneralization errors during testing, the addition of more close-in nonexamples would be prescribed. If it is discovered that learners often make undergeneralization errors during testing, more examples are needed. Markle (1990) would caution analysts not to overload instruction with too many examples and nonexamples; doing so would only make instruction tedious and inefficient (possibly evoking escape behaviors). This is why the process starts with the minimal number of examples and nonexamples needed to fully teach concepts and then adds additional material following thorough testing with real learners.
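This testing logic amounts to a small decision table, summarized in the sketch below (Python; the function name and labels are an illustrative restatement of the four outcomes described above, including the remedies they suggest):

    # Classify a learner's response to a NOVEL test stimulus.
    def classify_test_response(is_example, called_example):
        if is_example and called_example:
            return "appropriate generalization"
        if not is_example and not called_example:
            return "appropriate discrimination"
        if is_example and not called_example:
            return "undergeneralization (add more examples)"
        return "overgeneralization (add more close-in nonexamples)"

    # e.g., a learner who calls a novel moped a "bicycle":
    print(classify_test_response(is_example=False, called_example=True))
    # -> overgeneralization (add more close-in nonexamples)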

Where Behavior Analysis Needs to Go: The Synthesis of Technology, Design, and Consequences

By understanding instructional design from a behavioral perspective, technology can be better designed for the specific outcome that is desired. If one wants computer-based instruction to teach a concept as fully yet efficiently as possible, a systematic approach is necessary; otherwise, errors in generalization and discrimination are likely to occur. The same holds true for using technology to teach any sort of differentiated responding under specified stimulus control or to create repertoires that are likely to extend successfully to novel situations. This can also provide the framework for understanding many types of behavior sometimes falsely argued to be outside the jurisdiction of behavior analysis, such as problem-solving and creativity (Critchfield & Twyman, 2014; Kubina, Morrison, & Lee, 2006). Efforts to harness new technologies without understanding behavior-based instructional design, or behavior analysis in general, are less likely to be successful, since an understanding of the person using the technology is left out of the equation. Some progress has been made in merging these areas (for example, see the Headsprout development process as described by Twyman, Layng, Stikeleather, and Hobbins, 2004), but much more is sorely needed. It may be argued that behavior analysts are among the few people prepared to fully capture the potential of technology, because the field is well situated to understand (1) emerging technology in relation to human behavior, (2) behavior-based instructional design, and (3) management of contingencies. What is needed for the future is an integrative approach that aligns these areas in order to maximize practical success in areas such as education and training.

References

  • Binder, C. (1993). Behavioral fluency: A new paradigm. Educational Technology, 33, 8-14.
  • Binder, C., & Watkins, C. L. (1990). Precision Teaching and Direct Instruction: Measurably superior instructional technology in schools. Performance Improvement Quarterly, 3, 74-96. doi:10.1111/j.1937-8327.1990.tb00478.x
  • Binder, C., & Watkins, C. L. (2013). Article update: Precision Teaching and Direct Instruction – Measurably superior instructional technologies in schools. Performance Improvement Quarterly, 26, 96-115. doi:10.1002/piq.21145
  • Blackwell, C. K., Lauricella, A. R., & Wartella, E. (2014). Factors influencing digital technology use in early childhood education. Computers & Education, 77, 82-90. doi:10.1016/j.compedu.2014.04.013
  • Branch, R. M. (2009). Instructional design: The ADDIE approach. New York, NY: Springer Science.
  • Critchfield, T. S., & Twyman, J. S. (2014). Prospective instructional design: Establishing conditions for emergent learning. Journal of Cognitive Education and Psychology, 13, 201-217. doi:10.1891/1945-8959.13.2.201
  • Engelmann, S. (1992). War against the schools’ academic child abuse. Portland, OR: Halcyon House.
  • Engelmann, S. (2007). Teaching needy kids in our backward system: 42 years of trying. Eugene, OR: ADI Press.
  • Escobar, R., & Lattal, K. A. (2011). Observing Ben Wyckoff: From basic research to programmed instruction and social issues. The Behavior Analyst, 34, 149-170.
  • Johnson, D. A., & Dickinson, A. M. (2012). Using postfeedback delays to improve retention of computer-based instruction. The Psychological Record, 62, 485-496.
  • Johnson, D. A., & Rubin, S. (2011). Effectiveness of interactive computer-based instruction: A review of studies published between 1995 and 2007. Journal of Organizational Behavior Management, 31, 55-94. doi:10.1080/01608061.2010.541821
  • Keller, F. S., & Schoenfeld, W. N. (1950). Principles of psychology: A systematic text in the science of behavior. East Norwalk, CT: Appleton-Century-Crofts.
  • Komaki, J. L. (1986). Toward effective supervision: An operant analysis and comparison of managers at work. Journal of Applied Psychology, 71, 270-279. doi:10.1037/0021-9010.71.2.270
  • Kubina, R. M., Morrison, R. S., & Lee, D. L. (2006). Behavior analytic contributions to the study of creativity. The Journal of Creative Behavior, 40, 223-242. doi:10.1002/j.2162-6057.2006.tb01275.x
  • Layng, T. V. J., Twyman, J. S., & Stikeleather, G. (2004). Selected for success: How Headsprout Reading Basics teaches children to read. In D. J. Moran & R. W. Malott (Eds.), Evidence-based educational methods (pp. 171-197). St. Louis, MO: Elsevier/Academic Press.
  • Lindsley, O. R. (1991). Precision Teaching’s unique legacy from B. F. Skinner. Journal of Behavioral Education, 1, 253-266. doi:10.1007/BF00957007
  • Markle, S. M. (1967). Empirical testing of programs. In P. C. Lange (Ed.), Programmed instruction: Sixty-sixth yearbook of the National Society for the Study of Education (pp. 104-138). Chicago, IL: University of Chicago Press.
  • Markle, S. M. (1969). Good frames and bad: A grammar of frame writing. New York, NY: John Wiley & Sons, Inc.
  • Markle, S. M. (1987). Honor, power, riches, fame, and sex—What do we really want? Or, goals for the practitioner of learning technology. Performance & Instruction, 26, 29-31. doi:10.1002/pfi.4160260213
  • Markle, S. M. (1990). Designs for instructional designers. Champaign, IL: Stipes Publishing Company.
  • Michael, J. L. (1993). Concepts and principles of behavior analysis. Kalamazoo, MI: Association for Behavior Analysis.
  • Pressey, S. L. (1926). A simple apparatus which gives tests and scores—and teaches. School and Society, 23, 373-376.
  • Pressey, S. L. (1927). A machine for automatic teaching of drill material. School and Society, 25, 549-552.
  • Pressey, S. L. (1932). A third and fourth contribution toward the coming “industrial revolution” in education. School and Society, 36, 668-672.
  • Renes, S. L., & Strange, A. T. (2011). Using technology to enhance higher education. Innovative Higher Education, 36, 203-213. doi:10.1007/s10755-010-9167-3
  • Rothwell, W. J., & Kazanas, H. C. (2008). Mastering the instructional design process: A systematic approach. San Francisco, CA: Pfeiffer.
  • Saghafian, M. (2011). A critical review of research on technology-based training in business organizations. Research in Higher Education Journal, 10, 1-9.
  • Seels, B., & Glasgow, Z. (1998). Making instructional design decisions. Upper Saddle River, NJ: Merrill.
  • Skinner, B. F. (1945). The operational analysis of psychological terms. Psychological Review, 52, 270-277. doi:10.1037/h0062535
  • Skinner, B. F. (1948). Walden Two. Englewood Cliffs, NJ: Prentice-Hall, Inc.
  • Skinner, B. F. (1950). Are theories of learning necessary? Psychological Review, 57, 193-216. doi:10.1037/h0054367
  • Skinner, B. F. (1953). Some contributions of an experimental analysis of behavior to psychology as a whole. American Psychologist, 8, 69-78. doi:10.1037/h0054118
  • Skinner, B. F. (1954). The science of learning and the art of teaching. Harvard Educational Review, 24, 86-97.
  • Skinner, B. F. (1956). A case history in scientific method. American Psychologist, 11, 221-233. doi:10.1037/h0047662
  • Skinner, B. F. (1958). Teaching machines. Science, 128, 969-977. doi:10.1126/science.128.3330.969
  • Skinner, B. F. (1961). Why we need teaching machines. Harvard Educational Review, 31, 377-398.
  • Skinner, B. F. (1963). Reflections on a decade of teaching machines. Teachers College Record, 65, 168-177.
  • Skinner, B. F. (1976). Farewell, my lovely! Journal of the Experimental Analysis of Behavior, 25, 218. doi:10.1901/jeab.1976.25-218
  • Skinner, B. F. (1983). A matter of consequences. Washington Square, NY: New York University Press.
  • Sota, M., Leon, M., & Layng, T. V. J. (2011). Thinking through text comprehension II: Analysis of verbal and investigative repertoires. The Behavior Analyst Today, 12, 12-20.
  • Tiemann, P. W., & Markle, S. M. (1990). Analyzing instructional content: A guide to instruction and evaluation. Champaign, IL: Stipes Publishing Company.
  • Twyman, J. S., Layng, T. V. J., Stikeleather, G., & Hobbins, K. (2004). A non-linear approach to curriculum design: The role of behavior analysis in building an effective reading program. In W. L. Heward et al. (Eds.), Focus on behavior analysis in education (Vol. 3). Upper Saddle River, NJ: Merrill/Prentice Hall.
  • Vargas, J. S. (2004). A daughter’s retrospective of B. F. Skinner. The Spanish Journal of Psychology, 7, 135-140.
  • Watkins, C. L. (1988). Project Follow Through: A story of the identification and neglect of effective instruction. Youth Policy, 10, 7-11.
  • Watkins, C. L. (1997). Project Follow Through: A case study of contingencies influencing instructional practices of the educational establishment. Cambridge, MA: Cambridge Center for Behavioral Studies.
  • Zemke, R., & Armstrong, J. (1997). Timeless rules for good instruction. Training, 34, 55-60.
