
Teaching With Student Response Systems in Elementary and Secondary Education Settings: A Survey Study

August 2, 2007

By Penuel, William R.; Boscardin, Christy Kim; Masyn, Katherine; Crawford, Valerie M.

Abstract This study examined how 498 elementary and secondary educators use student response systems in their instruction. The teachers all completed an online questionnaire designed to learn about their goals for using response systems, the instructional strategies they employ when using the system, and the perceived effects of response systems. Participants in the study tended to use instructional strategies similar to those reported in higher education. These include posing questions to check for student understanding and diagnose student difficulties, sharing a display of student responses for all to see, asking students to discuss or rethink answers, and using feedback from responses to adjust instruction. A latent class analysis of the data yielded four profiles of teacher use based on frequency of use and breadth of instructional strategies employed. Teachers who used the technology most frequently and who employed the broadest array of strategies were more likely to have received professional development in instructional strategies and to perceive the technology as more effective with students. Keywords Student response systems * Teaching practice * Latent class analysis

Introduction

A number of reform initiatives have sought to provide mathematics and science teachers in elementary and secondary education with powerful new methods for improving instruction. Many reforms in these subjects have focused on improving student work in labs and small groups (Friedler, Nachmias, & Linn, 1990; Mokros & Tinker, 1987). Yet roughly a third of science instruction takes place as part of whole-class instruction (Martin et al., 2001). In mathematics, even though reform initiatives in schools often aim at increasing small group and class-wide discussion, a recent study of mathematics instruction found that whole-class, teacher-directed instruction still dominates in elementary schools (Rowan, Harrison, & Hayes, 2004).

Many researchers believe whole-class instruction still has an important role in learning (Klahr & Nigam, 2004; Schwartz & Bransford, 1998). Whole-class instruction provides teachers with multiple opportunities to provide feedback to students on their thinking (O’Connor & Michaels, 1996). Especially after a hands-on encounter with a new topic, students may be ready to learn from lectures by teachers on the concepts or from demonstrations that address students’ pre-conceptions about particular topics (diSessa & Minstrell, 1998). There may indeed be a “time for telling” when students can learn more from teachers’ exposition than from reading or exploring on their own (Schwartz & Bransford, 1998).

To date, researchers have paid little attention to scalable innovations to improve whole-class instruction in K-12 mathematics and science classrooms (O’Connor, 1994; O’Connor & Michaels, 1996). At the same time, within higher education, instructors have been developing a promising approach that uses classroom network technologies to promote student engagement in large lecture classes and to increase teachers’ awareness of students’ knowledge of scientific concepts. When the teacher poses a question in a lecture hall, all students respond to the question over the network. Students’ responses are anonymous and are immediately aggregated and displayed for the teacher and students, thus “making student thinking visible.” With response technology, teachers can integrate such questioning-with universal and immediate response from all students-into instruction and use the technology for a variety of purposes, such as elicitation of students’ initial ideas, formative assessment, instructional decision making, polling students about preferences and interests, and quizzing. Prior research suggests that-when combined with effective questioning, discussion, and feedback-classroom network technology constitutes a powerful catalyst for conceptual change, heightened student engagement in class, and, because involvement and feedback for all students is equal, greater equity in science instruction (Crouch & Mazur, 2001; Roschelle, Penuel, & Abrahamson, 2004).

An increasing number of K-12 teachers are using response system technology. Several commercial companies have begun to sell relatively low-cost systems to elementary, middle, and high school teachers. These companies have emphasized a variety of potential effects of response system technology to educators at this level: preparing students to perform well on standardized tests; increasing class participation from a variety of students; and enhancing formative feedback to the teacher on how students are doing. Their efforts have clearly been successful in convincing K-12 educators of the value of their product: some companies have sold thousands of units to K-12 customers in every grade from K to 12.

Although a number of researchers have studied student response systems in higher education, there has been very little research at the K-12 level. There are examples of studies that show promising effects on achievement (Robinson, 2002), as well as case studies that suggest promising applications in areas such as mathematics (Hudgins, 2001) and reading (Hartline, 1997). At the same time, case study researchers have also raised questions about how feasible it is to implement response systems in smaller classes (Means & Olson, 1995). Only recently, as wireless networks have begun to enable more diverse forms of student participation and learning in class, have K-12 researchers in mathematics and science education sought explicitly to develop models of teaching and learning in the networked classroom (Hegedus & Kaput, 2004) and to study their effects on student learning (Hegedus, 2003; Lonsdale, Baber, & Sharples, 2004; Wilensky & Stroup, 2000).

One important goal for advancing understanding of student response systems in K-12 settings is to learn about teachers’ purposes for using them and to analyze the teaching strategies they use in conjunction with the technology. A research base in higher education already exists that can guide such a study, and an understanding of teachers’ goals and practices is critical to designing appropriate studies of impact. Experimental studies to answer the question, “Are student response systems effective in improving student achievement?” are necessary. But for such studies to succeed in providing a good answer to the question, researchers need first to identify measures tied to teachers’ goals and formulate hypotheses about what kinds of uses of response systems will produce effects.

Survey-based studies of teachers’ goals and instructional practices with response system technologies can provide researchers with data that can inform experimental research. Surveys administered to national samples of teachers in the past have provided researchers with a better sense of how prevalent particular practices are, and they have helped researchers identify an appropriate focus for subsequent experimental studies (Berends & Garet, 2002; Desimone & Le Floch, 2004). In this paper, we present results of our own survey study of K-12 teachers’ goals for using response systems, instructional practices with the systems, and perceptions of effects in their classrooms. These results are intended to provide a broad look at response systems in elementary and secondary education and to inform future impact studies.

Teaching with response systems in higher education

Researchers who have studied student response systems in higher education share a belief that the technology alone cannot bring about improvements to student participation in class and achievement; rather, the technology must be used in conjunction with particular kinds of teaching strategies. Researchers who have studied instructors’ uses of student response systems in higher education label many of the strategies described below as “constructivist” or “student-centered,” in that the strategies call for significant roles and responsibilities for students in the classroom in promoting a deep understanding of the subject matter (Dufresne, Gerace, Leonard, Mestre, & Wenk, 1996; Nicol & Boyle, 2003). Researchers believe these kinds of strategies are widely adopted in conjunction with response systems to support greater interactive engagement in class, even though early uses of response systems emphasized privacy of responses and feedback and behavioral objectives-such as pacing of lectures based on continuous student requests to increase or decrease pace (Judson & Sawada, 2002).

There has been much attention across studies to what teachers must do to teach effectively with response system technology (Abrahamson, Owens, Demana, Meagher, & Herman, 2003; Dufresne et al., 1996; Mazur, 1997). What constitutes effective use, however, depends upon instructors’ purposes for using response systems, and these purposes vary widely. A chief goal of many teachers is to promote greater interactive engagement with the subject matter (Draper & Brown, 2004). These teachers often expect the system to facilitate broader participation in class, both by having all students respond to their questions and by engaging them in discussion focused on those questions (Draper & Brown, 2004). Teachers often use response systems for assessment purposes to find out how well students know material they are teaching (Draper & Brown, 2004; Dufresne & Gerace, 2004). Sometimes, teachers use the data from student responses formatively to adjust their instruction (Dufresne & Gerace, 2004). A number of researchers have given particularly close attention to the important role of questioning in teaching with response systems, both in facilitating engagement and in diagnosing student understanding (Boyle, 1999; Dufresne & Gerace, 2004; Poulis, Massen, Robens, & Gilbert, 1998). To stimulate discussion, for example, researchers have suggested that questions that yield divergent student responses are more effective than those that are easy or lead all students to a single answer (Wit, 2003). For teachers using response systems to assess student learning, it appears that questions that elicit students’ pre-conceptions and that help teachers adapt their teaching to the needs of students are most effective (Draper & Brown, 2004; Wit, 2003). The timing of questions further shapes the nature of information an instructor gains about student understanding. Questions posed after a lecture or explanation can be used to check understanding (Dufresne et al., 1996). By contrast, posing questions before a lecture tends to elicit pre-conceptions in ways that can be used to shape instruction (Dufresne et al., 1996).

Structuring opportunities for peer or whole-class discussion appears to be a critical aspect of promoting greater classroom interaction with response systems (Dufresne & Gerace, 1994; Judson & Sawada, 2002). Researchers have observed that discussion helps students consider alternative ways of thinking about a concept or problem (Dufresne et al., 1996) and aids in developing deeper student understanding of the meaning of concepts (Judson & Sawada, 2002). Explaining to peers is believed by some to be what makes discussion effective in helping transform students’ misconceptions (Judson & Sawada, 2002). Response systems facilitate discussion by providing an anchor (aggregate responses on a shared display) and a set of artifacts that students can use to build knowledge (Truong, Griswold, Ratio, & Star, 2002).

Some researchers have also explored the effects of tying student responses to class grades. Some of these researchers report that this practice served as an incentive for students to participate (Burnstein & Lederman, 2001; Fagen, Crouch, & Mazur, 2002). Others, however, have argued that using the system primarily to grade students detracts from the learning environment (Dufresne & Gerace, 2004; Ganger & Jackson, 2003). Researchers at the University of Colorado, for example, have found that the heightened accountability placed on students makes some of them anxious (Ganger & Jackson, 2003). Students who were coming to class for the first time tended to be disruptive; many resented that participation in class could be more accurately measured by the response systems. The use of the system itself in class appears to exert some pressure for students to participate (Nicol & Boyle, 2003), and some researchers have suggested that a shared classroom display makes it hard for students to “hide” even though responses are for the most part anonymous (Nicol & Boyle, 2003).

Researchers acknowledge that not all subject matters lend themselves well to the kinds of factual and conceptual questions response systems are designed to accommodate best (Stuart, Brown, & Draper, 2004). Research in response systems has focused primarily on the domains of physics, engineering, and computer science, where the ability to give specific, accurate answers to conceptual questions is critical (Draper & Brown, 2004). Researchers have therefore argued that effective teaching with response systems will depend on scaling out from these subject areas to humanities and social sciences courses, where it may be useful to pose different kinds of questions to students, about such topics as their perceived interest or boredom in class or their perspectives on some social or historical issue (Anderson, Anderson, VanDeGrift, Wolfman, & Yasuhara, 2003; DiGiano et al., 2003; Piazza, 2002; Stuart et al., 2004; Sung, Gips, Eagle, DeVaul, & Pentland, 2004). A principle emerging from findings across a range of disciplines is that teachers need a broad array of questions mapped to their curriculum to make effective use of response systems (Dufresne & Gerace, 2004; Fagen, Crouch, & Mazur, 2002).

Bridging to teaching in K-12 settings: some initial expected practices

Based on our review of research on the use of response technology in higher education, we anticipated that teachers’ goals for using response systems would vary at the elementary and secondary level, just as they do in higher education. For example, some teachers might be under pressure to improve standardized test scores, and they may view student response systems as a means to help them diagnose how well students are likely to perform on end-of-year tests. They might also use the systems to prepare students to perform well on those tests. Such a trend toward use of educational technology for diagnostic assessment and test preparation is increasingly evident in the field today (Means, Roschelle, Penuel, Sabelli, & Haertel, 2003). Alternately, some teachers might want to use the technology to enhance student engagement, much in the way technology has been reported to improve motivation in elementary and secondary education in the past (Means & Olson, 1995; Sandholtz, Ringstaff, & Dwyer, 1997).

We also expected that many of the same pedagogical strategies used in higher education would be used in elementary and secondary education settings. These strategies include posing conceptually focused questions, requiring students to answer questions, displaying student responses for all to see, and engaging students in discussion of their responses to teacher questions. Despite differences between university and K-12 students in their expected level of knowledge and skills, we anticipated that the same strategies would be used because the technological affordances of response systems seem to support convergence on these practices across a wide range of higher education settings (Roschelle et al., 2004). Even though practitioners have developed slightly different models of teaching practice-such as Peer Instruction (Mazur, 1997), Interactive Engagement (Hake, 1998), and Assessing to Learn (Dufresne & Gerace, 2004)-all of the models share common elements and emphases on questioning, displaying student responses, and discussing student responses.

We anticipated, too, that there might be differences in teachers’ goals and practices related to subject matter, class size, and teacher characteristics. As has been found in higher education settings, for example, we expected that subject matter would make a difference in how teachers pose questions and structure discussions. In contrast to higher education settings, we expected that smaller class sizes might lead teachers to adopt different kinds of practices. However, because there has been so little empirical work to test this conjecture, we could not at the outset of our study speculate as to how class size might affect teachers’ goals and practices. Finally, we expected that there might be a relationship between teachers’ instructional philosophy and their approach to integrating student response systems into their instruction. Past research has found that teachers who have a more student-centered, constructivist philosophy of instruction are more likely to adopt new technologies in the classroom (Becker & Anderson, 1998a).

The current study

The current study presents an analysis of how teachers in K-12 settings use student response system technology. Through a survey of teachers who use one company’s technology with students in class, we sought to answer the following research questions:

* For what purposes do K-12 teachers use student response system technologies?

* Can we identify distinct “profiles of use” of response systems among teachers using these systems?

* If so, are such profiles associated with particular characteristics of teachers, classrooms, or professional development experiences?

* Do perceptions of the effects of response systems on teaching and learning correlate with particular profiles of use?

The study was part of a larger grant awarded to SRI International by the National Science Foundation. The chief aim of that grant was to help plan for a large-scale, experimental study of the effectiveness of student response systems in science education. The survey study was intended in part to support the planning of the research by helping establish the feasibility of conducting such a study in an elementary or secondary setting. Because so little research had been conducted at this level, researchers were concerned that teaching practice might not have matured enough to lend itself well to a formal experimental test. The study was also intended to help researchers build a model of teaching practice that might be used in a future study, if it proved feasible to conduct a study at the K-12 level.

Method

Study design

At this initial stage of work to address these questions, we relied on a large-scale survey study of K-12 teachers. Survey research is often useful for establishing the prevalence and frequency of particular instructional practices (Desimone & Le Floch, 2004). Critics of survey research have suggested teachers are likely to respond to surveys in ways that are biased toward socially desirable responses, but bias is low and reliability of teacher self-report data is high with surveys that seek to measure frequency of behaviors and teachers’ use of particular pedagogical strategies (Garet, Porter, Desimone, Birman, & Yoon, 2001; Herman, Klein, & Abedi, 2000; Koziol & Burns, 1986; Mullens, 1998; Ross et al., 1997). Furthermore, studies have shown that there is a high correlation between observed and self-reported instructional practice and teacher experiences (Burnstein et al., 1995; Mayer, 1999; Smithson & Porter, 1994). The primary focus of our questions and analysis is on the kinds of response items regarding practice, objective background data, and teachers’ reported beliefs that have been found to yield valid information from surveys in the past. Although we did also ask teachers to report on perceived effectiveness of using response system technologies, we view these data as preliminary and recognize that they represent teacher beliefs, perceptions, and self-reports of practices, rather than objective data on instructional practice and student participation and achievement. These self-reported data, however, offer important information regarding how teachers perceive the links among goals, teaching strategies, and outcomes.

Technology

All survey respondents were current users of eInstruction’s Classroom Performance System (CPS). Among other capabilities, this system allows users to perform tasks that are considered essential by researchers in this area: pose questions in the system, collect and aggregate student responses, and display them in a histogram or other summary form. To use the system, a desktop or laptop computer is required, as are eInstruction’s response pads (“clickers”) and an infrared- or radio-frequency-enabled receiver unit.

Sample

A total of 584 K-12 teachers from schools and districts across the United States completed at least part of the online questionnaire. Of these, 35.7% (n = 209) were elementary school teachers, 29.7% (n = 174) were middle school teachers, and 34.4% (n = 201) were high school teachers. The median years taught for the sample was 11 years. The teachers in the sample had been using the CPS for a median of two semesters at the time they completed the survey. Nearly all (94%) had adopted the CPS by choice, most often having been offered the opportunity to use it by someone else at no cost to themselves.

The teachers in the sample used the CPS systems for a variety of subjects, across different levels of the educational system. Table 1 shows how many teachers at each level reported using student response system technology for different subjects. The number is larger than the total number of teachers because some teachers use the system for multiple subjects.

As the table above indicates, elementary school teachers were most likely to use the CPS across subject matters, in all likelihood because they have responsibility for multiple subjects in the curriculum. The plurality of middle school teachers used the system for mathematics, with a substantial percentage of teachers also using the system for science and English/Language Arts. In high school, the plurality used the CPS for science, with significant percentages also using the system for mathematics and English/ Language Arts. Respondents who marked “other” were not asked to describe their use.

It is important to note that it is not possible to determine how representative this sample was of the population of teachers using response system technologies. CPS users represent just one group of users, and we chose eInstruction’s technology because of its popularity and because of the company’s database of users. This database included approximately 1,000 users in districts and schools spread across the U.S. Despite that geographic spread, we cannot determine how representative the sample was of practices across the U.S. Our analyses, however, did not require a representative sample; what was necessary was to obtain a high level of variability in both independent and dependent variables. Our sample proved adequate in this respect.

Instrument

The study relied on a single questionnaire, developed by the researchers and divided into six sections. Our team’s analysis of the alignment of the instrument with the existing research base was the primary basis for establishing the content validity of the questionnaire. We also used pilot data from two teachers to confirm that they interpreted our questions as intended. Reliability statistics, calculated from the actual survey responses, are reported in the results section.

The first section asked teachers to report how many years they had been teaching, the grade levels to which they were assigned, the subjects in which they used response system technology, the number of semesters they had been teaching with the system, and their access to professional development. There were two professional development items: one asked teachers to report how much training time they had received in the technical aspects of the system (response options were: 0 h, 8 h), and the second item asked about training in pedagogical aspects of the system (same response options). The final question in this section asked teachers to indicate the percentage of students eligible for free- or reduced-price lunch or from different cultural groups in classes where teachers used response systems.

The second section focused on specific uses of response systems. It included questions about how often teachers could access the system in their classrooms and how often they used it (less than once a month, once or twice a month, once a week, 2 or 3 days a week, or usually every day). This section also included a 17-item question about the specific pedagogical practices teachers employ when using the response system. Sub-questions asked about teachers’ questioning strategies, use of displays, small-group discussion, whole-class discussion, and use of data for instructional decision making. For each, teachers were to indicate how often they engage in the practice when using the system (hardly ever/never, sometimes, most of the time, nearly every time).

The third section focused on teachers’ goals for using response systems. This section consisted of a single, 12-part question in which teachers were to indicate how important particular goals were for them in using the system, on a scale from 1 (very unimportant) to 7 (very important). We listed a range of purposes for using the system identified in the research, including formative purposes (aimed primarily at improving or adjusting instruction using feedback from the system) and summative purposes (aimed primarily at making judgments about student learning).

The fourth section asked teachers to report on their pedagogical beliefs. This section was intended to elicit the extent to which teachers endorsed more or less constructivist views of teaching, in which students were expected to have central roles in peer and whole-class discussion. It consisted of 16 researcher-constructed items adapted from Becker and Anderson (1998b) and designed to map onto potential uses of response systems; teachers were to respond on a scale from 1 (not at all true) to 5 (very true) for each statement.

The fifth section focused on perceived effects of using response system technology. Teachers were to indicate their level of agreement with each of 19 statements of possible effects reported in the research on student response systems. These include statements such as, “Students are more actively engaged in a CPS class than in others,” and “The CPS helps me tell if the students understand a concept.” The response options for each statement were: strongly disagree, disagree, neither agree nor disagree, agree, and strongly agree.

The sixth and final section asked teachers to indicate what science topics they taught as part of their instruction, if they taught science. Their responses to this item were not used in the analysis of data reported here.

Procedure

All participants were contacted via a company newsletter and company representatives to solicit their participation. Participants were given a 17-week period during the winter of 2004-2005 to complete the survey. SRI provided a $10 gift certificate to all respondents who completed a questionnaire; as a further incentive to complete the questionnaire, all respondents were entered in a drawing to win one of five CPS systems, donated by eInstruction. Winners were selected at random by SRI.

Data analysis

Analysis of teachers’ goals for using response systems, pedagogical practices, and perceived effects

We first performed a categorical exploratory factor analysis (EFA) to identify the factor structures for two sets of items: (a) items intended to elicit the salience of particular goals for using the system and (b) items intended to elicit the frequency of teachers’ use of particular pedagogical practices in conjunction with response systems. EFA provides the number of factors or underlying constructs as well as the patterns of the factor loadings. To determine the number of factors, we used a combination of the Kaiser rule, examination of model fit, and the interpretability of the factor loadings. A promax rotation was used to facilitate the interpretation of the factor loadings when the factors had moderate correlations.
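As an illustration of one of these heuristics, the sketch below applies the Kaiser rule (retain as many factors as there are eigenvalues of the item correlation matrix greater than one). It is a minimal sketch only: it assumes ordinary Pearson correlations and simulated ratings, whereas a categorical EFA such as the one conducted here would rest on polychoric correlations and formal fit statistics; the function name and data are hypothetical.

```python
# Minimal sketch of the Kaiser rule; assumes Pearson correlations on
# simulated Likert-style ratings (illustration only, not the study's data
# or its categorical EFA procedure).
import numpy as np

def kaiser_rule(item_responses: np.ndarray) -> int:
    """Count eigenvalues of the item correlation matrix that exceed 1."""
    corr = np.corrcoef(item_responses, rowvar=False)  # item-by-item correlations
    eigenvalues = np.linalg.eigvalsh(corr)            # real eigenvalues (symmetric matrix)
    return int(np.sum(eigenvalues > 1.0))

rng = np.random.default_rng(0)
simulated = rng.integers(1, 5, size=(200, 17)).astype(float)  # 200 teachers, 17 items
print(kaiser_rule(simulated))
```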

Identification of profiles of use

We used latent class analysis (LCA) to identify distinct profiles of use of student response systems among the teachers who responded to our questionnaire. LCA is sometimes described as the categorical analogue to factor analysis: it models categorical indicators with an underlying categorical latent variable. LCA can be considered a person-centered analysis in which the goal is to understand the similarities and differences in response patterns across individuals in the data set. Similar response patterns are grouped together into general profiles of responses, each defined by a set of item response category probabilities. Thus, there are a finite number of response profiles (much smaller than the total number of observed response patterns), each defining a latent class, and the probability of each individual’s membership in each profile is computed. Modal profile class assignment is done by placing individuals in the profile class for which they have the highest estimated probability of membership. Correlates of profile class membership can also be investigated within the LCA model or post-hoc, based on modal class assignment. Thus, LCA is a model-based approach to cluster analysis with categorical variables (see Zhang, 2004).

The first step in an LCA is to determine the number of latent classes, K, that adequately summarizes the different response patterns on the observed items. As with EFA, there is no single statistical test of the number of latent classes, but there are several information-theoretic techniques for comparing models with varying numbers of latent class profiles. The Bayesian information criterion (BIC) is often applied to the problem of latent class enumeration (Schwarz, 1978). This index is a function of the model log likelihood with a penalty for the number of parameters estimated relative to the sample size; comparing across models, the lowest BIC value indicates the preferred model. There is also an empirically based likelihood ratio test (LMR-LRT), developed by Lo, Mendell, and Rubin (2001) for finite mixture models, that has shown promise for latent class enumeration in preliminary simulation studies. For this test, each K-class model is compared to a (K-1)-class model, with a significant p-value indicating a significant model improvement with the additional class. Summaries of classification uncertainty, such as entropy-based measures, are also used to evaluate model quality (Ramaswamy, DeSarbo, Reibstein, & Robinson, 1993). Entropy is an index ranging from zero to one, with a value of one indicating perfect classification and a value of zero indicating classification no better than random assignment to latent classes. In addition to these information heuristics, the intended use of the resultant classes and other substantive considerations, such as the interpretability and face validity of the classes, should also guide the class enumeration process.
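For reference, the two enumeration indices described above can be written compactly (a standard formulation, not quoted from the authors). For a model with maximized likelihood \hat{L}, p free parameters, and sample size n,

BIC = -2 \ln \hat{L} + p \ln n,

and the relative entropy of a K-class model, computed from the estimated posterior class-membership probabilities \hat{p}_{ik} for individual i and class k, is

E_K = 1 - \frac{\sum_{i=1}^{n} \sum_{k=1}^{K} (-\hat{p}_{ik} \ln \hat{p}_{ik})}{n \ln K}.

Lower BIC values and entropy values closer to one are preferred, consistent with how these criteria are applied below.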

Once the number of classes has been selected, the final LCA model yields class-specific item response category probabilities and overall class proportions as well as estimated profile class membership probabilities for each individual. In the case of the CPS teacher survey, each CPS use item had four response categories. The class-specific item probabilities can be used to understand the character of the classes. The class proportions represent estimates of the profile class prevalence in the population from which the sample was drawn. The estimated profile class membership probabilities can be used to assign each case to his/her most likely latent class profile.
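A minimal sketch of the modal assignment step is shown below, using invented posterior class-membership probabilities rather than the study’s Mplus output; each respondent is placed in the class with the highest estimated probability, and class proportions are estimated from the same probabilities.

```python
# Modal class assignment from posterior class-membership probabilities
# (hypothetical values for three respondents and K = 4 classes).
import numpy as np

# posterior[i, k] = estimated probability that respondent i belongs to class k
posterior = np.array([
    [0.05, 0.80, 0.10, 0.05],
    [0.60, 0.20, 0.10, 0.10],
    [0.02, 0.03, 0.05, 0.90],
])

modal_class = posterior.argmax(axis=1)      # most likely class for each respondent
class_proportions = posterior.mean(axis=0)  # estimated class prevalences

print(modal_class)        # -> [1 0 3]
print(class_proportions)
```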

For this analysis, a post-hoc investigation of correlates of CPS use latent class membership was conducted based on modal class assignment. This analysis is somewhat less conservative because it does not account for the uncertainty of class membership and should be treated as a more descriptive and exploratory technique. However, with estimated classification precision as high as we found for the final model, there is likely to be little difference in the inferences regarding the possible correlates and there is an ease in describing and interpreting the relationships between the use profiles and covariates of interest that is not present when including such correlates simultaneously within the LCA framework.

Analysis of relationships between cluster membership and teacher and classroom characteristics, goals, perceived effects, and professional development experiences

After selecting the number of classes for our dataset, we made a modal class assignment for each case, assigning each individual to the CPS use latent class for which he or she had the highest posterior class probability. The distributions of teacher characteristics, frequency of CPS use, and teacher perceptions were compared across the four latent classes. Using chi-square analyses, we examined whether there were significant associations between CPS use latent class membership and years teaching; the frequency of use of other kinds of computer technology in teaching; frequency of CPS use; level and subjects taught; experience with using response system technology; teaching philosophy (more traditional versus constructivist); the amount of professional development received on instructional strategies to use in conjunction with the CPS; and perceptions of CPS effects. Perception items were averaged according to the three dimensions suggested by the EFA, and the benefit items were averaged into a single score. We also conducted post-hoc analyses to examine whether differences between responses of pairs of profiles of response system users were significant.
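The sketch below illustrates the form of these chi-square tests of association, using a hypothetical cross-tabulation of modal class assignment against a categorical teacher covariate (for example, amount of instructional-strategy training). The counts are invented for illustration, and the test is implemented with SciPy rather than the SPSS procedures used in the study.

```python
# Chi-square test of association between latent class membership (rows)
# and a categorical covariate (columns); counts are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [20, 25, 12,  6],   # infrequent users
    [30, 45, 40, 22],   # teaching self-evaluators
    [35, 55, 50, 33],   # broad but infrequent users
    [15, 35, 40, 35],   # broad and frequent users
])

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p_value:.3f}")
```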

All latent variable analyses, i.e., all factor and latent class analyses, were carried out with Mplus version 3.12 (Muthen & Muthen, 2004). All descriptive analyses were conducted with SPSS version 12.0.

Results

We report the results of three different kinds of analyses. First, we report the results of the categorical exploratory factor analysis for each of the scales and subscales generated for our study: teachers’ goals, teachers’ practices, and teachers’ perceptions of the effects of using response systems. We used these results to inform the second set of analyses, the identification of latent class profiles, or clusters, of teachers on the basis of their responses to our scales. We describe these profiles and their characteristics in the second section. In the third set of analyses, we examined the relationship between profile membership and background characteristics and reported outcomes.

Teachers’ goals for use

The categorical exploratory factor analysis suggested two fundamental types of goals for using student response systems (Table 2). The first goal may be described as a goal construct focused on improving learning and instruction, and represents agreement with such questionnaire items as “To promote student learning” or “To stimulate class discussion about an idea or concept.” The second construct is a goal construct that is related to assessing learning and improving teaching efficiency. This construct represents agreement with questionnaire items such as “To assess student learning (where the assessment counts toward grades)” and “To save time required for scoring formal or informal assessment.” Both of these goals are similar to ones researchers report teachers adopt in higher education settings and are consistent as well with reported outcomes in research on the use of student response systems in higher education settings.

Although particular teachers’ goals did vary as expected, the sample means and standard deviations for each of the items given in Table 2 indicate that teachers in the sample as a whole tended to value both instructional improvement and assessment goals equally when using student response systems. Increasing the effectiveness of instruction, as well as improving assessment and teaching efficiency, were rated as important goals by teachers. Teachers were somewhat less likely to report that they used response systems to help differentiate instruction; this result is not surprising, however, given that other applications of technology may be better designed to support this goal.

Table 2 above also displays the standardized item factor loadings; these can be understood as the correlation between each factor and the categorical response item. The table also shows the overall item reliabilities. These reliabilities are essentially the proportions of variance in each item explained by the latent factors. These values were high, ranging from .60 to .87, suggesting that the items in the Goals subscales were reasonably precise measures of the underlying constructs.
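These item reliabilities can be read as communalities. With correlated (promax-rotated) factors, the proportion of variance in an item explained by the factors is the quadratic form of its standardized loadings with the factor correlation matrix. The sketch below computes this quantity for hypothetical loadings and factor correlations, not the values in Table 2.

```python
# Communalities (proportion of item variance explained by the factors)
# under an oblique rotation; loadings and factor correlations are
# hypothetical illustration values.
import numpy as np

loadings = np.array([   # items x factors, standardized loadings
    [0.80, 0.05],
    [0.10, 0.75],
    [0.45, 0.40],
])
phi = np.array([        # factor correlation matrix from the promax rotation
    [1.0, 0.4],
    [0.4, 1.0],
])

# h_i^2 = lambda_i' * Phi * lambda_i for each item i
communalities = np.einsum('if,fg,ig->i', loadings, phi, loadings)
print(communalities.round(2))
```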

Teachers’ pedagogical practices

The categorical exploratory factor analysis suggested five different constellations of teaching practices that were associated with one another (Table 3). A first factor relates to assigning tasks to gauge students’ understanding of subject matter content. The items in this construct include “Ask students to use the system to answer a multiple-choice question about the subject-matter content you are teaching” (1b) and “Use the CPS for review of content (e.g., as a ‘quiz game’ or to check understanding or recall of previously covered material)” (1c). A second factor can be described as posing diagnostic questions to elicit student thinking, often before students can be expected to fully understand a concept. These are “take the pulse” kinds of assessments, often conducted on the fly by teachers. This construct represents items such as “Use the ‘Verbal Question’ mode to pose questions that you pose ‘on the fly’ and have students answer with the CPS” (2a) or “Ask students to use the system to answer a multiple-choice question at the beginning of class, before beginning the day’s lesson” (2b). A third factor pertains to the use of displays. Although there were only two items in this factor, the items representing how student responses are displayed to the students seemed to represent a separate construct. A fourth factor relates to the use of discussion of student responses. This construct represents items such as (4a) “Ask students to discuss their answers with a neighbor or peer after registering an initial answer/vote” and (4c) “Ask students to answer/vote again after discussing an answer with a peer.” The fifth construct pertains primarily to teachers’ use of data to adjust instruction. The items under this construct include “Decide to adjust your lesson plan for the next class session on the basis of how students responded to a question” (5a) and “Use the feedback from the CPS to make changes in your instruction during class” (5c). The overall reliabilities of the items were moderately high, ranging from .31 to .77. Two items related to whole-class involvement in conjunction with CPS use (“Ask students to identify themselves to the whole class as answering in a particular way” and “Facilitate a whole-class discussion of students’ ideas after displaying a distribution of student responses”) had low factor loadings on all five constructs, suggesting they were poor items for measuring any of the subscales. Even though the first of these two items met the .30 standardized loading cut-off for one of the factors, it did so just barely, and it had a near-equal loading of .29 on one of the other factors. For this item, teachers reported “hardly ever” asking students to identify their responses to the whole class (M = 1.22, SD = .54, where 1 = hardly ever/never and 2 = sometimes). In contrast, for the whole-class discussion item, teachers reported “sometimes” using whole-class discussion as a pedagogical strategy (M = 2.23, SD = .656, where 2 = sometimes and 3 = most of the time).

Table 4 shows how frequently teachers in the sample used the other teaching strategies when they employed student response systems in class. All of the specific strategies we queried were employed by K-12 teachers, but the frequency with which teachers in the sample used particular strategies varied widely. In general, teachers were more likely to assign tasks designed to help them gauge students’ understanding of subject matter and to use feedback to adjust their instruction than they were to pose diagnostic questions or to engage students in discussing and reflecting on their answers to questions.

Perceived effects of response system use

Like the goal and pedagogical strategy constructs, the constructs we derived from the factor analysis for perceived effects on classroom processes are consistent with reported effects of student response systems in higher education, as we had initially expected (Table 5).

Based on the promax rotation results of a three-factor model, we generated three constructs to explain the factor loading patterns. Items loading on the first construct seem to represent effects related to student learning and the monitoring of student progress. The items in this construct include “The CPS helps me tell if the students understand a concept” and “I have better-quality information about students’ understanding through the use of the CPS.” A second construct comprises items related to the motivational affordances of the classroom environment. The items under this construct include “Students are more actively engaged in a CPS class than in others” and “Students are more willing to think hard about an issue when questions are posed with the CPS.” Items loading on the final construct are the negatively worded (reverse-keyed) items. This construct represents items such as “Class dynamics are not affected by the use of the CPS” and “There is no advantage in using the CPS to help students build on their previous knowledge.” Table 6 below shows the mean ratings teachers assigned to the first two perceived effects constructs.

Profiles of use

We sought to identify distinct classroom profiles of use from teachers’ responses to the questions about the pedagogical strategies they employed when using the CPS. Our purpose in identifying profiles was first to determine whether there were distinct patterns of use that were evident from the data and to then examine whether particular use patterns correlated with other factors such as perceived effectiveness, teachers’ instructional philosophy, and classroom characteristics.

To select the number of classes for the final LCA model, a series of models with increasing numbers of classes was fit using all 21 of the CPS use items, each with four ordered response categories. Table 7 summarizes the fit criteria for the one-class to five-class models; the six-class model was not identified. The BIC was lowest for the four-class model, indicating that this latent class model was the best fit for the data. The entropy continued to improve with increasing class number. The LMR-LRT favored the three-class model over the four-class model and the four-class model over the five-class model. Examining the class-specific item probabilities, we chose the four-class model as the final LCA model because the further division, compared to the three-class model, appeared to delineate substantively relevant latent use profiles.

Table 8 below shows how frequently the members of different classes of use or profiles engaged in particular pedagogical practices. The rows are the factors we identified for pedagogical practices; the specific item response categories represent averages for individual items under each factor. In the columns, we list the classes and for each response category, the probability of people in that class giving that particular response.

Below we describe in greater detail in narrative form the different profiles of use and typical uses of response systems among teachers with each profile.

Class 1: infrequent user

Teachers with this profile of use tended to use the CPS rarely. When they did use the system, they tended not to use the full range of capabilities of the system or not to use a variety of pedagogical strategies in conjunction with use of the system. They rarely used data from the system to adjust their instruction. There were 63 teachers (12.7%) in the sample whose responses most closely resembled this profile.

Class 2: teaching self-evaluator

Teachers with this profile of use tended to use the CPS often, and they used the system primarily to gain feedback on the effectiveness of their own teaching. They usually used the system for summative assessment purposes, and less frequently for formative assessment purposes. They rarely involved students in peer discussion, and only sometimes used the CPS to prompt whole-class discussions. They occasionally used data from the system to adjust their instruction. There were 137 teachers (27.5%) in the sample whose responses were most consistent with this profile.

Class 3: broad but infrequent user

Teachers with this profile of use tended to use the CPS somewhat less frequently than self-evaluators, but they used the system for a wider range of purposes. When they used the system, they used it for formative assessment to adjust instruction and summative assessment to make judgments about what their students had learned. They sometimes involved students in peer discussion, and sometimes they used the CPS to prompt whole-class discussions. They occasionally used data from the system to adjust their instruction. There were 173 teachers (34.7%) in the sample whose responses most closely resembled this profile.

Class 4: broad and frequent user

Teachers with this profile of use tended to use the CPS frequently, and they used the system for a wide range of purposes. When they used the system, they used it for summative purposes and for formative purposes. They sometimes or often involved students in peer discussion, and they often used the CPS to prompt whole-class discussions. They sometimes used data from the system to adjust their instruction. There were 125 teachers (25.1%) in the sample whose responses were most consistent with this profile.

Significance of class membership

After selecting the four-class solution, we made a modal class assignment for each case, assigning each individual to the CPS use latent class for which he or she had the highest posterior class probability. The distributions of teacher characteristics, frequency of CPS use, and teacher perceptions of effects were compared across the four latent classes. There were significant associations between CPS use latent class membership and the amount of professional development received on instructional strategies to use in conjunction with the CPS; pedagogical beliefs or philosophy with respect to the role of students in class discussions; the frequency of use of other kinds of computer technology in teaching; frequency of CPS use; and perceptions of CPS effects. Perception items were averaged according to the three dimensions suggested by the exploratory factor analysis, and the benefit items were averaged into a single score. A detailed summary of these findings is given below.

Relationships between class membership and teacher characteristics

We found a significant association (χ²(6) = 14.74, p = .02) between frequency of use of other kinds of computer technology in teaching and CPS use latent class membership. Those using other computer technologies 1-2 times per month or less were most likely to be in the “infrequent user” class and least likely to be in the “broad and frequent user” class, suggesting a possible relationship between comfort with using technology in the classroom and overall CPS use. At the same time, there was no significant association between years of teaching and CPS use latent class membership (F(3, 494) = 1.28; p = .28). Nor was there a significant association between the number of semesters the CPS was used and CPS use latent class membership (F(3, 494) = 1.05; p = .37). Consistent with what we had anticipated at the outset of the study, we also found that teachers’ pedagogical philosophy was associated with their class membership. Teachers who adopted a view that students ought to have significant roles to play in directing classroom discussion were more likely to be members of the class of broad and frequent users (F(3, 479) = 11.45; p

Relationships between class membership and classroom characteristics

We had expected that subject matter and grade level might influence teachers’ decisions about how to use the CPS, since scholar-practitioners in higher education have often reported needing to adapt models developed in physics for students in other disciplines and at different levels of expertise. We were thus surprised that we did not find a significant relationship between subject matter and teaching practice or between level taught (elementary, middle, or high school) and class membership. There was no significant association between the subjects for which the CPS was used and CPS use latent class membership, and no significant association between school level taught and CPS use latent class membership (χ²(6) = 6.90, p = .33).

Relationships between class membership and professional development experiences

There was no significant association between the amount of training received on the technical aspects of using the CPS and CPS use latent class membership (χ²(9) = 12.15; p = .21). However, we found a significant association (χ²(9) = 23.44; p = .005) between the amount of professional development received on instructional strategies to use in conjunction with the CPS and CPS use latent class membership. Increased training corresponded to an increased likelihood of membership in both of the broad-use classes and a notably decreased likelihood of membership in the “infrequent user” class.

Relationships between class membership and perceived effects

For each of the perceived effects and benefits we analyzed, we found a relationship between class membership and teachers’ perceptions. First, we found a significant association between perceptions of feedback on student learning and CPS use latent class membership (F (3, 461) = 35.87; p

Second, we found a significant association between perceptions of classroom environment and student learning and CPS use latent class membership (F (3, 461) = 10.98; p

Post-hoc comparisons using the Bonferroni adjustment found that, overall, the means for the broad and frequent users were significantly higher than the means of the other user profiles, with the exception of the mean self-reported improvement in classroom environment for teaching self-evaluators (p

Table 9 summarizes the significant and non-significant relationships between class profiles and teacher characteristics, classroom characteristics, professional development, and perceived effects.

Discussion

There was a remarkable similarity between teachers’ reported goals for using student response systems in the K-12 settings of survey participants and goals that researchers report higher education instructors adopt. K-12 teachers in our sample used student response systems for both assessment and instruction. These findings are consistent with research on student response systems in higher education, which emphasizes effects of improved assessment data for teachers and improved engagement and instructional experiences for students. These findings are also consistent with research conducted by other researchers in formative assessment, who emphasize that at its best, good formative assessment becomes seamlessly integrated with good instruction (National Research Council, 2001).

Our survey data did reveal that there was a significant difference among CPS users in their goals for using the technology. The exploratory factor analysis found a relatively low correlation (r = .13) between items intended to tap formative assessment uses of the CPS and items intended to measure more summative assessment uses of the system. The factor analysis also indicated stronger correlations (r = .41) between items intended to tap formative uses of the CPS and items intended to measure whether teachers used the CPS in conjunction with peer discussion, a use associated more with instructional uses of the system.

The factor structure of survey responses regarding pedagogical uses was a particularly promising result from our study. Although the research in higher education settings has generated several models and descriptions of what teaching with student response systems looks like in those settings (e.g., Judson & Sawada, 2002), similar descriptions of K-12 use have not been developed. Furthermore, systematic measures of practice, whether self-report or observational, have not been developed for either setting. The fact that the models of teaching with response systems developed for higher education have been adopted by K-12 teachers is consistent with what we expected, but it is also striking, in that the widespread use of response systems in elementary and secondary education settings is a relatively new phenomenon. That such robust practices can emerge quickly is promising and suggests the scalability of practices that can improve student learning opportunities.

The profile of users who used the CPS for formative assessment and for engaging students in discussion is of particular interest for future studies, because of the significant correlations between membership in this class and several other variables in our analysis. Frequent, broad users of the CPS were much more likely to perceive the CPS as conferring a range of benefits on themselves and on students. Ideally, professional development would focus on helping teachers adopt the practices characteristic of this class of users, especially the use of peer and whole-group discussion. The fact that hours of professional development related to teaching strategies were correlated with profile of use suggests that teachers’ opportunities to learn may in fact influence their adoption of strategies judged effective by researchers in higher education, but an impact study would be necessary to test this hypothesis.

Critics of educational technologies often report that teachers make infrequent use of computers to support instruction and that when they do use computers, they do so in ways that tend to support traditional instruction (Cuban, 2001). Our findings suggest that these systems may be relatively easy to learn, since technical training did not explain differences in uses of the system. They also suggest, though, that when teachers participate in professional development focused on how to teach in new ways with the technology, they do adopt practices that do much more than support traditional instruction. Increasingly, the importance of preparing teachers in this way specifically for curriculum integration is being recognized in educational technology research (Adelman et al., 2002; Becker, 1999; Kanaya, Light, & Culp, 2005; National Center for Education Statistics, 2000). The fact that teachers in our sample who reported having received professional development in how to integrate response systems into their teaching made more use of the system in their instruction is consistent with this finding.

Finally, we found a significant relationship between class membership and teaching philosophy, with more constructivist teachers adopting the CPS more frequently and in conjunction with a broader range of pedagogical strategies with students. This finding, too, is consistent with earlier research on educational technology, which has found that teachers’ instructional philosophies, as well as their beliefs about students’ capabilities and about the role of technology, play a significant role in shaping how they use computers in the classroom (Becker & Anderson, 1998a; Jaillet, 2004; Means, Penuel, & Padilla, 2001; Windschitl & Sahl, 2002). Although teachers may be slow to change beliefs that are inconsistent with the idea that students’ ideas and thinking can and should help guide classroom discussion, providing teachers with examples of what happens when students do engage in productive discussions of ideas and encouraging them to try it in their classrooms may be an effective strategy (Penuel & Yarnall, 2005).

Limitations of the study

Our study represents a first attempt by researchers to investigate teaching with student response systems in K-12 settings. Most research has been conducted on how these systems are used in higher education (see Roschelle et al., 2004, for a review); our research was intended in part to explore the similarities and differences between K-12 and higher education uses of the technology. Although our sample was large, we cannot say whether it is representative of the population of teachers using these systems. Teachers volunteered to complete the questionnaire, and we had no means for selecting a sample systematically from a list of eInstruction customers to survey. Therefore, we cannot make claims about the population prevalence of particular teaching practices used in conjunction with response systems. We have, however, attempted to show, for the teachers who did participate in the survey, different profiles of use and how those profiles relate to perceived effects of using the CPS.

This study also does not allow us to determine what effects CPS use has on teaching and learning, or whether it enhanced learning in any of the classrooms from which teachers were surveyed. In our study, there was no independent measure of teaching practice or student learning; nor did we attempt to design an impact study with random assignment. Instead, we relied on self-report and a correlational approach to analysis in this study. Both independent measures and a more rigorous design would be necessary to make claims about the impact of using the CPS on student learning and engagement.

Finally, the findings from this study are likely to generalize to users of systems with similar functionality to the CPS, but may or may not generalize to response systems with different kinds of functionality. We chose the CPS as a system to study because it is broadly used and representative of a class of student response systems, in terms of its design and functionality. However, there are other types of systems that combine response system functionality with the ability to engage students in participatory simulations and modeling activities (Kaput & Hegedus, 2002; Stroup, 2002; Wilensky & Stroup, 2000). These systems enable different forms of student participation, and they require different forms of teaching from what have been documented in the literature on the use of student response systems in higher education. The teaching strategies that might be used with such systems have not been widely implemented to date, however, and are not as susceptible to measurement as were the practices we measured as part of our survey.

Conclusions and directions for future studies

From the survey study, we can conclude that many of the teaching practices that researchers report instructors in higher education use in conjunction with response system technology are also used at the K-12 level among our sample of teachers. As in higher education, teachers use response system technology for both instructional and assessment purposes. Many of them also use it to stimulate peer and classroom discussion. As in higher education, there is a sense that teachers believe both peer and classroom discussion are important to making the system more effective in the classroom. A survey study alone, however, cannot determine whether there exists a causal relationship between particular practices and outcomes. An experimental study with more robust measures of classroom practices and objective measures of student learning would be necessary to draw conclusions about impact.

The survey results can help to inform the design of such an impact study: we can generate some more specific hypotheses about teaching with response systems that could be investigated. First, we would hypothesize that students in classrooms where teachers use the systems frequently and in conjunction with a broad range of teaching strategies will benefit more than students in classrooms where response systems are present but used less frequently and only for summative purposes. We would hypothesize that the effects would be three-fold: improved feedback to students, an improved learning environment (facilitated by shared knowledge of how well students understand material), and enhanced learning and engagement. Finally, we would also hypothesize that teachers need to receive professional development in how to teach with response systems in order to adopt the systems to a level involving broad, frequent use with students.

A number of researchers are now planning or beginning studies that will investigate these and other hypotheses about the impacts of student response systems in K-12 settings. Researchers at the Physics Education Research Group at the University of Massachusetts, at the Ohio State University, and at SRI International are among those involved in taking research on student response systems to the next level. These studies are, however, just now underway, and findings are not yet available. We view such research as critical for advancing our understanding of best practice and for supporting scaling efforts, because policymakers’ demand for research-based evidence of effectiveness is increasingly a pre-requisite for adoption of new technologies in K-12 settings. Because each of the investigators of these studies takes as a premise that professional development must focus on teaching with response system technologies, we are especially hopeful that there will be positive evidence of effectiveness where teachers engage in broad and frequent use of these systems with students in their classes.

Acknowledgments This material is based in part on work supported by the National Science Foundation under Grant Number REC-0337793. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. We thank Timothy Urdan of Santa Clara University and Louis Abrahamson of the Better Education Foundation for their assistance with designing the survey, Willow Sussex for her management of survey data collection, Angela Haydel DeBarger for her help with an earlier technical report presenting preliminary analyses of these data, and Jeremy Roschelle for his guidance on the research project and helpful comments on the paper.



