
Improving Mathematics Skills of High School Students

December 29, 2007

By Robert Springer, David Pugalee, and Bob Algozzine

Abstract: In U.S. schools, students must pass statewide competency tests to graduate from high school. In this article, the authors summarize the development and testing of a program implemented to improve the skills of students failing to “make the grade” on these high-stakes tests. District personnel randomly assigned twenty-eight students who previously failed the math test to participate in an experimental (Arizona Instrument to Measure Standards [AIMS]) or control math class. The AIMS group used a computerized tool to generate multiple-choice problems for students to practice the content of the state’s competency test. Eight AIMS students (57 percent) and two control students (14 percent) passed the retest. The outcomes offer promise for schools looking for evidence-based solutions to problems related to increasing numbers of students experiencing difficulties with high-stakes assessments.

Keywords: competency tests, evidence-based solutions, high-stakes testing

Current assessment emphases in education and the drive for accountability, particularly in mathematics, necessitate that schools carefully consider how such measures reflect the mathematics skills and abilities of students. Twenty-six states have implemented or plan to implement mandatory assessment programs that will require students to demonstrate competency in mathematics and language arts to graduate from high school (Milou and Bohlin 2003). Another five states, although not requiring an exit exam, allow individual districts the option to implement such requirements. Twenty-two states that do not have exit exams require specific mathematics tests as part of an accountability plan.

The Center on Education Policy (Chudowsky et al. 2002) conducted an initial investigation to determine some baseline data for high school exit exams. The percentage of students who passed such tests on the first try varied considerably, from 31 percent in Arizona to 91 percent in Georgia. More than half of the students in Alaska and California failed the test on the first try. In general, students have multiple opportunities to retake the assessments, with schools providing or requiring remediation. A view of elementary and middle grades mathematics performance raises concerns about the performance levels of generations of future students who will be required to successfully complete high school exit assessments. All states have developed or selected assessments that allow them to identify achievement levels for specific subjects, as determined by the states’ standards and definitions. For the 2000-2001 academic year, the percentage of U.S. students at or above proficient levels in elementary mathematics ranged from twenty-three to ninety-one and for middle school mathematics ranged from eleven to ninety-three (U.S. Department of Education [DOE] 2004).

Arizona is one example of how states are struggling to address unacceptable pass rates on the exit exams. Beginning with the graduating class of 2006, all high school students in Arizona must pass the Arizona Instrument to Measure Standards (AIMS) math test to graduate. They must also pass AIMS tests on reading and writing. On April 22, 2004, sophomores in the class of 2006 took the test for the first of five opportunities that they would have to pass it. Statewide, only 39 percent of tenth graders passed. Unless the mathematics skills required to pass the AIMS test significantly improve, the state will face a monumental crisis in the next few years (e.g., at least half of the students now in Arizona high schools will not graduate).

The No Child Left Behind Act (NCLB; 2001) set forth the value of scientifically based research as the standard for what makes a difference in America’s classrooms (cf. Paige 2003, 1) and ushered the search for validated practices into the minds of administrators and policymakers. Additionally, with the continuing press for achievement and educational outcomes, educators are searching for locally validated evidence-based instructional programs to improve achievement for students participating in mandatory, high-stakes testing, and school-based evaluation research has become a popular base for “how-to” decision making in schools (Horowitz 2006; Ysseldyke et al. 2004). Accelerated Math (Renaissance Learning Systems 1998a; 1998b) is an instructional tool that can be used to help teachers match students’ skill levels to instructional targets, generate appropriate practice, provide corrective feedback, and monitor progress. In this study, we evaluate the effects of using this automated instructional system on the mathematics achievement of students at risk of continued failure on their state’s high school mathematics proficiency exam. We reasoned that this information would provide an informed, evidence-based grounding for decision makers interested in implementing the program in their schools.

Method

To address the problem of large-scale failure on the state test, district administrators implemented an experimental program in two high schools. Students participating in the program used computer-aided practice and tutoring to improve mathematics skills; the control group received supplementary instruction regularly provided to students who fail the statewide competency test. The model we evaluated is commonly used in school districts across the country.

Participants

A group of twenty-eight students, all of whom had failed the mathematics competency test, participated in the study. School personnel randomly assigned fourteen students to the experimental group and included fourteen students with matched pretest scores (M = 471.36, SD = 12.99) as a control group (who had no knowledge that they had been selected until two days before the exam). The gender distribution was similar (χ² = 3.59, df = 1, p > 0.05) across groups and all of the students were expected to complete school prior to adoption of the must-pass graduation requirement.

All participants were volunteers. As a motivator to encourage participation, school personnel offered them a senior privilege (all were juniors) of going off campus once a week to eat lunch. We joined students with the same test scores into pairs. We randomly placed one of each pair into the experimental (i.e., tutored) group and the control (nontutored) group. If a student designated as being in the experimental group declined to join the study, we removed the student and his or her control partner from the experiment and selected another pair. To provide backup students for the control group, we chose alternates who had the same test score as the student assigned to the control group. Originally we assigned twenty pairs of students to the experiment. Over the course of the year, six students in the experimental group left the school for various reasons. Each time a student in the experimental group left the school, we removed the student and his or her control partner, plus all alternates, from the study. Only one control student left the school and we substituted the first alternate.
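
To make the matching procedure concrete, here is a minimal Python sketch of the matched-pair random assignment described above. The function and names are our own hypothetical illustration; school personnel, not software, carried out the actual assignment.

    # A minimal sketch of matched-pair random assignment, as described above.
    # Hypothetical names; the district assigned students by hand.
    import random

    def assign_matched_pairs(pairs):
        """pairs: list of (student_a, student_b) tuples with equal pretest scores.
        One member of each pair goes at random to the experimental (tutored)
        group; the other goes to the control (nontutored) group."""
        experimental, control = [], []
        for a, b in pairs:
            tutored, nontutored = (a, b) if random.random() < 0.5 else (b, a)
            experimental.append(tutored)
            control.append(nontutored)
        return experimental, control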

Procedure

Students in the experimental program received supplementary instruction and tutoring using the Accelerated Math AIMS program for an entire school year, fifty-five minutes per day and five days per week. Students in the class demonstrated mastery of the entire set of concepts included in the statewide test two weeks before retaking the AIMS test. All students participated in a systematic review of the targeted content for the two-week period prior to taking the test.

Intervention. The AIMS class used an approach Renaissance Learning developed and implemented in over seventy thousand schools nationwide to improve reading and mathematics skills. The program uses widely accepted content libraries of math concepts, called objectives, from first grade through calculus. In a typical class using Accelerated Math, the instructor selects about 150 concepts aligned to state standards. Although the content used in this evaluation was guided by local objectives, the program has been used in similar ways in other states. It “is flexible enough to allow students the opportunity to develop more advanced mathematics skills if their pace and understanding move ahead of others” and to help “teachers assign instruction that is matched to the skill development of the learner and monitors student progress toward mastery of math objectives” (Ysseldyke et al. 2004, 299). As with other commercially available educational resources, consultants work closely with school personnel to implement the broadly designed models to meet local expectations and needs.

Each student works on about five concepts at a time. The program prints out a Practice lesson consisting of six problems for each of the concepts being tested; the resulting thirty four-option multiple-choice problems are presented in random order. The student marks the answer in pencil on an optical sense card. When a practice assignment is completed, the student scans the card using a small optical scanner connected to a computer.

The program grades the student’s work, prints out a report showing any problems missed, then prints out the next practice assignment. If the student completes 80 percent or more of the problems for a concept correctly, the program begins assignments using the next concept on the list of those selected by the instructor. If the student’s performance is below 80 percent, six more problems for the concept are printed. Because many different problems for each concept exist and the problems are algorithm based, the student never receives exactly the same problem on a practice assignment. As part of the instructional process in both groups, teachers encouraged students to figure out why they missed the problem. If they could not do this, they were free to ask for help from the teacher or a tutor. If the student failed to master a problem set for a concept three times, the program provided information indicating that additional instruction was needed. In the experimental group, this typically involved the instructor or a tutor printing out a problem set and working with the student to teach the concept.

When students passed approximately five concepts, they were given a mastery test. Unlike practice tests, they were not allowed to receive any help on the mastery test. If they completed 80 percent or more of the problems for a concept correctly, they were said to have mastered the concept. Two weeks after a student mastered a concept, he or she received a review problem. Over time, students completed four such review problems for each concept they mastered. If they were unable to complete three out of four of the review problems correctly, the program provided additional information indicating a need for intervention. After further instruction, the students were retested. The process of actively monitoring and supporting instruction was a critical factor in the program and the computer-based program greatly facilitated it.
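
For readers who want the decision rules above in concrete form, the following Python sketch models the 80 percent advancement criterion and the reteaching trigger. The names and structure are our own illustration, not the proprietary Accelerated Math code; the mastery-test and review rules follow the same pattern.

    # A minimal sketch of the practice-grading rules described above.
    # Hypothetical names; Accelerated Math's internal logic is proprietary.
    MASTERY_THRESHOLD = 0.80   # 80 percent correct advances the student
    MAX_FAILED_SETS = 3        # three failed sets trigger reteaching

    def grade_practice_set(answers, key):
        """Return the proportion of practice problems answered correctly."""
        return sum(a == k for a, k in zip(answers, key)) / len(key)

    def next_step(proportion_correct, failed_sets_so_far):
        """Decide what the program does after grading one concept's practice set."""
        if proportion_correct >= MASTERY_THRESHOLD:
            return "advance to the next concept on the instructor's list"
        if failed_sets_so_far + 1 >= MAX_FAILED_SETS:
            return "flag the concept for additional instruction by teacher or tutor"
        return "print six new algorithm-generated problems for the same concept"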

AIMS library. To use Accelerated Math to improve AIMS math skills, a library that tests the concepts for every standard required for the AIMS test must be used. When the AIMS class project began, the math standards for high school students included six core standards (see table 1) and related concepts. An examination of all of the libraries commercially available from Renaissance Learning indicated that problem sets for only about forty concepts of the seventy-six needed were in the system. Renaissance Learning worked with a team from participating high schools to create the other thirty-six problem sets that were needed for the class. Once they developed these problem sets, they assembled the entire collection of seventy-six needed for the AIMS class on a single CD for use in this study. Although the library used in this study was specifically designed for Arizona, the program and process are readily adaptable for use in other states and with other groups of students (Ysseldyke and Tardrew 2003; Ysseldyke et al. 2004).

Assistance and tutoring. The Accelerated Math program allows students to learn at their own pace. In addition, the program focuses on each student’s particular math deficiencies, and every student receives individual attention on the basis of continuous monitoring of his or her performance. This creates a need for individualized instruction; a single instructor can be quickly overwhelmed dealing with a class of twenty students, each having difficulties with different concepts. Our experience indicates that for optimal use of the program, one instructor or tutor is needed for approximately every six students. A minimum of two adult tutors with good math skills was provided for every session of the AIMS math skills improvement class.

Incentive reward for passing the AIMS test. Two days before retaking the AIMS test, instructors told participating students in both groups that they would receive fifty dollars for passing it. This was done to create an additional incentive for students who did not need to pass the test to graduate.

Design and Data Analysis

A pretest-posttest matched control group design was the base for evaluating the effects of the experimental program. We used a two-factor analysis of variance with repeated measures to compare scaled scores for participating students. We calculated individual follow-up t-tests and effect sizes as appropriate (cf. Cohen 1969, 1988; Glass 1976; Glass, McGaw, and Smith 1981; Thompson 2006). We also compared mastery test performance using a chi-square comparison across groups.

Results

We show means, standard deviations, and the analysis of variance summary for AIMS scaled scores in table 2; significant occasion and group-by-occasion interaction (see figure 1) effects are indicated. Follow-up analyses indicated outcomes favoring the treatment group. Significant differences were found between groups (t = -2.66, df = 26, p < 0.05). The difference in pass or fail rates across groups was also statistically significant (χ² = 5.60, df = 1, p < 0.05). Of the fourteen students who attended the AIMS class, eight (57 percent) passed the test, whereas only two of the fourteen (14 percent) control students passed the test.
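
The categorical comparison can be checked directly from the counts reported above. This Python fragment uses scipy as an illustrative tooling choice of ours (the authors do not name their statistical software) and reproduces the uncorrected Pearson chi-square for the 2 × 2 pass/fail table.

    # Reproducing the reported pass/fail comparison from the counts in the text.
    # scipy is our illustrative choice; the authors' software is not named.
    from scipy.stats import chi2_contingency

    #         passed  failed
    table = [[8, 6],   # AIMS (experimental) group, n = 14
             [2, 12]]  # control group, n = 14
    chi2, p, dof, _ = chi2_contingency(table, correction=False)
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}")
    # prints: chi-square = 5.60, df = 1, p = 0.018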

Discussion

Although control of education in the United States is less centralized than in many other nations, the influence of documents from professional organizations, such as the National Council of Teachers of Mathematics, is evident in state educational policies (Ridgway and McCusker 2003). Such a major impact on curriculum at the state level allows for the development of computer-based assessments that exemplify these goals and can be easily aligned to state mathematics curricular frameworks. Continued feedback mechanisms and opportunities to define performance levels provide an environment in which desirable changes in performance can be promoted. Computerized programs allow for differentiated instruction that assists educators in meeting the individual learning needs of diverse classrooms (cf. Ysseldyke and Tardrew 2003).

In this study, we examined the effects of using a specialized remedial program grounded in evidence-based effective instructional practices (that is, increased practice of critical skills with continuous monitoring of progress and constant adjustment based on performance). Overall, students who participated in the experimental treatment demonstrated greater gains in overall performance in mathematics achievement as measured by their state’s critical competency test required for high school graduation.

Accelerated Math provides an instructional management system that allows students to spend more time reviewing concepts and skills to enhance their performance, resulting in positive growth in measured mathematics competence. Students who use such systems consistently demonstrate greater mathematics gains than control groups (cf. Ysseldyke et al. 2003). Such programs increase the amount of time spent improving students’ mastery of computational and foundational mathematics skills and objectives.

The requirement to pass the AIMS math test is likely to create a crisis by the end of the 2006 school year unless something is done to increase the percentage of students passing the test. The AIMS math skills improvement course significantly increases the number of students who pass the test after initially failing it. It is likely that at least half of the students failing the test would pass a retest after taking the AIMS math course. We strongly recommend that high schools and secondary education programs implement this approach. This study demonstrates the positive impact of the program on students’ mathematics performance. These results are consistent with other studies involving the use of Accelerated Math. Ysseldyke et al. (2003) report significant differences in student performance on the Northwest Achievement Levels Test for the fourth- and fifth-grade students who used Accelerated Math in conjunction with the classroom mathematics program (Everyday Math) compared with students using the same mathematics curriculum without access to Accelerated Math (t = 3.32, df = 196, p < 0.05).

The intervention we described in this study underscores the positive impact that such programs can have on student achievement. High school exit exams, in general, have been associated with positive effects on curriculum and instruction as well as students’ motivation and achievement (Gayler et al. 2004). High school exit exams appear to encourage school districts to cover more content included in state standards frameworks, to promote a stronger alignment of curriculum and instruction with standards, and to provide remedial and other special courses for at-risk students. This study demonstrates how programs such as Accelerated Math can be linked in positive ways to the instructional and curricular emphasis of high school exit exams so that students’ mathematics performance significantly improves.

Using the Accelerated Math program, the teacher is still responsible for instruction, and students complete assignments using pencil and paper. The differences achieved with technology are more a function of personalized practice and progress monitoring than presentations of content. Research shows, and virtually all educators agree, that academic improvement requires practice to reinforce skills being learned and continuous monitoring of progress to ensure appropriate areas are targeted for instruction. Unfortunately, the roles of practice and progress monitoring are often overlooked and misunderstood components of effective instruction. Setting aside time for student practice is not enough. Similarly, checking performance several times a year provides insufficient evidence for improving skills requiring more frequent attention. Practice must be personalized to each student’s individual ability level and immediately followed by informed feedback to ensure a high rate of engagement and success. It must also provide progress-monitoring evidence for teachers and other professionals to improve instruction and outcomes.

The program we evaluated in this study was grounded in these principles, and administrators in the district considered the costs nominal. For example, the hardware and software to create the Accelerated Math tool served classes of up to thirty students. The system was networked to bring the cost per student down substantially. Although math-knowledgeable adult tutors are preferred to provide ongoing instructional support, if unavailable, peer tutors are a viable alternative. Peer tutoring can improve the amount of time spent on academics and student engagement on tasks and has positively affected both mathematics and reading performance (Dufrene et al. 2005).

Accelerated Math is based on a number of principles, including active and ongoing assessment of skill level and instruction matched to individual performance levels, personalized goal setting, optimal amounts of practice time, and corrective and supportive feedback. These fundamentals are widely recognized as essential to effective instruction (cf. Algozzine, Ysseldyke, and Elliott 1997), and our intention is not to suggest or advertise the program per se as the basis for improved performance. It is clear in the outcomes of this evaluation that the implementation of principles of effective instruction through the use of Accelerated Math was related to important changes in academic outcomes for students at risk of school failure (i.e., missed graduation requirement).
Students who participated in this program demonstrated significant gains in mathematics achievement, which translated into higher levels of overall content mastery necessary for high school completion. Continued implementation appears warranted and subsequent research addressing the importance of key features of the program is justified to add to the growing body of knowledge on evidence-based interventions for students experiencing academic difficulties in the upper grades.

REFERENCES

Algozzine, B., J. Ysseldyke, and J. Elliott. 1997. Strategies and tactics for effective instruction. Longmont, CO: Sopris West.

Chudowsky, N., N. Kober, K. S. Gayler, and M. Hamilton. 2002. State high school exit exams: A baseline report. Washington, DC: Center on Education Policy.

Cohen, J. 1969. Statistical power analysis for the behavioral sciences. New York: Academic.

_____. 1988. Statistical power analysis for the behavioral sciences. 2nd ed. Hillsdale, NJ: Erlbaum.

Dufrene, B. A., G. H. Noell, D. N. Gilbertson, and G. J. Duhon. 2005. Monitoring implementation of reciprocal peer tutoring: Identifying and intervening with students who do not maintain accurate implementation. School Psychology Review 34 (1): 74-86.

Gayler, K., N. Chudowsky, M. Hamilton, N. Kober, and M. Yeager. 2004. State high school exit exams: A maturing reform. Washington, DC: Center on Education Policy.

Glass, G. V. 1976. Primary, secondary, and meta-analysis of research. Educational Researcher 5 (10): 3-8.

Glass, G. V., B. McGaw, and M. L. Smith. 1981. Meta-analysis in social research. Beverly Hills, CA: Sage.

Horowitz, J. 2006. Sustaining successful school reform: An interview with Jordan Horowitz. Curriculum Review 45 (5): 14.

Milou, E., and C. F. Bohlin. 2003. High school exit exams across the nation. News Bulletin (September). Reston, VA: National Council of Teachers of Mathematics.

No Child Left Behind Act (NCLB). 2001. http://www.ed.gov/legislation/ESEA02/107-110.pdf (accessed February 27, 2007).

Paige, R. 2003. Paige marks 18-month anniversary of No Child Left Behind with update to Congress. http://www.ed.gov/news/pressreleases/2003/07/07082003a.html (accessed September 5, 2007).

Renaissance Learning Systems. 1998a. Accelerated Math computer program. Wisconsin Rapids, WI: Renaissance Learning Systems.

_____. 1998b. Math renaissance: Teachers’ handbook. Madison, WI: School Renaissance Institute.

Ridgway, J., and S. McCusker. 2003. Using computers to assess new educational goals. Assessment in Education 10 (3): 309-28.

Thompson, B. 2006. Foundations of behavioral statistics. New York: Guilford.

U.S. Department of Education, Policy and Program Studies Service (DOE). 2004. State education indicators with a focus on Title I: 2000-01. Washington, DC: U.S. Department of Education, Policy and Program Studies Service.

Ysseldyke, J., R. Spicuzza, S. Kosciolek, and C. Boys. 2003. Effects of a learning information system on mathematics achievement and classroom structure. The Journal of Educational Research 96 (3): 163-73.

Ysseldyke, J., and S. P. Tardrew. 2003. Differentiating math instruction: A large-scale study of accelerated math: Final report. Wisconsin Rapids, WI: Renaissance Learning.

Ysseldyke, J., S. P. Tardrew, J. Betts, T. Thill, and E. Hannigan. 2004. Use of an instructional management system to enhance math instruction of gifted and talented students. Journal for the Education of the Gifted 27:293-310.

Robert Springer, PhD, is now an education volunteer with the Retired and Senior Volunteer Program (RSVP) in northern Arizona, SaddleBrooke Community Outreach, Tucson. David Pugalee, PhD, is a professor in the Department of Middle and Secondary Education at the University of North Carolina at Charlotte. Bob Algozzine, PhD, is a professor in the Department of Educational Leadership at the University of North Carolina at Charlotte. Copyright (c) 2007 Heldref Publications
