Paraeducator-Supplemented Instruction in Structural Analysis With Text Reading Practice for Second and Third Graders at Risk for Reading Problems

December 13, 2006

By Vadasy, Patricia F.; Sanders, Elizabeth A.; Peyton, Julia A.

ABSTRACT

Two studies, one quasi-experimental and one a randomized experiment, were designed to evaluate the effectiveness of supplemental instruction in structural analysis and oral reading practice for second- and third-grade students with below-average word reading skills. Individual instruction was provided by trained paraeducators in single- and multiletter phoneme-grapheme correspondences; structural analysis of inflected, affixed, and multisyllable words; exception word reading; and scaffolded oral reading practice. Both studies revealed short-term word-level and fluency effects.

Children’s beginning word reading success and difficulties both strongly reflect their phonological awareness (Adams, 1990; Bradley & Bryant, 1983; Liberman, 1982; Perfetti, 1985). But whereas poor phonological skills hold back the beginning reader (and speller), the “deep” alphabetic English writing system soon requires young readers and writers to become sensitive to orthographic and morpheme levels of language. Accurate word reading in less transparent languages such as English (Goswami, Gombert, & de Barrera, 1998) soon calls for strategies to map sounds onto larger units, such as syllables and rimes. For example, reading and spelling words like peach, friction, and hiding require knowledge of multiletter spelling patterns like ea and ch that are highly reliable in their spelling-sound correspondence (Venezky, 1999) as well as connections with the spellings and pronunciations of syllables like tion and ing. Ziegler and Goswami (2005) have described these challenges as consistency and granularity problems. The consistency problem characterizes languages like English in which a single phoneme may have multiple spellings and some orthographic units may have multiple pronunciations. The granularity problem describes the inconsistency of smaller “grain sizes” (i.e., phonemes) that requires children to develop reading strategies at the sublexical levels (e.g., rimes). The inconsistency of English orthography may throw discouraging obstacles in the path of students with reading problems. One instructional practice that addresses consistency and granularity problems is structural analysis of constituent word parts. We use the term structural analysis here to encompass the division of written words into parts that can be recognized as subunits, including affixes, roots, and syllables (Chall, 1967), as well as the recognition of multiletter spelling patterns (e.g., ea, ight).

Structural analysis develops awareness of the morphological layer of language, which becomes more important by middle school but begins to develop in the early grades. In average readers of Grades 3 to 6, these morphological skills begin to surpass phonological skills in their contribution to reading (Singson, Mahony, & Mann, 2000), as children learn to coordinate levels of language (Green et al., 2003; Mahony, Singson, & Mann, 2000; Nagy, Berninger, Abbott, Vaughan, & Vermeulen, 2003). As children encounter more complex written and oral language, they build semantic and syntactic knowledge that they later use to infer meanings of morphologically complex words (Tyler & Nagy, 1989). Correlational studies lend further support to the role of morphological awareness in the reading skills of young children, contributing significant variance in reading outcomes for first graders (Carlisle & Nomanbhoy, 1993), second graders (Nagy et al., 2003), and third graders (Mahony et al., 2000; Shankweiler et al., 1995).

Morphological reading skills develop gradually, beginning with knowledge of inflections in primary grades and adding extensive knowledge of derived words by third grade (Anglin, 1993). In fact, 60% of the new words that middle school students encounter in reading are derived forms (Nagy & Anderson, 1984). One model of how children process complex words (Carlisle & Fleming, 2003; Schreuder & Baayen, 1995) suggests that children develop initial knowledge of morphemes by learning to decompose words into smaller parts that recur in words they encounter in reading. Experience with written and oral language later co-activates semantic and syntactic information when these word parts are encountered. Of interest in our studies is the early phase of this process, when children learn to recognize the subunits of words.

EXPLICIT INSTRUCTION IN STRUCTURAL ANALYSIS

Fowler and Liberman (1995) summarized the psycholinguistic evidence that skilled readers process morphemically complex words in an analytic fashion, suggesting that less skilled readers would benefit from explicit instruction in orthographic and morpheme language layers. However, the results of structural analysis interventions, including instruction in syllabication skills, have been mixed. Rule-based syllable segmenting strategies, for example, have not been found effective (Canney & Schreiner, 1976-1977; P. Cunningham, Cunningham, & Rystrom, 1981), whereas more flexible strategies for reading multisyllable words have been more successful (Archer, Gleason, Vachon, & Hollenbeck, 2001; Henry, 1989, 1993; Shefelbine, 1990; Vachon & Gleason, 2001). For example, Bhattacharya and Ehri (2004) taught adolescents with poor reading skills to apply a flexible syllable segmentation strategy to decode multisyllable words and to match the spoken syllables to spellings, using words with similar syllabic constituents. The effect size for syllable segmenting on a decoding transfer task was d = 1.19.

Training in subword structures appears to benefit the reading and spelling performance of younger students as well. Metacognitive strategy training in structural analysis was used by Lovett and Steinbach (1997) for small-group remedial instruction of students with reading disabilities in Grades 2 through 6. Students made similar gains across grade levels, with significant improvement in word identification, word attack, and transfer word tests.

THE ROLE OF READING PRACTICE

Reading experience, sometimes measured through print exposure (A. E. Cunningham & Stanovich, 1998), becomes an increasingly important variable as multiple layers of word knowledge develop. Although the phonological connection is critical for initial reading skills, reading practice boosts early phonological skills and permits the development of high-quality lexical representation in word identification and spelling (Perfetti, 1992). Limited exposure to print slows the growth of subword lexical connections and word-specific representations. In later reading development, print exposure provides a similar boost through successful recognition and spelling attempts of more complex inflected and derived word forms. Print exposure, due to its role in vocabulary and knowledge development (A. E. Cunningham & Stanovich, 1991), may play an even more important role for students who are less sensitive to morphological relationships in complex words. These students may also lack semantic and syntactic skills that facilitate the independent acquisition of word pronunciation and meaning, and they often require scaffolded oral reading experience to practice complex word identification and to supplement their limited vocabulary (Perfetti, 2003). Unfortunately, early texts used for silent reading practice may not be designed to build word and domain knowledge (Chall & Jacobs, 2003; Hiebert, 2005). Chard and Kame’enui (2000) documented low levels of oral reading in first-grade classrooms for students with poor reading skills. In both of the interventions we describe here, structural analysis instruction is combined with oral reading practice in carefully selected texts to provide scaffolded practice reading larger and more complex words.

SUPPLEMENTAL INSTRUCTION BY PARAEDUCATORS

Most supplemental reading interventions reported in the literature have been implemented by teachers (Elbaum, Vaughn, Hughes, & Moody, 2000). Although it would be desirable if there were adequate teacher time to supplement instruction for students with poor reading skills, the economics of schools often make it more feasible for paraeducators to deliver this instruction. Limited research has made it difficult to determine how effective paraeducators are in this role, in part because these nonteacher implementers are not always identified consistently. For example, in our interventions, the term parent tutor or college student tutor often describes someone with the same qualifications and training as a paraeducator. Elbaum et al.’s (2000) meta-analysis reported that when nonteachers were used to supplement reading instruction, the largest effect sizes (d = 1.65, based on three studies) were found for college student tutors. We recently reported (Vadasy, Sanders, & Peyton, 2005) on a supplemental, code-oriented kindergarten intervention implemented by paraeducators in which an effect size of 1.06 in decoding was obtained. Limited insights into the effectiveness of paraeducators may also be obtained from classroom-based early reading interventions that have been implemented at least in part by paraeducators. Simmons, Kame’enui, Stoolmiller, Coyne, and Harn (2003) used educational assistants to implement a small-group, highly explicit early reading intervention for kindergarten students in the bottom quartile in alphabetic and phonological skills. The effect size for nonsense word fluency was 0.92. Blachman, Ball, Black, and Tangel (1994) and Torgesen, Wagner, Rashotte, Rose, et al. (1999) used both teachers and teaching assistants to deliver an intense, explicit early reading intervention for at-risk kindergartners. Decoding effect sizes obtained in these studies were 1.01 and 0.57, respectively. These preliminary findings suggest that we might better use paraeducators to supplement reading instruction for children with reading problems.

In this article, we describe two studies in which paraeducators supplemented instruction in structural analysis for second- and third-grade students with reading problems. Study 1 provided a field test of the intervention, using a quasi-experimental, nonequivalent-groups design. The revised intervention was subsequently tested in a randomized experiment (Study 2). The research questions we addressed were as follows:

1. Does explicit instruction in structural analysis improve word reading and spelling for students with poor reading skills?

2. Does oral reading practice in texts that feature complex words further benefit the fluency and comprehension skills of these students?

3. Can structural analysis instruction be implemented effectively by paraeducator tutors?

The theoretical background for this research includes connectionist models (Adams, 1990; Seidenberg, 1989) of word recognition through distributed representations of spelling, sound, meaning, orthography, and morphological structure. The studies also address reciprocal relations between phoneme awareness and reading (Ehri, 1985; Liberman, Liberman, Mattingly, & Shankweiler, 1980) and between reading and spelling (Ehri, 1997). As Bryant, Nunes, and Bindman (2000) suggested, a similar reciprocity is likely among morphological knowledge, reading, and spelling skills. These morphological interconnections may support more complex lexical analysis skills, including those reflected in spelling. In particular, we focus on the role of reading practice after children attain complex phoneme awareness and a high quality of word representation (Juel, 1994; Stahl & Murray, 1998). Reading practice offers authentic opportunities to develop knowledge of the syntactic and semantic role of morphemes. Students who are unable to independently access texts with morphologically complex words may avoid reading (Stanovich & Cunningham, 1992). One strategy to afford these students exposure to these language structures is to scaffold reading practice in texts with complex vocabulary. Paraeducators are well suited to scaffold oral reading practice in the grade-level texts that these readers find difficult to access.

Finally, these studies extend our previous research (Vadasy, Jenkins, & Pool, 2000; Vadasy, Sanders, Jenkins, & Peyton, 2002) into the efficacy of paraeducators to supplement classroom reading instruction for students with poor reading skills. Although we have ample evidence that one-to-one supplemental tutoring programs in reading are highly effective (Elbaum et al., 2000), the effectiveness and potential of paraeducators have not been directly addressed. More recently, the requirements for paraeducators with instructional duties in programs supported by Title I, Part A funds, as set forth in the No Child Left Behind Act (2001), have been designed to prepare these staff to assist in reading instruction. However, there is limited information on the training and materials they will need to be effective in these roles.

STUDY 1

METHOD

Participants

Students. Second graders were recruited from 12 urban, demographically similar schools in a large northwestern school district. Of the participating schools, 6 were treatment sites and 6 were control sites. During the first month of school, 19 second-grade teachers referred for screening students judged to be at risk for reading difficulties. Criteria for study participation included (a) parent consent for study participation, (b) nonretention in first or second grade, (c) no prior tutoring experience, and (d) a pretest reading accuracy composite standard score at or below 95 (37th percentile), computed as the average of the standard scores on the Reading subtest of the Wide Range Achievement Test-Revised (WRAT-R; Jastak & Wilkinson, 1984) and the Word Attack and Word Identification subtests of the Woodcock Reading Mastery Test-Revised/Normative Update (WRMT-R/NU; Woodcock, 1998).

Forty-six students met the study eligibility criteria. Students at treatment sites were assigned to tutoring based on school schedules, and students at control sites received no tutoring. Study attrition was 13%: 5 of the 46 students (11%) moved from the school (1 treatment, 4 control), and 1 treatment student (2%) was uncooperative during tutoring sessions. Thus, 40 students completed all phases of the study. To compare equivalent groups, 3 control students were removed due to English language learner (ELL) status (there were no ELL students in the treatment group at the end of the study), and 6 (3 control, 3 treatment) were removed due to outlier scores on pretest measures (two or more standard deviations from the grand mean on pretest raw scores). These students’ scores are described separately in the Results section.

A final sample of 31 students (n = 12 treatment, n = 19 control) was included in the analyses. Treatment students were 67% male, 33% minority, and 67% Title I eligible; controls were 47% male, 47% minority, and 47% Title I eligible. Chi-square analyses revealed no significant differences between groups on any demographic or school variable (all ps > .05).

Tutors. Eligible students at treatment sites were assigned to tutors based on classroom and tutor schedules. Tutors were recruited from the school communities and were hired and paid as hourly employees by their respective schools. Tutors were women; many were mothers of older students in the school building, and most were college graduates. Researchers provided 3 hours of initial training to introduce and model instructional procedures and to supervise tutor practice on each lesson component. Initial training also presented explicit correction procedures and scaffolding suggestions. Ongoing coaching and follow-up training were provided throughout the year during weekly visits to treatment sites. Typically, each tutor received an added 60-90 min of individual, on-site training.

Intervention

Students received 30 min of individual tutoring, 4 days a week, for 20 weeks, with interruptions for scheduled school vacations and holidays. Each session included 15 min of instruction in word-level skills and structural analysis and 15 min of oral reading practice. Students progressed through lessons at their own pace (i.e., some students completed more than one lesson per session), with instruction averaging M = 42.2 hours (SD = 7.98).

Tutors were provided with scripted materials on which all instruction in word-level skills was based. During the first 10 weeks of instruction, tutors reviewed letter-sound correspondences, including word reading and spelling with featured correspondences. Added practice in the alphabetic principle was provided in a procedure described by Berninger (1998). Students practiced identifying the sounds of high-frequency, multiletter spelling units matched with a keyword and picture. Two-letter spelling units included consonant blends, digraphs, and silent letters. The student pointed to each spelling unit, said the name of the letters, the pictured word, and the corresponding sound (e.g., “kn, knot, /n/”), and tutors gradually faded the use of the picture. Tutors also used the letter-sound card to scaffold word and text reading and spelling.

The first group of lesson materials reviewed reading and spelling of single-syllable words that featured multiletter spelling patterns that had been introduced and practiced. If needed, tutors reviewed and helped students practice a phoneme blending strategy. Students practiced reading and spelling aloud the letter patterns in isolation (e.g., sh, ea) and in lists of words and nonwords including the target letter combinations. For all words read in word lists, and for corrections in text reading, tutors directed students to analyze the word as follows: “Find the letter pair, say the sound, then read the word” (e.g., /br/, brush; /wh/, wheat). For the spelling dictation, tutors chose several words that were difficult for students to read. To develop automatic recognition of sight words, students practiced reading from lists of high-frequency words (Fry, 1997). Tutors worked through these lists to identify words that students could not read automatically. Students then practiced those words by reading, spelling, and rereading each word. Students practiced each sight word until they could read it instantly, without hesitation, three times in a row. Spelling was integrated with these sight word reading drills to help students form complete word representations. Tutors dictated three difficult sight words for the student to write. Students reread all written words.

In the second half of the intervention, tutors worked with more heavily scripted lessons to help students practice reading and spelling of inflected, affixed, and multisyllable words. Ten lessons offered reading and spelling practice with inflected words (-s, -ed, -ing, and -y endings). Students were introduced to allomorphs of plurals (/s/, /z/, and /ez/) and inflections (for the past tense, /t/, /d/, and /ed/). As these inflections often require spelling changes when added to base words, tutors presented and reviewed (with examples) simple rules for spelling inflected words (e.g., e-drop, consonant doubling), although students were not expected to memorize the rules.

A set of 30 lessons introduced and reviewed reading and spelling of words with common affixes (e.g., dis-, mis-, re-, pro-, -ly). Tutors modeled how to chunk multisyllable words into syllables and dictated words for students to practice chunking orally. This oral syllable segmenting was integrated with practice reading and spelling lists of multisyllable words. The syllable chunking strategy was flexible; students were encouraged to notice the vowels, find the syllables, read them, and put the parts together. Affixes were chosen from high-frequency affix lists (Archer et al., 2001; Carnine, Silbert, & Kame’enui, 1990). Tutors first modeled reading and spelling the affixes in isolation. Then students read the affixed words by finding and reading the affix, removing it, reading the root word, and then putting all the parts together. Students spelled affixed words and practiced segmenting multisyllable affixed words into syllables. About half of the affixes were prefixes, and half were suffixes. Half of the suffixes were neutral (i.e., did not change the spelling or pronunciation of the stems to which they were added), and half were nonneutral.

During the last 15 min of each tutoring session, students read orally from grade-level passages and trade books. Readings were selected carefully to provide opportunities for students to practice phonological, orthographic, and morphological linkages taught in the lessons. Tutors were instructed to choose the reading method that best suited the student’s skill level: independent reading (student can read text with 90% or greater accuracy), partner reading (student accuracy between 80% and 90%), or echo reading (student accuracy below 80%). For independent reading, the student read orally, and the tutor provided corrections and assistance on difficult or incorrectly read words. For partner reading, both tutor and student read aloud together, again with the tutor correcting and assisting for difficult words. For echo reading, the tutor first read a sentence, and then the student reread the sentence. Most students were able to read independently, with tutor assistance on difficult words. Tutors were trained to provide immediate corrective feedback on all errors (Grossen & Carnine, 1993; Pany & McCoy, 1988). Feedback included “What’s the first sound?” “Find the part you know,” and “Check the letter card to find the sound.” If a student struggled for more than 3 seconds, the tutor supplied the word and had the student repeat the word and reread the sentence. Tutors also provided assistance and feedback to help students apply their knowledge of recently introduced spelling patterns and syllable chunking to correct blockages and miscues.

Fidelity

Researchers conducted on-site tutor observations weekly, using a checklist of critical student and tutor behaviors. Activities were rated on a dichotomous scale, and the highest possible rating was 100%. Tutors were also rated on management and use of tutoring time. For all criteria, the researchers scored only those lesson parts that they observed during a visit. Treatment fidelity on instructional components ranged from 88% to 100% (M = 96.3%, SD = 5.18%). Tutor scores on management and time use ranged from 81% to 100% (M = 97.7%, SD = 5.69%). The high average fidelity and limited variance in tutor quality limited their usefulness as predictors of student outcomes.

Assessments

Students were pretested in the fall prior to tutoring and posttested after tutoring in the spring.

Pretests. Receptive language was measured with the Peabody Picture Vocabulary Test-Third Edition, Form A (PPVT-IIIA; Dunn & Dunn, 1997), which requires students to select a picture that best illustrates the meaning of an orally presented stimulus word. Testing is discontinued after the student misses 8 out of 12 items within a set. Test-retest reliability is .93 for 6- to 10-year-olds.

Three measures of word-level reading accuracy were used (and averaged to form a composite word-level accuracy score), including the WRAT-R Reading subtest and the Word Identification and Word Attack subtests of the WRMT-R/NU. The WRAT-R Reading subtest measures letter knowledge (naming 13 uppercase letters and identifying the first 2 letters in the student’s name) and word reading skills. Testing is discontinued after 10 consecutive missed items. Internal consistency reliability is .96 for 7- to 8-year-olds. The WRMT-R/NU Word Identification subtest requires the student to read increasingly difficult words. Testing is discontinued after six consecutive items are missed. Split-half reliability averages .99 for first and third graders. The WRMT-R/NU Word Attack subtest requires the student to read a list of pseudowords that increase in difficulty, until six consecutive items are missed. Split-half reliability averages .96 for third graders.

Two measures of word-level reading efficiency were used (and averaged to form a composite word-level efficiency score), namely, the Phonemic Decoding and Sight Word Efficiency subtests of the Test of Word Reading Efficiency (TOWRE; Torgesen, Wagner, & Rashotte, 1999). The TOWRE Phonemic Decoding subtest requires the reading of as many nonwords as possible in 45 seconds from a list that increases from 2-phoneme nonwords to 10-phoneme nonwords. Test-retest reliability for 6- to 9-year-olds is .90. The TOWRE Sight Word subtest requires the reading of as many words as possible in 45 seconds from a list that gradually increases in difficulty. Test-retest reliability for 6- to 9-year-olds is .96.

Reading comprehension was assessed using the WRMT-R/NU Passage Comprehension subtest. Students are asked to restore a word that is missing from a series of sentences and short passages. Testing is discontinued after six incorrect responses. Split-half reliability averages .97 for first and third graders.

Finally, spelling was assessed with the WRAT-R Spelling subtest, which requires students to copy marks, write their names, and spell dictated words. Testing is discontinued after 10 consecutive missed items. The raw score used to compute the standard score typically includes copied marks and name spelling in addition to words correctly spelled; internal consistency reliability for 7- to 8-year-olds is .93. However, similar to Fuchs et al. (2001), we used raw and standard scores based only on the number of words correctly spelled (out of 40 possible).

Posttests. Posttests included all of the aforementioned measures except the PPVT-IIIA. In addition to these measures, context reading skills were measured at posttest with a group of three grade-level passages from the Invitations to Literacy (Houghton Mifflin, 1999) program: “With My Brother” (Grade level 1), “Trip to Shay Lake” (Grade level 2), and “To Catch a Thief” (Grade level 2-3). Fluency rate (i.e., words correctly read per minute) was recorded for each passage and averaged to create a composite reading fluency score.
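To make the composite concrete, the sketch below computes words correct per minute for each passage and averages across the three passages, as described above. It is a minimal illustration only: the word counts, the timings, and the assumption that raw reading times were converted to per-minute rates are hypothetical, not study data.

```python
# Minimal sketch of the composite fluency score described above: words read
# correctly per minute (wcpm) for each passage, averaged across passages.
# Passage word counts and times are hypothetical placeholders, not study data.

def wcpm(words_correct, seconds):
    """Words correctly read per minute for one timed passage (assumed timing)."""
    return words_correct * 60.0 / seconds

passages = [  # (words read correctly, reading time in seconds) per passage
    (70, 60),  # e.g., a Grade 1 level passage
    (65, 55),  # e.g., a Grade 2 level passage
    (58, 62),  # e.g., a Grade 2-3 level passage
]
rates = [wcpm(words, secs) for words, secs in passages]
composite_fluency = sum(rates) / len(rates)  # composite reading fluency score
print(round(composite_fluency, 1))  # about 65.7 wcpm in this made-up example
```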

Analyses

SPSS 13.0 for Windows (SPSS, 1989-2004) was used to compute all descriptive and inferential statistics. For posttest analyses of variance (ANOVAs), effect sizes were computed as the difference between treatment and control group means divided by the pooled estimate of the standard deviation (square root of the mean square error term); similarly, for posttest analyses of covariance (ANCOVAs), effect sizes were computed as the difference between adjusted group means divided by the pooled estimate (Cohen, 1988).
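In equation form, the effect size computations just described are, for the posttest ANOVAs and ANCOVAs respectively (this is a restatement of the procedure above, with T and C denoting the treatment and control groups and MS_error the mean square error from the corresponding model):

```latex
% Posttest ANOVA effect size: difference between group means divided by the
% pooled standard deviation, estimated as the square root of the mean square error.
d_{\mathrm{ANOVA}} = \frac{\bar{Y}_{T} - \bar{Y}_{C}}{\sqrt{MS_{\mathrm{error}}}}

% Posttest ANCOVA effect size: same form, but with pretest-adjusted group means
% in the numerator and the ANCOVA mean square error under the radical.
d_{\mathrm{ANCOVA}} = \frac{\bar{Y}^{\,\mathrm{adj}}_{T} - \bar{Y}^{\,\mathrm{adj}}_{C}}{\sqrt{MS_{\mathrm{error}}}}
```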

RESULTS

Pretests

Intercorrelations among pretest scores ranged from r = .38 (receptive language and spelling) to r = .89 (reading accuracy and efficiency). One-way ANOVAs revealed no significant differences between treatment and control groups on any pretest measure (all Fs nonsignificant, ps > .10; see Table 1), including receptive language (treatment M = 100.8, SD = 14.53; control M = 98.5, SD = 15.13), F(1, 29) = 0.176, p > .05.

Posttests

Intercorrelations among posttest scores ranged from r = .55 (reading efficiency and spelling) to r = .91 (reading accuracy and comprehension). When pretest data were available, we conducted ANCOVAs for each construct of interest to examine postintervention group differences in reading accuracy, reading efficiency, reading comprehension, and spelling. Lacking a pretest fluency rate score, we conducted an ANOVA for reading fluency. The results from these analyses (see Table 1) showed that the intervention group significantly outperformed the control group on reading efficiency, F(1, 28) = 5.474, p < .05, as well as on reading fluency, reading comprehension, and spelling.

The group means provided in Table 1 show that the treatment group’s average standard score performance was approximately at the 35th percentile in reading accuracy, 29th percentile in reading efficiency, 42nd percentile in reading comprehension, and 13th percentile in spelling; comparatively, controls averaged at the 24th, 17th, 30th, and 6th percentiles. Tutored students’ average reading rate at posttest was 74 words correct per minute (wcpm), which is considered “at some risk” according to benchmark indicators of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS; Good & Kaminski, 2002); this corresponds to about the 35th percentile according to updated fluency norms, whereas the control group’s average performance of 53 wcpm corresponds to about the 20th percentile (Behavioral Research and Teaching, 2005).

Students Removed From Analysis

As reported in the Method section, we removed nine students from the analysis to avoid distortion of the results. Three of these were controls who were identified as ELL (there were no students with ELL status in the treatment group), and six (three treatment students and three controls) were significantly younger or older than the sample mean age (which was also reflected in their raw scores on several measures). Here we summarize briefly the pattern of effects for these students.

The three control students who did not receive tutoring but who did receive special instruction because of their ELL status (pretest receptive language score averaged 76, sixth percentile) made remarkable gains from pretest to posttest in the areas of reading efficiency (gain of 6 points, posttest score of 78, seventh percentile) and reading comprehension (11-point gain, posttest score of 86, 16th percentile). Little progress was made in the areas of reading accuracy (gain of 1 point, posttest score of 80, 10th percentile) or spelling (1-point gain, posttest score of 73, fourth percentile), and their average posttest reading fluency performance was 41 wcpm (15th percentile).

The three control students removed from analysis due to their significantly younger ages (pretest receptive language averaged 100, 50th percentile) fared less well, showing no gain in reading accuracy (pretest and posttest score of 91, 28th percentile), a 1-point loss in reading efficiency (posttest score of 88, 21st percentile) and in spelling (posttest score of 87, 19th percentile), and a 1-point gain in reading comprehension (posttest score of 93, 32nd percentile). Average posttest reading fluency was 45 wcpm (approximately 15th percentile).

The three treatment students removed from analysis due to their significantly older ages (receptive language score averaged 104, 60th percentile) received substantial benefits from tutoring. On average, these students gained 9 points in reading accuracy (posttest score of 106, 66th percentile), 14 points in reading efficiency (posttest score of 98, 44th percentile), 9 points in reading comprehension (posttest score of 105, 63rd percentile), and 17 points in spelling (posttest score of 98, 44th percentile). Furthermore, their reading fluency rate was in alignment with the treatment group average, at 67 wcpm (approximately 30th percentile).

STUDY 2

METHOD

Participants

Students. During the first month of school, 28 second- and third-grade teachers from five schools referred students whom they regarded as at risk for reading difficulties. Students were screened against four criteria similar to those in Study 1: (a) informed parent consent, (b) nonretention, (c) no previous tutoring experience, and (d) a pretest reading accuracy composite standard score between 80 (10th percentile) and 95 (37th percentile). The pretest reading accuracy composite was again the average of the standard scores from the WRAT-R Reading subtest and the WRMT-R/NU Word Identification and Word Attack subtests. In contrast with Study 1, for Study 2 we established a minimum standard score of 80 to ensure that all students entered with basic decoding skills. Students with a standard score below 80 on the word-level reading accuracy composite were provided with a more basic decoding program but were not followed after pretest.

Sixteen second- and 19 third-grade students who met the screening criteria were randomly assigned, within school, to tutoring (treatment) or no tutoring (control) groups. Attrition included 6 (20%) students: 3 who moved from the school (1 treatment, 2 control), 2 treatment students who received an alternative intervention (based on parent and school requests), and 1 control student whom the school assigned to treatment midyear. Moreover, 3 (10%) students were removed from the analysis due to extreme pretest outlier scores (more than 2 SD from the sample grand mean in one of the following: age in years, 1 treatment student; reading efficiency, 1 control student; and reading fluency, 1 treatment student). This small group of students is described separately in the Results section. The final sample sizes were n = 11 for the treatment group and n = 10 for the control group, which together included 6 second graders and 15 third graders. In the treatment group, 3 (27%) students were tutored during the classroom reading block, 5 (45%) were tutored during non-reading activities, and 3 (27%) were tutored during varied classroom activities. In the treatment group, 64% were boys, 46% were minority students, 36% were Title I eligible, and 27% were English language learners (ELL), compared to 30% boys, 60% minority, 50% Title I eligible, and 20% ELL for controls. Chi-square analyses revealed no significant differences between groups on any demographic or school variable (all ps > .05).

Tutors. Tutors included six individuals who were trained and provided coaching using procedures similar to those in Study 1. Four tutors had bachelor’s degrees, one had an associate degree, and one had completed high school. All were women, and most were parents of older students in the school where they tutored. One tutor was a paraeducator with other instructional duties, and three tutors had previous tutoring experience.

Intervention

Instruction was similar to that provided in Study 1. Because both studies reported here were part of multiyear research to field-test supplemental paraeducator instruction for second and third graders with poor reading skills, we incorporated observations and tutor feedback from the phonics instruction delivered in Study 1 and made the following changes for Study 2: In the first 12 lessons, students reviewed reading and spelling words and nonwords that included two-letter combinations (consonant blends, digraphs, and vowel teams). The second group of 34 lessons provided practice reading and spelling multisyllable words, including practice in vowel flexing for words with a schwa vowel (i.e., adjusting phonological recoding to arrive at the correct pronunciation) and practice with affixed words. The third group of 20 lessons covered inflected words, with reading and spelling practice much like the instruction in Study 1. Furthermore, students practiced reading and spelling high-frequency exception words for 3 to 4 minutes each session. For oral reading practice, students read short one- or two-paragraph passages written by the researchers to provide immediate, explicit practice in taught word features. Students also read orally in leveled nonfiction trade books selected for word choice and text quality. At the end of the year, tutored students had completed M = 72.3 (SD = 9.69) sessions, with instruction averaging 36 hours.

Fidelity

Weekly on-site observations of tutor instructional behaviors (rated from 0 = never does this activity to 4 = always does this activity) showed that treatment fidelity on lesson components was acceptable, M = 3.73 (SD = 0.22), and tutor management (rated on a dichotomous scale) averaged 97.8% (SD = 1.75%). Once again, restricted variance in tutor fidelity limited its usefulness as a predictor of student outcomes.

Assessments

The same measures were used as in Study 1, with two additions: the three reading fluency passages used at posttest in Study 1 were also administered at pretest in Study 2, and one measure of student classroom behavior and classroom instruction was added. In February and March of the intervention year, classroom teachers completed the Multigrade Inventory for Teachers (MIT; Shaywitz, 1987), designed to measure student behaviors in the classroom as well as teacher instructional approaches. Teachers rated students individually on more than 50 items covering activity, attention, adaptability, social, language, and academic skills. Six scales were composed from a subset of 28 items: Academic, Language, Dexterity, Attention, Activity, and Behavior. Scores from items with reliable time-factor loadings (as reported by Agronin, Holahan, Shaywitz, & Shaywitz, 1992, pp. 98-99) were averaged, with each item score ranging from 0 to 5. Items that loaded negatively were reverse scored to range from 5 to 0. Except for the Academic scale, higher scores indicated worse performance.
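The sketch below illustrates the item-level scoring just described: item scores from 0 to 5 are averaged within a scale, with negatively loading items reverse scored to run from 5 to 0. The item names, loading signs, and scores are hypothetical placeholders, not actual MIT content.

```python
# Minimal sketch of the MIT scale scoring described above. Each scale score is
# the mean of its item scores (0-5), with negatively loading items reverse
# scored (5 - score). Item names and loading signs here are hypothetical.

def score_scale(item_scores, loading_signs):
    """Average the items of one scale, reverse scoring negative loaders."""
    adjusted = []
    for item, score in item_scores.items():
        if loading_signs[item] < 0:
            score = 5 - score  # reverse score so that 5..0 maps onto 0..5
        adjusted.append(score)
    return sum(adjusted) / len(adjusted)

# Example: three hypothetical items from an Attention-like scale.
items = {"item_a": 4, "item_b": 1, "item_c": 3}
signs = {"item_a": 1, "item_b": -1, "item_c": 1}
print(round(score_scale(items, signs), 2))  # mean of 4, 4, 3 -> 3.67
```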

RESULTS

Student Behaviors

Analyses of variance (ANOVAs) of teacher MIT ratings of student school behaviors revealed no significant differences between groups (all Fs nonsignificant, ps > .05). Treatment group ratings ranged from M = 0.8 (Behavior; higher scores indicating worse performance) to M = 2.4 (Academic; higher scores indicating better performance), and control group ratings similarly ranged from M = 1.0 (Behavior) to M = 2.6 (Academic).

Classroom Instruction

Chi-square analyses also revealed no significant differences between groups in teachers’ descriptions of their classroom reading instruction or in their use of curricula. Most teachers in both groups reported a balanced reading approach (91% of treatment students and 89% of controls received instruction that emphasized both whole language and phonics), and most teachers (73% of treatment students, 56% of controls) used Houghton Mifflin, one of the two district-adopted literature-based reading programs. The other district program, Pegasus, was used by 9% of treatment students and 11% of controls.

Pretests

Correlations among pretests ranged from r = -.45 (receptive language and reading fluency) to r = .74 (reading accuracy and reading efficiency). Groups were equivalent on all pretests (see Table 2) except reading efficiency: Controls had significantly greater skill in word-level reading speed (M = 88.8, SD = 2.69) than treatment students (M = 83.8, SD = 4.68), F(1, 19) = 8.687, p < .05.

Posttests

Correlations among posttests ranged from r = -.30 (reading fluency and spelling) to r = .67 (reading efficiency and reading fluency). Despite the treatment group’s reading efficiency disadvantage at pretest, results from the ANCOVAs showed that tutored students significantly outperformed controls on reading accuracy and reading fluency at posttest (ps < .05; see Table 2).

The average standard scores in Table 2 show that the treatment group’s average performance was at the 37th percentile on reading accuracy, the 31st percentile on reading efficiency, the 35th percentile on reading comprehension, and the 13th percentile on spelling; comparatively, students who did not receive tutoring averaged at the 30th, 20th, 32nd, and 15th percentiles on the same tests. Notably, the average performance of treatment students in Studies 1 and 2 was quite similar; however, Study 2 control students’ average performance was markedly higher than that observed in Study 1, suggesting that classroom instruction may have changed in the year between studies.

Outliers

The results from small sample analyses can be greatly distorted by extreme outlier scores. As discussed in the method section, we chose to remove three students from the analysis because they had scores more than two standard deviations from the sample grand mean. However, we describe these students’ performance in more detail here.

One treatment student had been removed from statistical analysis due to his significantly older age (9.8 years old at pretest); this student, however, made substantial progress over the course of the year. By posttest, he had gained 1 point in reading accuracy (standard score of 88, 21st percentile), 3 points in reading efficiency (standard score of 81, 10th percentile), and 6 points in spelling; he lost 1 point in reading comprehension (standard score of 88, 21st percentile) but gained nearly 30 wcpm in reading fluency (posttest wcpm = 72).

Another treatment student had been removed due to her significantly low pretest reading fluency rate (pretest wcpm = 10); she too made progress over the course of the year. By posttest, she had gained 10 points in reading accuracy (standard score of 99, 47th percentile), 6 points in reading comprehension (standard score of 98, 44th percentile), and 5 points in spelling (standard score of 83, 13th percentile).

Finally, one control student was removed from the analysis due to her significantly high pretest reading efficiency score (standard score of 100, 50th percentile). By posttest, she lost 2 points in reading accuracy (standard score of 90, 25th percentile), 10 points in reading efficiency (standard score of 90, 25th percentile), and 7 points in spelling (standard score of 77, sixth percentile). However, this student did gain 4 points in reading comprehension (standard score of 96, 40th percentile) and 36 wcpm in reading fluency (123 wcpm at posttest).

DISCUSSION

The results from these two studies add qualified support to previously reported findings (Lovett & Steinbach, 1997; Nunes, Bryant, & Olsson, 2002) for supplemental instruction in structural analysis combined with oral reading practice for students with reading problems in Grades 2 and 3. Across studies, individual tutoring provided by trained paraeducators resulted in significantly higher reading accuracy or efficiency and fluency skills compared to classroom controls. In Study 1, tutored students also significantly outperformed controls at posttest in spelling and comprehension. Neither group of tutored students, however, attained end-of-grade benchmarks for fluency according to DIBELS oral reading fluency (ORF) benchmarks, although the tutored students in Study 1 read approximately 20 more words per minute than controls at posttest, and the tutored students in Study 2 gained 15 words per minute more than controls by posttest. Lack of a fluency pretest in Study 1 prevented us from comparing fluency gains in both groups. Although tutored students in Study 2 significantly outperformed controls on fluency, their average posttest rate of 87.5 wcpm placed them in the “at some risk” category on DIBELS fluency benchmarks at the end of both Grades 2 and 3 (about 23 wcpm below the end-of-third-grade DIBELS ORF benchmark of 110 wcpm) and at about the 30th percentile according to updated fluency norms (Behavioral Research and Teaching, 2005).

Effect sizes averaged d = 0.84 in Study 1 and d = 0.57 in Study 2. We compared these effect sizes with those reported in the National Reading Panel’s (NRP) meta-analysis of systematic phonics instruction (Ehri, Nunes, Stahl, & Willows, 2001). The unweighted effect sizes (mean g) reported for instruction that targeted students after Grade 1 (i.e., Grades 2-6) averaged 0.30. Effect sizes for instruction of low-achieving second to sixth graders averaged 0.16. Effect sizes for phonics instruction that taught students to analyze and blend larger subunits of words averaged 0.70. Most of the instruction in the NRP summary was delivered by teachers in large or small groups.

The first question we addressed was whether explicit supplemental instruction in structural analysis benefited the reading and spelling skills of second- and third-grade students with poor reading skills. In both studies, treatment students derived word-level and fluency benefits and, in Study 1, spelling and comprehension benefits. One possible explanation for these differences is that the students in Study 1 received an additional 6 hours of instruction. Although spelling instruction was integrated into both interventions, spelling effects were observed only in Study 1 (based on words correctly spelled only). This may be due to instructional differences between the studies, to our removal of the rule-based spelling instruction (which tutors in Study 1 reported was difficult to use), or to more effective spelling instruction provided to the controls in Study 2. An alternate explanation for the lack of spelling effects in Study 2 is developmental change in the spelling abilities of the older Study 2 students. If morphological accuracy gradually surpasses orthographic knowledge in its importance for spelling accuracy (Green et al., 2003), it may be that the third graders in Study 2 did not gain enough in morphological skills for the change to be reflected in the standardized spelling posttest.

The second question these studies addressed was whether oral reading practice enhanced fluency. In both studies, treatment students outperformed controls in reading rate. Treatment students in Study 2 gained an average of 2.16 words per week in fluency. Not surprisingly, 15 min of oral reading practice per session was not sufficient to move tutored students beyond the 30th percentile in fluency, where they began and ended instruction. The persistence of fluency deficits after intensive word reading instruction has been well documented (Torgesen, Rashotte, & Alexander, 2001). Our findings do raise the question of how much closer to grade level students might have advanced with 30 or 40 min of oral reading practice scaffolded by an adult, an intervention that would not be insurmountable to provide with adult tutors.

A general question addressed in these studies was whether paraeducators can effectively deliver instruction in structural analysis. To adequately address this question would have required a comparison group taught by teachers instead of paraeducators. Nonetheless, within the constraints of our study design, tutor fidelity data indicated that paraeducators implemented instruction in both studies to a high level of accuracy. These findings leave unanswered the question whether similar instruction by certified teachers would have been more effective. To evaluate the effectiveness of the interventions provided by the paraeducator tutors in these studies, we compared the failure rates to those reported in Torgesen’s (2000) review of five large-scale reading interventions delivered primarily by specially trained teachers (intensity of interventions ranged from 35 to 340 hours). Failure rates in those five studies averaged 5%, compared to 6% in our two studies.

Finally, the effect sizes in these interventions implemented by paraeducators should be considered in light of Hanushek, Kain, and Rivkin’s (1998) estimate for the effectiveness of specially designed instruction (most frequently provided by teachers), which they found to increase average reading scores by 0.04 SD over those attained in general education instruction.

Limitations

The findings of Study 1 should be considered in light of its quasi-experimental design. The findings in both studies are limited by the use of untreated control groups, the lack of a comparison group instructed by teachers, small sample sizes, and the lack of near-transfer reading and spelling measures of morphological skills, although we know of no widely used measures to permit comparison of outcomes across studies.

The interventions used by the tutors in these studies had broad effects on word-level and fluency skills. Effects in both studies were observed at both the word and the sentence level, suggesting consolidation of reading skills. Like others (Ehri et al., 2001), we found that our phonics-based intervention had a limited impact on comprehension in Study 2, though its effect was larger in Study 1. Chard and Kame’enui (2000) found the strongest oral reading fluency effect sizes when students received frequent oral reading practice at the phoneme, word, and sentence levels (letter-sound, word-word part, sentence-phrase); the tutored students in both studies received practice at all three levels as part of their instruction, and the effect sizes were robust for both reading efficiency and text reading fluency.

Implications for Practice

The difficulties that students experience in learning to read longer and more complex words are predicted by English orthographic inconsistency (Ziegler & Goswami, 2005). Students with poor reading skills are the most common casualties of reading instruction that does not make explicit the orthographic consistencies that characterize English spellings. The findings presented here indicate that paraeducators can effectively supplement classroom reading instruction for second- and third-grade students who do not yet perform at grade level in word reading skills. Between 10% and 15% of the students in these two studies were already identified for special education services. As Vaughn and Fuchs (2003) described, one component of a response-to-instruction model for identifying students with reading disabilities would be a general education system of highly effective core and supplemental instruction for students at risk for reading problems. Validated models for supplemental tutoring represent one type of research-based intervention that is often overlooked in discussions of a response-to-treatment approach to reading disabilities. The instruction delivered by trained paraeducators in these studies represents a standard treatment protocol that is feasible for many schools to adopt and for which fidelity of implementation is attainable (Fuchs, Mock, Morgan, & Young, 2003). Nonresponsive students might then be identified for added or differentiated instruction in accuracy or fluency skills. For example, the fluency gains observed with 15 min of scaffolded oral reading practice suggest that paraeducators might extend this practice to move students closer to grade-level fluency. Other students with continued poor reading accuracy might then be referred for more individualized instruction in structural analysis skills by more skilled teachers and specialists.

REFERENCES

Adams, M. J. (1990). Beginning to read: Thinking and learning about print. Cambridge, MA: MIT Press.

Agronin, M. E., Holahan, J. M., Shaywitz, B. A., & Shaywitz, S. E. (1992). The Multi-Grade Inventory for Teachers (MIT): Scale development, reliability, and validity of an instrument to assess children with attention deficits and learning disabilities. In S. E. Shaywitz & B. A. Shaywitz (Eds.), Attention deficit disorder comes of age: Toward the twenty-first century (pp. 98-116). Austin, TX: PRO-ED.

Anglin, J. M. (1993). Vocabulary development: A morphological analysis. Monographs of the Society for Research in Child Development, 58 (Serial No. 238).

Archer, A. L., Gleason, M. M., Vachon, V., & Hollenbeck, K. (2001). Instructional strategies for teaching struggling fourth and fifth grade students to read long words. Unpublished manuscript. Summary available at http://store.cambiumlearning.com/resources/research/pdf/sw_research_rewards_rb01.pdf

Behavioral Research and Teaching. (2005). Oral reading fluency: 90 years of measurement (Technical Report No. 33). Eugene: University of Oregon, College of Education.

Berninger, V. W. (1998). Process Assessment of the Learner (PAL): Guides for intervention and PAL intervention kit. San Antonio, TX: Psychological Corp.

Bhattacharya, A., & Ehri, L. C. (2004). Graphosyllabic analysis helps adolescent struggling readers read and spell words. Journal of Learning Disabilities, 37, 331-348.

Blachman, B., Ball, E. W., Black, R. S., & Tangel, D. M. (1994). Kindergarten teachers develop phoneme awareness in low-income inner- city children. Reading and Writing: An Interdisciplinary Journal, 6, 1-18.

Bradley, L., & Bryant, P. E. (1983). Categorizing sounds and learning to read-a causal connection. Nature, 301, 419-421.

Bryant, P., Nunes, T., & Bindman, M. (2000). The relations between children’s linguistic awareness and spelling: The case of the apostrophe. Reading and Writing: An Interdisciplinary Journal, 12, 253-276.

Canney, G., & Schreiner, R. (1976-1977). A study of the effectiveness of selected syllabication rules and phonogram patterns for word attack. Reading Research Quarterly, 12, 102-124.

Carlisle, J. F., & Fleming, J. (2003). Lexical processing of morphologically complex words in the later elementary years. Scientific Studies of Reading, 7, 239-254.

Carlisle, J. F., & Nomanbhoy, D. (1993). Phonological and morphological awareness in first graders. Applied Psycholinguistics, 14, 177-195.

Carnine, D., Silbert, J., & Kame’enui, E. J. (1990). Direct instruction reading (2nd ed.). Englewood Cliffs, NJ: Merrill.

Chall, J. S. (1967). Learning to read: The great debate. New York: McGraw-Hill.

Chall, J. S., & Jacobs, V. A. (2003). The classic study on poor children’s fourth-grade slump. American Educator, 27, 14-22, 28-29.

Chard, D. J., & Kame’enui, E. J. (2000). Struggling first-grade readers: The frequency and progress of their reading. The Journal of Special Education, 34, 28-38.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Erlbaum.

Cunningham, A. E., & Stanovich, K. E. (1991). Tracking the unique effects of print exposure in children: Associations with vocabulary, general knowledge, and spelling. Journal of Educational Psychology, 83, 264-274.

Cunningham, A. E., & Stanovich, K. E. (1998). The impact of print exposure on word recognition. In J. L. Metsala & L. C. Ehri (Eds.), Word recognition in beginning literacy (pp. 235-262). Mahwah, NJ: Erlbaum.

Cunningham, P., Cunningham, J., & Rystrom, R. (1981). A new syllabication strategy and reading achievement. Reading World, 20, 208-214.

Dunn, L. M., & Dunn, L. M. (1997). Peabody picture vocabulary test (3rd ed.). Circle Pines, MN: American Guidance Service.

Ehri, L. C. (1985). Effects of printed language acquisition on speech. In D. Olson, N. Torrance, & A. Hildyard (Eds.), Literacy, language, and learning (pp. 333-367). Cambridge, UK: Cambridge University Press.

Ehri, L. C. (1997). Learning to read and learning to spell are one and the same, almost. In C. A. Perfetti, L. Rieben, & M. Fayol (Eds.), Learning to spell: Research, theory, and practice across languages (pp. 237-270). Mahwah, NJ: Erlbaum.

Ehri, L. C., Nunes, S., Stahl, S., & Willows, D. (2001). Systematic phonics instruction helps students learn to read: Evidence from the National Reading Panel’s meta-analysis. Review of Educational Research, 71, 393-447.

Elbaum, B., Vaughn, S., Hughes, M. T., & Moody, S. W. (2000). How effective are one-to-one tutoring programs in reading for elementary students at risk for reading failure? A meta-analysis of the intervention research. Journal of Educational Psychology, 92, 605- 619.

Fowler, A., & Liberman, I. (1995). The role of phonology and orthography in morphological awareness. In L. Feldman (Ed.), Morphological aspects of language processing (pp. 157-188). Hillsdale, NJ: Erlbaum.

Fry, E. (1997). 1000 instant words: The most common words for teaching reading, writing, and spelling. Lincolnwood, IL: Contemporary Books.

Fuchs, D., Fuchs, L. S., Thompson, A., Al Otaiba, S., Yen, L., Yang, N., Braun, M., & O’Connor, R. E. (2001). Is reading important in reading-readiness programs? A randomized field trial with teachers as program implementers. Journal of Educational Psychology, 93, 251-267.

Fuchs, D., Mock, D., Morgan, P. L., & Young, C. L. (2003). Responsiveness to intervention: Definitions, evidence, and implications for the learning disabilities construct. Learning Disabilities Research & Practice, 18, 157-171.

Good, R. H., & Kaminski, R. A. (2002). Dynamic indicators of basic early literacy skills (6th ed.). Eugene, OR: Institute for Development of Educational Achievement.

Goswami, U., Gombert, J. E., & de Barrera, L. F. (1998). Children’s orthographic representations and linguistic transparency: Nonsense word reading in English, French, and Spanish. Applied Psycholinguistics, 19, 19-52.

Green, L., McCutchen, D., Schwiebert, C., Quinlan, T., Eva-Wood, A., & Juelis, J. (2003). Morphological development in children’s writing. Journal of Educational Psychology, 95, 752-761.

Grossen, B., & Carnine, D. (1993). Phonics instruction: Comparing research and practice. Teaching Exceptional Children, 25, 22-25.

Hanushek, E. A., Kain, J. F., & Rivkin, S. G. (1998). Does special education raise academic achievement for students with disabilities? (Working Paper No. 6690). Cambridge, MA: National Bureau of Economic Research.

Henry, M. K. (1989). Beyond phonics: Integrated decoding and spelling instruction based on word origin and structure. Annals of Dyslexia, 38, 258-275.

Henry, M. K. (1993). Morphological structure: Latin and Greek roots and affixes as upper grade code strategies. Reading and Writing: An Interdisciplinary Journal, 5, 227-241.

Hiebert, E. H. (2005). The effects of text difficulty on second graders’ fluency development. Reading Psychology, 26, 183-209.

Houghton Mifflin. (1999). Invitations to literacy. Boston: Author.

Jastak, S., & Wilkinson, G. S. (1984). Wide range achievement test-Revised. Wilmington, DE: Jastak.

Liberman, I. Y. (1982). A language-oriented view of reading and its disabilities. In H. Myklebust (Ed.), Progress in learning disabilities (Vol. 5, pp. 81-101). New York: Grune & Stratton.

Liberman, I. Y., Liberman, A. M., Mattingly, I. G., & Shankweiler, D. (1980). Orthography and the beginning reader. In J. Kavanagh & R. Venezky (Eds.), Orthography, reading, and dyslexia (pp. 137-154). Baltimore: University Park Press.

Lovett, M. W., & Steinbach, K. A. (1997). The effectiveness of remedial programs for reading disabled children of different ages: Does the benefit decrease for older children? Learning Disability Quarterly, 20, 189-210.

Mahony, D., Singson, M., & Mann, V. (2000). Reading ability and sensitivity to morphological relations. Reading and Writing: An Interdisciplinary Journal, 12, 191-218.

Nagy, W., & Anderson, R. (1984). The number of words in printed school English. Reading Research Quarterly, 19, 304-330.

Nagy, W., Berninger, V., Abbott, R., Vaughan, K., & Vermeulen, K. (2003). Relationships of morphology and other language skills to literacy skills in at-risk second-grade readers and at-risk fourth- grade writers. Journal of Educational Psychology, 95, 730-742.

No Child Left Behind Act of 2001, 20 U.S.C. 6301 et seq. (2002).

Nunes, T., Bryant, P., & Olsson, J. (2002). Learning morphological and phonological spelling rules: An intervention study. Scientific Studies of Reading, 7, 289-307.

Pany, D., & McCoy, K. M. (1988). Effects of corrective feedback on word accuracy and reading comprehension of readers with learning disabilities. Journal of Learning Disabilities, 21, 545-550.

Perfetti, C. A. (1985). Reading ability. New York: Oxford University Press.

Perfetti, C. A. (1992). The representation problem in reading acquisition. In P. B. Gough, L. C. Ehri, & R. Treiman (Eds.), Reading acquisition (pp. 145-174). Hillsdale, NJ: Erlbaum.

Perfetti, C. A. (2003). The universal grammar of reading. Scientific Studies of Reading, 7, 3-24.

Schreuder, R., & Baayen, R. H. (1995). Modeling morphological processing. In L. B. Feldman (Ed.), Morphological aspects of language processing (pp. 131-154). Hillsdale, NJ: Erlbaum.

Seidenberg, M. S. (1989). A distributed, developmental model of visual word recognition and naming. Psychological Review, 96, 523- 568.

Shankweiler, D., Crain, S., Katz, L., Fowler, A. E., Liberman, A. D., Brady, S. A., et al. (1995). Cognitive profiles of reading- disabled children: Comparisons of language skills in phonology, morphology, and syntax. Psychological Science, 6, 149-156.

Shaywitz, S. E. (1987). Multigrade inventory for teachers. New Haven, CT: Yale University School of Medicine.

Shefelbine, J. (1990). A syllabic-unit approach to teaching decoding of poly-syllabic words to fourth- and sixth-grade disabled readers. In J. Zutell & S. McCormick (Eds.), Literacy theory and research: Analysis from multiple paradigms (pp. 223-230). Chicago: National Reading Conference.

Simmons, D. C., Kame’enui, E. J., Stoolmiller, M., Coyne, M., & Harn, B. (2003). Accelerating growth and maintaining proficiency: A two-year intervention study of kindergarten and first-grade children at risk for reading difficulties. In B. R. Foorman (Ed.), Preventing and remediating reading difficulties: Bringing science to scale (pp. 197-228). Baltimore: York Press.

Singson, M., Mahony, D., & Mann, V. (2000). The relation between reading ability and morphological skills: Evidence from derivational suffixes. Reading and Writing: An Interdisciplinary Journal, 12, 219- 252.

SPSS. (1989-2004). SPSS 13.0 for Windows [Computer software]. Chicago, IL: Author.

Stahl, S., & Murray, B. (1998). Issues involved in defining phonological awareness and its relation to early reading. In J. L. Metsala & L. C. Ehri (Eds.), Word recognition in beginning literacy (pp. 65-88). Mahwah, NJ: Erlbaum.

Stanovich, K. E., & Cunningham, A. E. (1992). Studying the consequences of literacy within a literate society: The cognitive correlates of print exposure. Memory & Cognition, 20, 51-68.

Torgesen, J. K. (2000). Individual differences in response to early interventions in reading: The lingering problem of treatment resisters. Learning Disabilities Research & Practice, 15, 55-64.

Torgesen, J. K., Rashotte, C. A., & Alexander, A. W. (2001). Principles of fluency instruction in reading: Relationships with established empirical outcomes. In M. Wolf (Ed.), Dyslexia, fluency, and the brain (pp. 334-355). Timonium, MD: York Press.

Torgesen, J. K., Wagner, R. K., & Rashotte, C. A. (1999). Test of word reading efficiency. Austin, TX: PRO-ED.

Torgesen, J. K., Wagner, R. K., Rashotte, C. A., Rose, E., Lindamood, P., Conway, T., et al. (1999). Preventing reading failure in young children with phonological processing disabilities: Group and individual responses to instruction. Journal of Educational Psychology, 91, 579-593.

Tyler, A., & Nagy, W. (1989). The acquisition of English derivational morphology. Journal of Memory and Language, 28, 649- 667.

Vachon, V. L., & Gleason, M. M. (2001). The effects of mastery teaching and varying practice contexts on middle school students’ acquisition of multisyllabic word reading strategies. Unpublished manuscript. Retrieved August 11, 2006, from http://store.cambiumlearning.com/resources%5cresearch%5cpdf%5cmiddle_school_study1.pdf

Vadasy, P. F., Jenkins, J. R., & Pool, K. (2000). Effects of tutoring in phonological and early reading skills on students at risk for reading disabilities. Journal of Learning Disabilities, 33, 579-590.

Vadasy, P. F., Sanders, E. A., Jenkins, J. R., & Peyton, J. A. (2002). Timing and intensity of tutoring: A closer look at the conditions for effective early literacy tutoring. Learning Disabilities Research & Practice, 17, 227-241.

Vadasy, P. F., Sanders, E. A., & Peyton, J. A. (2005). Relative effectiveness of reading practice or word-level instruction in supplemental tutoring: How text matters. Journal of Learning Disabilities, 38, 364-380.

Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as inadequate response to instruction: The promise and potential problems. Learning Disabilities Research & Practice, 18, 137-146.

Venezky, R. L. (1999). The American way of spelling: The structure and origins of American English orthography. New York: Guilford Press.

Woodcock, R. (1998). Woodcock reading mastery test-Revised: Normative update. Circle Pines, MN: American Guidance Service.

Ziegler, J. C., & Goswami, U. (2005). Reading acquisition, developmental dyslexia, and skilled reading across languages: A psycholinguistic grain size theory. Psychological Bulletin, 131, 3- 29.

PATRICIA F. VADASY, PhD, is a senior researcher at Washington Research Institute. She currently conducts research in early reading instruction and supplemental interventions for students at risk for reading disabilities. ELIZABETH A. SANDERS, MEd, is a research analyst at Washington Research Institute. She is a doctoral student in measurement, statistics, and research design in educational psychology at the University of Washington. JULIA A. PEYTON, PhD, is director of program res



