Implementation Fidelity in Computerised Assessment of Book Reading

From the abstract: "Measuring the implementation fidelity (IF) or integrity of interventions is extremely important, since without it a positive or negative outcome cannot be interpreted. However, IF is actually measured relatively rarely. Direct and indirect methods of measurement have been used in the past, but tend to over-emphasize teacher behaviour. This paper focuses on student behaviour collated through computers - an interesting alternative. It deals with the reading of real books and reading achievement, for which variables a very large amount of computerised data was available on 852,295 students in 3243 schools. Reading achievement was measured pre-post with STAR Reading, a computerised item-banked adaptive norm-referenced test of reading comprehension. IF came from the Accelerated Reader (AR), which measures understanding of independent reading of real books the student has chosen by a quiz. Results showed higher IF was related to higher achievement. Neither IF nor reading achievement related to socio-economic status. Primary (elementary) schools had higher IF and achievement than secondary (high) schools. Females had higher IF and achievement than males. Students of higher reading ability implemented AR at a higher level, but did not gain in reading at a higher level. However, this computerised method of measuring IF with book reading showed limited reliability, no greater than methods emphasising teacher behaviour. In future, IF measures emphasising student response and those emphasising teacher behaviour need to be blended, although the latter will never generate the sample size of the former. This may be true of implementation fidelity in areas other than book reading."

Citation: Topping, K. J. (2018). Implementation fidelity in computerised assessment of book reading. Computers & Education, 116, 176-190.

Applying Multidimensional Item Response Theory to Renaissance Assessments to Enhance Diagnostic Reporting Capacity

From the Executive Summary: "The goal of the Multidimensional Item Response Theory (MIRT) project for Renaissance is to increase the diagnostic reporting capacity for Renaissance assessments. The proposed methodology for meeting this goal was to evaluate the dimensionality of Renaissance Star items with respect to the learning progressions in math and reading. The learning progressions create testable hypotheses about the dimensional structure of the Star assessment items. This dimensionality was evaluated via MIRT models implied by the interrelated Skill Areas suggested by the learning progressions. Multidimensionality in the data was previously established by examining residual correlations from unidimensional Item Response Theory (IRT) calibrations of data. These residual correlations suggested that significant dimensionality was present in the data. After multidimensionality had been established, bi-factor MIRT models suggested by the Skill Area coding of items were fit to the data. These MIRT models showed significant improvement in model-data fit, thus validating the dimensionality hypotheses. Lastly, separate calibrations of items at different grade levels were linked to a common metric. This allows administration of items from across grade levels while still being able to produce scores on a constant scale. This report summarizes analyses for math and reading, gives insight into the meaning of Skill Areas, and demonstrates the added value of using MIRT for scaling."

Author: William P. Skorupski

The report is available online: <https://docs.renaissance.com/R61428>.
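For readers unfamiliar with the bi-factor approach described above, a generic two-parameter bi-factor MIRT item response function takes the following form. This is a common textbook specification, not necessarily the exact model fit in the report:

```latex
P\bigl(X_{ij}=1 \mid \theta_G,\ \theta_{s(j)}\bigr)
  = \frac{1}{1 + \exp\!\bigl[-\bigl(a_{jG}\,\theta_G + a_{js}\,\theta_{s(j)} - d_j\bigr)\bigr]}
```

Here \(\theta_G\) is the general (reading or math) ability, \(\theta_{s(j)}\) is the specific factor for the Skill Area to which item \(j\) is coded, \(a_{jG}\) and \(a_{js}\) are the general and specific discriminations, and \(d_j\) is the item intercept. A unidimensional IRT calibration is the special case with all \(a_{js}=0\), so improved model-data fit for the bi-factor model over the unidimensional one is evidence for the Skill Area dimensionality hypotheses.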

Going Deeper: The Role of Effective Practice in Encouraging Profound Learning

Going Deeper summarizes cognitive and educational research on the importance of practice in both reading and math, the role of practice in new college- and career-readiness standards, what effective practice looks like, and how educators and parents can get kids engaged. The full report is available online: <https://docs.renaissance.com/R54160>.

Special Report: Renaissance Mastery Model

The Renaissance mastery model automates the tracking and reporting of student data from a wide variety of sources and converts that data into a unified measure of mastery, helping teachers make timely, informed decisions about all students' learning. This paper explains the research supporting the model, how mastery is calculated, and what the confidence indicator is and how it works. The paper has been updated to explain Multidimensional Item Response Theory (MIRT) and the Open Growth Score (OGS). The full report is available online: <https://docs.renaissance.com/R60385>.

Relating Results from Renaissance Star Reading and Renaissance Star Maths to the Key Stage 2 Standardised Attainment Tests (SATs)

This 2017 UK study links Star Reading and Star Maths outcomes to Key Stage 2 Standardised Attainment Tests (SATs) results. The report is available online: <https://docs.renaissance.com/R61342>.

What works and what fails? Evidence from seven popular literacy 'catch-up' schemes for the transition to secondary school in England

From the abstract: "There are concerns that too many young people, from disadvantaged backgrounds, are moving into secondary education in the UK, and elsewhere, without the necessary literacy skills to make progress with the wider secondary school curriculum. A large number of interventions have been proposed to reduce this poverty gradient. This paper summarises the evidence from randomised controlled trials of seven popular interventions, giving a different comparative perspective to individual reports, and permitting more detail than a wider review. Of these, it shows that Switch-on Reading (Reading Recovery) and Accelerated Reader, for example, are currently the most promising. And that summer schools and the use of generic literacy software are the least successful and may even harm pupil progress. The way in which the evidence is assessed in this paper suggests a way forward for practitioners and policy-makers navigating the evidence in their areas of interest. There is also evidence that practitioners should be able to conduct robust evaluations of their own with only minimal support, which could lead to a revolution in school improvement. The combined results suggest that 'soft' evaluations may be worse than just a waste of time and money, and that theoretical explanations might appear satisfying to readers but are largely unnecessary when assessing 'what works' in education."

Purchasing information is available online: <http://dx.doi.org/10.1080/02671522.2016.1225811>.

Citation: Gorard, S., Siddiqui, N., & See, B. H. (2017). What works and what fails? Evidence from seven popular literacy 'catch-up' schemes for the transition to secondary school in England. Research Papers in Education, 32(5), 626-648.

Academic Growth Expectations for Students with Emotional and Behavior Disorders

From the abstract: "Computer adaptive assessments were used to monitor the academic status and growth of students with emotional behavior disorders (EBD) in reading (N = 321) and math (N = 322) in a regional service center serving 56 school districts. A cohort sequential model was used to compare that performance to the status and growth of a national user base of more than 7,500,000 students without disabilities. Consistent with numerous previous findings, status or level of performance of students with EBD was consistently low relative to their nondisabled peers. However, for the most part the students with significant EBD demonstrated rates of growth similar to the nationwide sample of nondisabled peers. There was considerable variability in the academic growth of students across grades and between treatment programs, and this variability is described and discussed. Implications for policy and practice in student progress monitoring and teacher evaluation systems are discussed." Citation: Ysseldyke, J., Scerra, C., Stickney, E., Beckler, A., Dituri, J., & Ellis, K. (2017). Academic growth expectations for students with emotional and behavior disorders. Psychology in the Schools, 54(8), 792-807.

Pathway to Proficiency: Linking Star Reading and Star Math to the Maine Educational Assessment (MEA)

To develop Pathway to Proficiency reports for Maine Star Reading and Star Math Enterprise schools, we linked our scaled scores with the scaled scores from the Maine achievement test. This technical report details the statistical method behind the process of linking Maine Educational Assessment (MEA) and Star Reading and Star Math scaled scores. The full report is available online: <https://docs.renaissance.com/R58548>.

Identifying students at risk: An examination of computer-adaptive measures (Star Reading) and latent class growth analysis

From the abstract: "Multitiered systems of support depend on screening technology to identify students at risk. The purpose of this study was to examine the use of a computer-adaptive test and latent class growth analysis (LCGA) to identify students at risk in reading with focus on the use of this methodology to characterize student performance in screening. Participants included 3,699 students in Grades 3-5. Three time points of administration (fall, winter, and spring) of the computer-adaptive reading measure were selected. LCGA results indicated 6-7 classes, depending on grade, informed by level and growth in student performance that significantly predicted failure on the statewide test administered at the end of the year. The lowest-performing classes had failure rates above 90% across all grades. The results indicate that identifying homogeneous groups of learners through LCGA may be valuable as an approach to determining students who need additional instruction. Practical implications and future directions are discussed."

Citation: Keller-Margulis, M., McQuillin, S. D., CastaƱeda, J. J., Ochs, S., & Jones, J. H. (2018). Identifying students at risk: An examination of computer-adaptive measures and latent class growth analysis. Journal of Applied School Psychology, 34(1), 18-35.
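As background on the method, latent class growth analysis assumes each student belongs to one of \(K\) latent trajectory classes, with class-specific growth parameters and no within-class variation in growth. A generic linear specification (the study's exact model may differ) is:

```latex
y_{it} \mid (C_i = k) \;=\; \beta_{0k} + \beta_{1k}\, t_{it} + \varepsilon_{it},
\qquad \varepsilon_{it} \sim N(0, \sigma^2),
\qquad P(C_i = k) = \pi_k
```

Here \(y_{it}\) is student \(i\)'s score at time point \(t\) (fall, winter, spring), \(\beta_{0k}\) and \(\beta_{1k}\) are the intercept (level) and slope (growth) for class \(k\), and \(\pi_k\) is the class membership probability. The estimated class memberships \(C_i\) are what the authors use to predict failure on the end-of-year statewide test.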

Sociocultural early literacy practices in the school and home context: The role of a digital library (myON)

The author explores how myON can help support literacy learning both at home and at school, and the factors that contribute to variations in the amount of use.

Citation: O'Conner, W. L. (2017). Sociocultural early literacy practices in the school and home context: The role of a digital library (Unpublished doctoral dissertation). University of California, San Diego, CA.

The full report is available online: <https://escholarship.org/uc/item/1j95c5k2>.

Developing psychometrically sound decision rules for Star Math

This manuscript on developing psychometrically sound decision rules for Star Math has been submitted for publication.

Math Technology: Assessing the Educational Value of a Supplemental Practice Program

From the abstract: "The purpose of this action research project was to determine if the use of a supplemental computerized math practice program increases student achievement in the fractions domain for fifth graders. Two fifth grade math classes received the same classroom instruction. However, one class also utilized a computerized program three times per week for 20-30 minutes over a six-week period. Data was collected through quantitative pre- and post-assessments, quantitative survey questions, and qualitative student responses. Analysis of the data collected suggests that using the supplemental computerized math program, Freckle Math (formerly Front Row Education), in addition to regular classroom instruction, increases student achievement in the fractions domain at the fifth grade level."

Citation: Tesch, S. (2017). Math technology: Assessing the education value of a supplemental practice program (Master's thesis). Northwestern College, Orange City, Iowa.

The research study is available online: <https://nwcommons.nwciowa.edu/education_masters/19/>.

Progress Monitoring with Computer Adaptive Assessments: The Impact of Data Collection Schedule on Growth Estimates

From the abstract: "Although extensive research exists on the use of curriculum-based measures for progress monitoring, little is known about using computer adaptive tests (CATs) for progress-monitoring purposes. The purpose of this study was to evaluate the impact of the frequency of data collection on individual and group growth estimates using a CAT. Data were available for 278 fourth- and fifth-grade students. Growth estimates were obtained when five, three, and two data collections were available across 18 weeks. Data were analyzed by grade to evaluate any observed differences in growth. Further, root mean square error values were obtained to evaluate differences in individual student growth estimates across data collection schedules. Group-level estimates of growth did not differ across data collection schedules; however, growth estimates for individual students varied across the different schedules of data collection. Implications for using CATs to monitor student progress at the individual or group level are discussed."

Citation: Nelson, P. M., Van Norman, E. R., Klingbeil, D. A., & Parker, D. C. (2017). Progress monitoring with computer adaptive assessments: The impact of data collection schedule on growth estimates. Psychology in the Schools, 54(5), 463-471.

More information about this article is available online: <http://onlinelibrary.wiley.com/doi/10.1002/pits.22015/abstract>.
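The root mean square error comparison described in the abstract can be written as follows. This is a plausible formulation under the assumption that the densest (five-point) schedule serves as the reference; the authors' exact computation may differ:

```latex
\mathrm{RMSE}_s \;=\; \sqrt{\frac{1}{N}\sum_{i=1}^{N}
  \bigl(b_i^{(s)} - b_i^{(\mathrm{ref})}\bigr)^2}
```

Here \(b_i^{(s)}\) is student \(i\)'s estimated growth slope under data collection schedule \(s\) (two or three collections), \(b_i^{(\mathrm{ref})}\) is the slope estimated from the reference schedule, and \(N\) is the number of students. A small \(\mathrm{RMSE}_s\) would indicate that the sparser schedule recovers individual growth estimates nearly as well as more frequent data collection.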