English Learner Performance on the 2021-22 ACCESS, Star Reading, and Star Math Assessments
From the abstract: "The ACCESS assessment is administered to English Learners (ELs) annually each winter to measure their progress toward English proficiency. Star Computer Adaptive Tests (CATs) are a suite of assessments administered 3-4 times during the school year to students in grades K-12 that measure students’ reading and math skills, monitor achievement and growth, and track how well students understand skills aligned to state and Common Core standards. Although some ELs are excused from participation in Star, most take Star Reading and Math. This report examines EL performance on the ACCESS and EL performance on Star within the context of ACCESS performance." The Full Report is available online: <https://www.philasd.org/research/wp-content/uploads/sites/90/2023/08/English-Learner-Performance-2021-22-ACCESS-Star-Reading-and-Star-Math-August-2023.pdf>. The Full Addendum is available online: <https://www.philasd.org/research/wp-content/uploads/sites/90/2023/08/Addendum-English-Learner-Performance-2021-22-ACCESS-Star-Reading-and-Star-Math-August-2023.pdf>.
An independent evaluation of the diagnostic accuracy of a computer adaptive test (Star Math) to predict proficiency on an end of year high-stakes assessment
From the abstract: "Star Math (SM) is a popular computer adaptive test (CAT) schools use to screen students for academic risk. Despite its popularity, few independent investigations of its diagnostic accuracy have been conducted. We evaluated the diagnostic accuracy of SM based upon vendor provided cut-scores (25th and 40th percentiles nationally) in predicting proficiency on an end of year state test in a sample of highly achieving grade three (n = 210), four (n = 217), and five (n = 242) students. Specificity exceeded sensitivity across all grades and cut-scores. Acceptable levels of sensitivity and specificity were achieved in grade three and four but not grade five when using the 40th percentile." Citation: Turner, M. I., Van Norman, E. R., & Hojnoski, R. L. (2022). An independent evaluation of the diagnostic accuracy of a computer adaptive test to predict proficiency on an end of year high-stakes assessment. Journal of Psychoeducational Assessment, 40(7), 911-916.
Accelerated Reader: Understanding Reliability and Validity
Accelerated Reader is a progress-monitoring system that provides feedback on the comprehension of books and other materials that students have read. It also tracks student reading over time. Currently, more than 180,000 different Accelerated Reader quizzes have been developed and are in use. This report provides reliability and validity data for Accelerated Reader quizzes. The reliability analyses use a large database of nearly 1 million quiz records. Validity is established through correlations with scores from 24 standardized reading tests and through a study that confirms that the quizzes are effective at discriminating between instances of students having read the book versus not having read the book. The report also includes descriptions of the purpose and intended classroom use of Accelerated Reader, descriptions of the types of quizzes, and the processes for quiz development. The report is available online: <https://docs.renaissance.com/R35806>.
Pathway to Proficiency: Linking the Star Reading and Star Math Scales with Performance Levels on Pennsylvania's System of School Assessment (PSSA)
To develop Pathway to Proficiency reports for Pennsylvania Star Reading Enterprise and Star Math Enterprise schools on the Renaissance Place hosted platform, we linked our scaled scores with the scaled scores from Pennsylvania's achievement test. This technical report details the statistical method behind the process of linking Pennsylvania's state test (PSSA) and Star Reading and Star Math scaled scores. Sample Pathway to Proficiency and related reports are also included. The full report is available online: <https://docs.renaissance.com/R53794>.
Comparing computer adaptive and curriculum-based measurement methods for monitoring mathematics
This study compared a computer adaptive assessment (Star Math) and a curriculum-based measurement (AIMSweb) for progress monitoring in math. Star Math was found to have a significant positive relation to the outcome measure, the Pennsylvania System of School Assessment (PSSA), across all three grades (3rd, 4th, and 5th). Results suggest that Star Math is sensitive to students' mathematics growth and support the use of Star Math as a progress monitoring tool for mathematics. Citation: Shapiro, E. S., Dennis, M. S., & Fu, Q. (2015). Comparing computer adaptive and curriculum-based measures of math in progress monitoring. School Psychology Quarterly, 30(4), 470-487.
Sensitivity, Specificity, LR+, and LR-: What Are They and How Do You Compute Them?
This paper demonstrates how to calculate and use sensitivity, specificity, LR+, and LR- statistics with Star Reading to measure annual skill acquisition and predict performance on the Pennsylvania System of School Assessment (PSSA). These calculations can be used when making data-informed decisions about students. Citation: Edman, E. W., & Runge, T. J. (2014, September). Sensitivity, specificity, LR+, and LR-: What are they and how do you compute them? Indiana, PA: Indiana University of Pennsylvania. The full report is available online: <https://www.iup.edu/psychology/files/school-psych-files/personnel/runge/sensitivity-specificity-lr-and-lr-v2.pdf>.
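The four statistics this paper covers can all be computed from a 2x2 screening table that crosses screener outcome (flagged at risk or not) with the criterion outcome (proficient on the state test or not). A minimal sketch in Python; the cell counts below are hypothetical illustrations, not figures from the report:

```python
def diagnostic_stats(tp, fn, fp, tn):
    """Compute screening accuracy statistics from a 2x2 table.

    tp: flagged by screener AND not proficient on state test (true positive)
    fn: not flagged, but not proficient (false negative / missed student)
    fp: flagged, but proficient (false positive / over-identified student)
    tn: not flagged AND proficient (true negative)
    """
    sensitivity = tp / (tp + fn)              # P(flagged | not proficient)
    specificity = tn / (tn + fp)              # P(not flagged | proficient)
    lr_pos = sensitivity / (1 - specificity)  # how much a positive screen raises risk
    lr_neg = (1 - sensitivity) / specificity  # how much a negative screen lowers risk
    return sensitivity, specificity, lr_pos, lr_neg

# Hypothetical counts for illustration only
sens, spec, lrp, lrn = diagnostic_stats(tp=40, fn=10, fp=30, tn=120)
print(round(sens, 2), round(spec, 2), round(lrp, 2), round(lrn, 2))  # 0.8 0.8 4.0 0.25
```

An LR+ well above 1 (here 4.0) means a positive screen substantially increases the odds a student will not reach proficiency, while an LR- near 0 (here 0.25) means a negative screen meaningfully reduces those odds.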
Comparing Computer-Adaptive and Curriculum-Based Measurement Methods of Assessment
This peer-reviewed journal article reported the concurrent, predictive, and diagnostic accuracy of a computer-adaptive test (Star Math) and curriculum-based measurements (CBM; both computation and concepts/application measures) for universal screening in mathematics among students in first through fourth grade. Correlational analyses indicated moderate to strong relationships over time for each measure; correlations between Star Math and the CBM measures across the three assessment periods were low to moderate, with the strongest relationships between Star Math and the CBM concepts/application measure. Relationships to the state math assessment for third- and fourth-graders were stronger for Star Math than for either the CBM computation or concepts/application measures, with Star Math the only significant predictor of the state assessment. Diagnostic accuracy indices showed that all measures produced acceptable levels of specificity but limited levels of sensitivity. The study offered one of the first direct comparisons of Star Math and CBM measures in screening for mathematics. Implications of using Star Math and CBM measures in conducting screening in elementary mathematics were discussed. Citation: Shapiro, E. S., & Gebhardt, S. N. (2012). Comparing computer-adaptive and curriculum-based measurement methods of assessment. School Psychology Review, 41(3), 295-305. The full article is available online: <https://search.proquest.com/docview/1197769261>.
Guided Independent Reading: An Examination of the Reading Practice Database and the Scientific Research Supporting Guided Independent Reading as Implemented in Reading Renaissance
DETAILS: Location: 24 U.S. states; Design: Analysis of Reading Practice Database; Sample: 50,823 students in grades 1-12 at 139 schools; Measure: Star Reading; Duration: 1 school year. RESULTS: This study of Accelerated Reader indicated that increased time spent reading leads to gains in reading achievement for all students regardless of prior ability, but only when the reading is highly successful. Regression analysis revealed that the single most important factor influencing both time spent reading and average percent correct is a student's teacher. Students in 2nd- through 8th-grade Renaissance Model- and Master-certified classrooms consistently outperformed students in non-certified classrooms and low-implementing classrooms. Email research@renaissance.com to request a copy of the Full Report. Information about a newly updated version of the report is available online: <http://research.renaissance.com/research/474.asp>.
A Cost Analysis of Early Literacy, Reading, and Mathematics Assessments: Star, AIMSweb, DIBELS, and TPRI
DETAILS: Location: AL, TX, OK, KS, NV, NC, OH, and PA; Design: Independent, assessment research; Sample: Staff from 12 schools in 8 states; Measures: Direct costs, opportunity costs. RESULTS: Christensen Associates conducted a study to determine the true costs associated with widely used early literacy, reading, and mathematics assessments: Star Early Literacy, Star Reading, Star Math, Dynamic Indicators of Basic Early Literacy Skills (DIBELS), Wireless Generation mCLASS DIBELS, AIMSweb, and the Texas Primary Reading Inventory (TPRI). The researchers interviewed staff from 12 schools in 8 states to calculate the average costs of using the tests. Two types of costs were measured: direct costs (the price of testing materials, licensing fees, and/or fees for access to scoring and reporting services) and opportunity costs (time to administer, score, and report results; time that could be spent on instruction if testing were not taking place). The results showed that, in terms of both direct and opportunity costs, the computer-adaptive Star Early Literacy, Star Reading, and Star Math assessments are much more cost-effective than DIBELS and the other assessments, ranging from approximately one-half the cost of AIMSweb to about one-sixth the cost of the paper TPRI. AUTHOR: Laurits R. Christensen Associates. Email research@renaissance.com to request a copy of this study or summary from the Renaissance Research Department.
Independent Reading: The Relationship of Challenge, Non-Fiction and Gender to Achievement
DETAILS: Location: 24 U.S. states; Design: Independent, correlational, peer-reviewed; Sample: 45,670 students in grades 1-12 at 139 schools; Measure: Star Reading; Duration: 1 school year. RESULTS: To explore whether different balances of fiction/nonfiction reading and challenge might help explain differences in reading achievement between genders, data on students who independently read more than 3 million books were analyzed. Moderate (rather than high or low) levels of challenge were positively associated with achievement gain, but nonfiction was generally more challenging than fiction. Nonfiction reading was negatively correlated with successful comprehension and reading achievement gain. Overall, boys appeared to read less than girls, but proportionately more nonfiction. In the upper grades, boys also had lower reading achievement than girls. Differences between classes in promoting successful comprehension of nonfiction were evident, suggesting intervention could improve achievement. Implications for research and practice were explored. PLEASE NOTE: Email research@renaissance.com to request a copy of this peer-reviewed journal article: Topping, K. J., Samuels, J., & Paul, T. (2008). Independent reading: The relationship of challenge, non-fiction and gender to achievement. British Educational Research Journal, 34(4), 505-524.
Computerized Assessment of Independent Reading: Effects of Implementation Quality on Achievement Gain
DETAILS: Location: 24 U.S. states; Design: Independent, correlational, peer-reviewed; Sample: 50,823 students in grades 1-12 at 139 schools; Measure: Star Reading; Duration: 1 school year. RESULTS: This study elaborated on the "what works?" question by exploring the effects of variability in program implementation quality on achievement. Specifically, the effects of computerized assessment in reading on achievement were investigated, analyzing data on students who read more than 3 million books. When minimum implementation quality criteria were met, the positive effect of computerized assessment was higher in the earlier grades and for lower achieving students. Implementation quality tended to decline at higher grade levels. With higher implementation quality, reading achievement gains were higher for students of all levels of achievement and across all grades, but especially in the upper grades. Very high gains and effect sizes were evident with very high implementation quality, particularly in grades 1-4. Implications for practice, the interpretation of research, and policy were noted. PLEASE NOTE: Email research@renaissance.com to request a copy of this peer-reviewed journal article: Topping, K. J., Samuels, J., & Paul, T. (2007). Computerized assessment of independent reading: Effects of implementation quality on achievement gain. School Effectiveness and School Improvement, 18(2), 191-208.
Does Practice Make Perfect? Independent Reading Quantity, Quality and Student Achievement
DETAILS: Location: 24 U.S. states; Design: Independent, correlational, peer-reviewed; Sample: 45,670 students in grades 1-12 at 139 schools; Measure: Star Reading; Duration: 1 school year. RESULTS: Does reading practice make perfect? Or is reading achievement related to the quality of practice as well as the quantity? To answer these questions, data for students who read more than 3 million books were analyzed. Measures largely of quantity (engaged reading volume) and purely of quality (success in reading comprehension) showed a positive relationship with achievement gain at all levels of achievement. However, both high quantity and high quality in combination were necessary for high achievement gains, especially for older students. Both were weakly associated with student initial reading achievement, but more strongly associated with the class in which the student was enrolled, possibly suggesting the properties of teacher intervention in guiding independent reading were important. Implications for theory building, research, and practice were explored. PLEASE NOTE: Email research@renaissance.com to request a copy of this peer-reviewed journal article: Topping, K. J., Samuels, J., & Paul, T. (2007). Does practice make perfect? Independent reading quantity, quality and student achievement. Learning and Instruction, 17, 253-264.
Testing the Reading Renaissance Program Theory: A Multilevel Analysis of Student and Classroom Effects on Reading Achievement
DETAILS: Location: 24 U.S. states; Design: Independent, correlational; Sample: 50,823 students in grades 1-12 at 139 schools; Measure: Star Reading; Duration: 1 school year. RESULTS: This study is an independent evaluation of the data from Paul, 2003, available online: <http://research.renaissance.com/research/172.asp>. In the elementary grades, students in classrooms implementing Accelerated Reader with best practices showed statistically significant improvements in overall achievement level. In middle and high school, teachers who promoted a greater overall reading success rate were able to improve achievement results. Higher average percent correct on Accelerated Reader quizzes and reading at levels above the initial zone of proximal development (ZPD) were linked to better outcomes. Additionally, even after using rigorous statistical controls for students' initial reading skill levels, reading success rate, and challenge of reading material, the amount of text read was a key predictor of later literacy development. AUTHORS: Geoffrey D. Borman, PhD and N. Maritza Dowling, PhD. The Summary of this study is available online: <https://docs.renaissance.com/R34537>. The Full Report is also available online: <https://docs.renaissance.com/R40524>.