Reading Progress Indicator FAQ
This topic provides answers to some frequently asked questions about using the Reading Progress Indicator assessments. If you still have questions or concerns, or would like more information, see About Reading Progress Indicator (RPI) or contact Carnegie Learning Customer Support.
What is Reading Progress Indicator?
Reading Progress Indicator (RPI) is an online assessment that rapidly measures the effects of the Fast ForWord family of components by evaluating reading performance as students progress from component to component. Scientific Learning partnered with a third party to develop this research-based and psychometrically sound evaluation tool.
When using assessments to evaluate students, it's important to note that assessment scores only represent a partial snapshot of a student's performance at a given time, which can be influenced by the student's psychological or physical state at the time of testing. As such, assessment tools can be subject to errors. Scientific Learning recommends using both a pattern of test performance over time and multiple assessment tools to provide a more accurate measure of a student's true performance.
Additionally, please note that an assessment is considered to be at or below chance if the student scored below 30% on the assessment. Assessment scores below 30% correct cannot be distinguished from random guessing, and therefore are not a good basis for a valid evaluation. When reviewing RPI scores for your students, consider whether unexpectedly low scores seem reasonable for those students. If not, you may choose to void the RPI assessment and have the student complete a new assessment.
mySciLEARN provides detailed RPI scoring results in the Reading Progress Indicator reports. To learn more see Reports.
What reading skills does RPI measure?
RPI measures four skills: phonological awareness, decoding, vocabulary, and comprehension. These skills are defined as follows:
- Phonological awareness – The ability to discriminate, identify, and manipulate the sounds in words; for example, recognizing words that rhyme, or removing the first sound of cat to make the word at.
- Decoding – The ability to decipher written words. Decoding requires phonological awareness as well as an understanding of sound-symbol correspondences.
- Vocabulary – The ability to understand a word's meaning and syntactical role, and to recognize the individual morphemes in a word. Morphemes are the smallest units of meaning in a language and include roots, prefixes, and suffixes.
- Comprehension – The ability to read text (or listen to spoken language) and actively construct meaning so as to understand the author’s message.
How does Reading Progress Indicator correlate to known reading assessments?
Numerous studies have demonstrated the positive correlation of Reading Progress Indicator (RPI) to nationally normed reading assessments and high-stakes reading tests from various states. Across 23 studies, including data from more than 13,000 students, correlation coefficients ranged from 0.48 (moderate) to 0.88 (strong), with an average of 0.69. Collectively, this body of research has established that RPI has a high level of concurrent validity as a reading assessment. Moderate to strong correlations have been found between RPI and the following assessments. To learn more see our whitepaper Overview of RPI Correlation Studies.
Standardized reading assessments
- AIMSweb – Curriculum Based Measure: Reading (CBM-R)
- AIMSweb – MAZE (MAZE)
- Dynamic Indicators of Basic Early Literacy Skills – Oral Reading Fluency (DIBELS-ORF)
- Developmental Reading Assessment (DRA)
- Gates-MacGinitie Reading Tests (GMRT)
- Group Reading Assessment and Diagnostic Evaluation (GRADE)
- NWEA: Measures of Academic Progress (MAP)
- Scholastic Reading Inventory (SRI)
- Renaissance Learning’s STAR (STAR)
- Woodcock-Johnson, Third Edition (WJ-III)
State assessments
- Arizona’s Instrument to Measure Standards (AIMS)
- Florida Comprehensive Assessment Test 2.0 (FCAT 2.0)
- Indiana Reading Evaluation and Determination-3 (IREAD-3)
- Indiana Statewide Testing for Educational Progress Plus (ISTEP+)
- Iowa Tests of Basic Skills (ITBS)
- Massachusetts Comprehensive Assessment System (MCAS)
- New York State Testing Program (NYSTP)
- Nevada Criterion Referenced Test (CRT)
- End-of-Grade (EOG; North Carolina)
- Ohio Achievement Assessments (OAA)
- Pennsylvania System of School Assessment (PSSA)
- Dakota State Test of Educational Progress (Dakota STEP; South Dakota)
- State of Texas Assessments of Academic Readiness (STAAR)
While the above studies focused on reading assessments, another study found a moderate correlation (0.57) between RPI and overall performance on the LAS Links English language proficiency assessment.
Can RPI replace any clinical, district, or state assessments?
No. RPI is not designed to replace any other clinical, district, or state assessment.
Can RPI diagnose specific language or reading disabilities?
No. RPI is not designed to be a diagnostic tool.
What if the Individualized Education Program (IEP) calls for accommodations or modifications while taking tests?
If a student's IEP calls for specific accommodations or modifications during testing, they can be implemented for that student as necessary. The same accommodations should be in place during each assessment so that RPI can accurately measure the benefits of the Fast ForWord components.
Are the RPI assessments available for iPad?
Yes, students can take RPI assessments on an iPad.
How frequently are RPI assessments presented?
Reading Progress Indicator presents assessments automatically, based on component usage. To learn more see How RPI administers assessments. Also, schools can choose to let instructors manually administer RPI assessments, as needed. See About Manual RPI.
What’s the difference between “Auto” and “Manual” RPI?
The term Auto RPI refers to the entire Reading Progress Indicator feature, which “automatically” administers assessments based on component usage. See About Auto RPI. This descriptor was added to the mySciLEARN software and the Reading Progress Indicator reports to help differentiate these assessments from manually assigned assessments.
Manual RPI is a feature within Reading Progress Indicator that lets instructors manually administer additional RPI assessments to students who are already using Reading Progress Indicator. See About Manual RPI to learn more.
As far as the actual assessments are concerned (the forms, questions, and scoring), there is no difference between Auto RPI and Manual RPI assessments.
What are RPI scores?
For each assessment, Reading Progress Indicator provides national percentile scores and grade equivalent scores, along with the percent correct in each reading skill area. Gain scores are available for students who have taken at least one follow-up assessment. A student’s overall gain score reflects improvement from the initial assessment to the latest follow-up assessment.
The scores in Reading Progress Indicator are based on the results of a calibration study in which Reading Progress Indicator was administered to a large, nationally representative sample of students. This sample was selected to include students of different ethnicities, and students from all regions of the United States. Fast ForWord product use was not considered in the selection process. Normalized scores were developed based on the performance of the students in this study.
What are grade equivalent scores?
Grade equivalent scores provide a general idea of how a student is performing with reference to younger and older students who took the same test. For example, a very advanced third grader might earn a grade equivalent score of 5.3 on the level 2-3 RPI assessment. This score means that the student performed as well as an average fifth grader who took the same test three months into the school year. A third grader who earns a score of 5.3 is performing well above average on third grade level material (at the 83rd percentile, to be precise), yet this does not mean the student is ready for fifth grade level material.
The Reading Progress Indicator grade equivalents were developed by a professional psychometrician based on data from the RPI norming study.
What are national percentile scores?
National percentile scores allow you to compare one student's score to the scores of a large national sample. For example, if a student scored in the 70th percentile on an assessment, that student performed better than 70% of the students in his or her grade who took the same assessment as part of the calibration study. In contrast, percent correct scores indicate the proportion of questions that were answered correctly. For example, if a student scored 40% correct on a set of 10 questions, it means that the student got 4 correct answers.
Reading Progress Indicator provides a national percentile score for the student’s performance on the whole test, along with percent correct scores for the sets of questions on the four skill areas: phonological awareness, decoding, vocabulary, and comprehension. In addition, percentile scores for Reading Progress Indicator are based on the middle of the year and do not vary with the season. For example, for students in second grade, the 50th percentile corresponds approximately to a grade equivalent of 2.5. A second grader scoring at grade level earlier in the year will have a percentile score below 50, whereas a second grader scoring at grade level later in the year will have a percentile score above 50.
Because normative data is not available for students beyond tenth grade, the national percentile scores for students in grades 11, 12, or 13+ are calculated based on tenth grade norms. Note that percentiles in Reading Progress Indicator range from 1 to 99.
What are gain scores?
Gain scores reflect improvement from an earlier assessment to a later assessment. A student’s overall gain score shows improvement from the initial assessment to the latest follow-up assessment. Gain scores are reported in terms of grade equivalent scores and percentile scores. Grade equivalent scores are based on a ten-month academic school year, so a student who earned an initial assessment score of 2.2 and a follow-up assessment score of 3.4 would have a gain score of 1.2 (that is, one year and two months).
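Because grade equivalents use a ten-month school year (the digit after the decimal point is a month from 0 to 9), the gain can be found by simple decimal subtraction. A minimal sketch of the arithmetic described above (the function name is illustrative, not part of mySciLEARN):

```python
def grade_equivalent_gain(initial, follow_up):
    """Gain between two grade equivalent scores.

    Grade equivalents are written year.month, where the digit after
    the decimal point is a month (0-9) of a ten-month school year,
    so plain decimal subtraction yields the gain in years and months.
    """
    return round(follow_up - initial, 1)

print(grade_equivalent_gain(2.2, 3.4))  # 1.2, i.e. one year and two months
```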
What are reading proficiency levels?
Reading proficiency levels are categories of achievement that describe student performance in the skills measured by Reading Progress Indicator. There are four levels: struggling, emerging, proficient, and advanced. These proficiency levels were established by aligning results from the Reading Progress Indicator calibration study with information from various states regarding the percentage of students that achieve proficiency on high-stakes assessments. In some reports, the levels are also color coded. The levels are defined as follows.
- Struggling – Indicates minimal success with the fundamental skills assessed by Reading Progress Indicator (students at the 1st to 29th percentile).
- Emerging – Indicates a partial mastery of the skills assessed by Reading Progress Indicator (students at the 30th to 54th percentile).
- Proficient – Indicates a solid understanding of the skills assessed by Reading Progress Indicator (students at the 55th to 79th percentile).
- Advanced – Indicates a superior performance demonstrating excellent understanding of the skills assessed by Reading Progress Indicator (students at the 80th to 99th percentile).
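The percentile bands above amount to a simple lookup. A minimal sketch (the function is illustrative; mySciLEARN performs this mapping internally):

```python
def proficiency_level(percentile):
    """Map a national percentile (1-99) to an RPI reading proficiency
    level, using the percentile bands described above."""
    if not 1 <= percentile <= 99:
        raise ValueError("RPI percentiles range from 1 to 99")
    if percentile <= 29:
        return "Struggling"
    if percentile <= 54:
        return "Emerging"
    if percentile <= 79:
        return "Proficient"
    return "Advanced"

print(proficiency_level(42))  # Emerging
```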
Please note that percentiles in Reading Progress Indicator range from 1 to 99.
How are the average scores for a group, school, or district calculated?
Neither grade equivalent scores nor percentiles can be mathematically averaged. To report average grade equivalent gains, mySciLEARN averages the students' scaled scores on the initial assessment and the latest follow-up assessment, references the grade equivalent scores corresponding to those averages, and then uses those grade equivalent scores to calculate the average gain score. To report average percentile gains, mySciLEARN averages the students’ normal curve equivalent scores on the initial assessment and the latest follow-up assessment, references the percentile scores corresponding to those averages, and then uses those percentiles to calculate the average gain score.
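The grade-equivalent half of this procedure can be sketched as follows. The conversion function below is a made-up stand-in; the real scaled-score-to-grade-equivalent tables come from the RPI norming study:

```python
from statistics import mean

def scaled_to_grade_equivalent(scaled_score):
    # Hypothetical stand-in for the psychometric conversion table.
    return round(scaled_score / 100, 1)

def average_grade_equivalent_gain(initial_scaled, followup_scaled):
    """Average the students' scaled scores on each assessment first,
    convert each average to a grade equivalent, then take the
    difference, as described above. Averaging the grade equivalents
    directly would not be valid."""
    ge_initial = scaled_to_grade_equivalent(mean(initial_scaled))
    ge_followup = scaled_to_grade_equivalent(mean(followup_scaled))
    return round(ge_followup - ge_initial, 1)

# Made-up scaled scores for three students:
print(average_grade_equivalent_gain([220, 240, 260], [320, 330, 370]))  # 1.0
```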
Some of my students scored at grade level on the initial assessment. Should they use the Fast ForWord product?
The Fast ForWord product has been shown to benefit students who are performing at grade level as well as those who are performing above and below grade level.
One of my students did not show significant gains on the Reading Progress Indicator reports. Is the Fast ForWord product working?
A given test score is just a partial snapshot of a student's ability at a given time. Performance can be influenced by a student's physical or psychological state at the time of the test, and may not always reflect the student’s best effort. To determine whether a student is benefiting from Fast ForWord use, conduct progress monitoring using a variety of assessments and look for signs of progress in the student's classroom work and behavior.
My school's average score only improved from the 3rd percentile to the 6th percentile, but I see improvements in the classroom. What's going on?
Percentile rank is not an equal-interval scale; that is, the difference between any two scores is not the same as the difference between any other two scores. Scores become increasingly spread out as they move further from the 50th percentile in either direction, so that gains on the far ends of the spectrum are more difficult to achieve. For example, a student whose percentile score went from 2 to 12 has made a greater improvement in performance than a student whose score went from 50 to 60, even though both students had a gain score of 10 percentile points. Because their scores are at the extreme end of the spectrum, students whose percentile scores improve from 3 to 6 may indeed be making gains that translate into improved classroom performance.
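One standard way to see this is to convert percentile ranks to normal curve equivalents (NCEs), an equal-interval scale defined by the textbook formula NCE = 50 + 21.06 × z, where z is the standard normal deviate of the percentile. The sketch below uses that general formula; mySciLEARN's internal conversion tables may differ slightly:

```python
from statistics import NormalDist

def percentile_to_nce(percentile):
    """Convert a percentile rank (1-99) to a normal curve equivalent,
    an equal-interval scale with mean 50 and standard deviation 21.06."""
    z = NormalDist().inv_cdf(percentile / 100)
    return 50 + 21.06 * z

# Ten percentile points near the bottom of the scale span far more of
# the underlying ability distribution than ten points near the middle:
print(round(percentile_to_nce(12) - percentile_to_nce(2), 1))   # about 18.5
print(round(percentile_to_nce(60) - percentile_to_nce(50), 1))  # about 5.3
```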