I had the pleasure of attending the Fusion 2015 Conference in Chicago, IL, on July 7th–9th. The conference increased my understanding of the data that MAP testing yields. My most informative session was led by Dr. Nate Jensen, a research scientist for NWEA. The session was entitled “RIT 101.” Overall, I learned how students are assigned RIT scores, and it changed my perspective on how my students performed on the assessment this past testing season. Here are some important points that I took away:

  • Students and test items are on the same scale, the RIT scale.
    • Test items are given a RIT score based on Item Response Theory. NWEA embeds items on assessments that it calls “test items.” These items do not count toward or against students’ scores; they are on an assessment only to determine the RIT score they should be assigned. NWEA finds the average RIT score of the students who answer an item correctly 50% of the time. For example, if students with an average score of 230 answer a test item with 50% accuracy, then that item is given a RIT score of 230. Scoring the item at 230 tells teachers that a student with a score of 230 has about a 50% chance of getting the item correct; students with higher scores are more likely to get it correct, and students with lower scores are less likely to (see the first sketch after this list).
    • A student’s RIT score does not tell you how many questions the student answered correctly. Regardless of the RIT score, students get about 50% of the questions correct. The RIT score tells you the difficulty of the questions a student can handle. Remember, students’ scores and test items are on the same scale, which lets teachers know what students are ready to learn.
  • Score calculation is based on maximum likelihood estimation, which attempts to answer the question, “What is the most likely RIT score for a student, given how he performed on the items he received?” (The second sketch after this list shows the idea.)
    • Students are scored using the Rasch model (RIT stands for Rasch Unit), which takes into account:
      • the difficulty of items that a student receives
      • how the student performed on those items
      • The score does not tell you how many items a student answered correctly. Remember, everyone answers about 50% of the questions correctly.
      • A RIT score of 233 by itself is not interpretable. It only tells you the difficulty level at which a student can get about 50% of the items correct; it has to be interpreted relative to the content being taught and what is being assessed on the test.
    • Look at the Achievement Status and Growth (ASG) Summary Report to start interpreting the data.
      • Growth Index = RIT Growth – Projected Growth (a quick worked example follows this list)
      • Don’t average growth index scores. This will lead to a misunderstanding of the data.
    • There will be new metrics for SY16.
      • Reports will be exportable to Excel! (This is great!)
      • Comparison data will be implemented.
        • Conditional Growth Index
        • Conditional Growth Percentile
        • Growth Projection will account for the amount of instructional time between the current and previous test (e.g., the months/days between the Fall and Winter tests).
    • If you want your students’ scores to continue to increase, then use the Learning Continuum.
    • If you have students with a decline in RIT scores, then look at the following data (the last sketch after this list pulls these checks together):
      • Standard Error – should not be more than 3. Look at the RIT range to determine the standard error. If it is greater than 3, then the student likely experienced “testing burnout.”
      • Test Duration – should be 45–55 minutes. If a student took less than 45 minutes, then that’s likely the problem: the student clicked through the test.
      • Percent of Items Correct – should be about 50%. I’m not sure how teachers can get this data; I haven’t seen it on a report.
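To make some of this concrete, here are a few quick sketches. They are my own illustrations in Python, not anything NWEA provides, and the specific numbers and conversions in them are assumptions, so treat them as back-of-the-envelope only.

First, the “50% chance” idea from the Rasch model. The conversion of roughly 10 RIT points per logit is my assumption about how the RIT scale is usually described, not something covered in the session.

```python
import math

def p_correct(student_rit, item_rit, rit_per_logit=10.0):
    """Rasch model: chance a student answers an item correctly.

    Assumes the RIT scale is a rescaled logit scale at roughly
    10 RIT points per logit (my assumption, for illustration only).
    """
    logit_gap = (student_rit - item_rit) / rit_per_logit
    return 1.0 / (1.0 + math.exp(-logit_gap))

# A student at 230 sees an item calibrated at 230: about a 50% chance.
print(round(p_correct(230, 230), 2))   # 0.5
# A stronger student is more likely to get the same item right...
print(round(p_correct(245, 230), 2))   # ~0.82
# ...and a student below 230 is less likely.
print(round(p_correct(215, 230), 2))   # ~0.18
```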
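Second, the maximum likelihood idea: given the difficulty of the items a student saw and which ones he got right, find the RIT score that makes that pattern of responses most likely. The grid search and the made-up item list below are just the simplest way to show the idea, not how NWEA actually computes scores.

```python
import math

def p_correct(student_rit, item_rit):
    # Same Rasch curve as above, assuming ~10 RIT points per logit.
    return 1.0 / (1.0 + math.exp(-(student_rit - item_rit) / 10.0))

def log_likelihood(candidate_rit, item_rits, responses):
    # How likely this right/wrong pattern would be if the student's
    # true score were candidate_rit.
    total = 0.0
    for item_rit, correct in zip(item_rits, responses):
        p = p_correct(candidate_rit, item_rit)
        total += math.log(p if correct else 1.0 - p)
    return total

def estimate_rit(item_rits, responses):
    # Maximum likelihood estimate via a simple grid search, 150.0 to 300.0.
    candidates = [r / 10.0 for r in range(1500, 3001)]
    return max(candidates, key=lambda rit: log_likelihood(rit, item_rits, responses))

# Made-up example: the student gets the easier items right and the
# harder items wrong, answering 4 of 8 (about 50%) correctly.
item_rits = [210, 215, 220, 225, 230, 235, 240, 245]
responses = [True, True, True, False, True, False, False, False]
print(estimate_rit(item_rits, responses))   # 227.5, the difficulty level
                                            # where this student sits at ~50%
```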
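Third, the Growth Index line is just subtraction, but a tiny worked example may help. The student names, scores, and projections below are made up.

```python
# Growth Index = RIT Growth - Projected Growth (all numbers invented for illustration).
students = [
    # (name, fall RIT, spring RIT, projected growth)
    ("Student A", 205, 214, 7),
    ("Student B", 230, 233, 5),
]

for name, fall, spring, projected in students:
    rit_growth = spring - fall
    growth_index = rit_growth - projected
    print(f"{name}: grew {rit_growth}, projected {projected}, index {growth_index:+d}")

# Student A: grew 9, projected 7, index +2
# Student B: grew 3, projected 5, index -2
```

Notice that averaging the two indexes here gives zero, which would hide the fact that one student beat the projection and the other fell short; I take that to be the kind of misunderstanding the presenters warned about.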
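Finally, a sketch that pulls the three “declining score” checks into one place. The thresholds are the rules of thumb from the session, the 10-point band around 50% correct is my own cutoff, and the field names are mine; the actual reports may label these values differently.

```python
def check_declining_score(standard_error, duration_minutes, percent_correct):
    """Flag reasons a test event might not reflect what a student knows.

    Thresholds come from the session's rules of thumb; the names and the
    10-point band around 50% correct are my own assumptions.
    """
    flags = []
    if standard_error > 3:
        flags.append("standard error above 3: possible testing burnout")
    if duration_minutes < 45:
        flags.append("finished in under 45 minutes: may have clicked through")
    if abs(percent_correct - 50) > 10:
        flags.append("percent correct far from 50%: worth a closer look")
    return flags

# Example: a hypothetical student whose RIT score dropped this season.
print(check_declining_score(standard_error=3.4, duration_minutes=32, percent_correct=38))
```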

I hope this is as useful for you as it is for me. Please share if you have any information related to this post.