Archives for Corina Owens, Ph.D.
January 7, 2016 | Published by Corina Owens, Ph.D.
As testing professionals, we often communicate statistical information about examination items to stakeholders in the examination process. This information spans a range of topics, from item difficulty and discrimination to simple multiple-choice option frequencies. While these statistical properties are clear to those of us in the field of certification and licensure, they can be less clear to stakeholders who encounter them only at an exam committee meeting or a board meeting. It is our responsibility as testing professionals to demystify these numbers and lead a meaningful discussion of the relevance and importance of the data examinations yield.
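To make two of the statistics mentioned above concrete, here is a minimal sketch of how item difficulty (the classical p-value) and item discrimination (a point-biserial correlation with the rest-of-test score) might be computed from a small 0/1-scored response matrix. The response data and function names are illustrative assumptions, not values from any actual examination.

```python
# Illustrative item statistics for a dichotomously scored exam.
# Rows = candidates, columns = items; 1 = correct, 0 = incorrect.
from statistics import mean, pstdev

def item_difficulty(responses, item):
    """Proportion of candidates answering the item correctly (p-value)."""
    return mean(row[item] for row in responses)

def item_discrimination(responses, item):
    """Point-biserial correlation between the item and the rest-of-test score."""
    scores = [row[item] for row in responses]
    rest = [sum(row) - row[item] for row in responses]  # score excluding this item
    n = len(scores)
    m_s, m_r = mean(scores), mean(rest)
    cov = sum((s - m_s) * (r - m_r) for s, r in zip(scores, rest)) / n
    sd_s, sd_r = pstdev(scores), pstdev(rest)
    return cov / (sd_s * sd_r) if sd_s and sd_r else 0.0

# Made-up responses from five candidates on four items.
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
]
print(item_difficulty(responses, 0))                  # share answering item 1 correctly
print(round(item_discrimination(responses, 0), 3))    # how well item 1 separates candidates
```

A higher p-value means an easier item; a positive discrimination means candidates who did well on the rest of the form tended to get the item right, which is what an exam committee would hope to see.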
August 26, 2015 | Published by Corina Owens, Ph.D.
Examinations can be split into two types: norm-referenced and criterion-referenced. Norm-referenced examinations compare each examinee’s score to a normative sample, while criterion-referenced exams compare each examinee’s score to an established standard. Let’s think of each type of exam in terms of running a race. Typically, people who participate in a race are evaluated against the performance of everyone else in the race. This process is similar to how examinees are evaluated on a norm-referenced examination: whoever crosses the finish line first is considered the winner, regardless of the time at which they cross.
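The contrast above can be sketched in a few lines of code. This is a minimal illustration, with made-up scores and a made-up cut score: the norm-referenced interpretation ranks a candidate against the group, while the criterion-referenced interpretation compares the candidate only to the fixed standard.

```python
# Two interpretations of the same score, per the distinction described above.

def norm_referenced_rank(score, norm_group):
    """Percentile-style rank: share of the norm group scoring below this candidate."""
    return sum(s < score for s in norm_group) / len(norm_group)

def criterion_referenced_decision(score, cut_score):
    """Pass/fail against a fixed standard, independent of other candidates."""
    return "pass" if score >= cut_score else "fail"

norm_group = [55, 60, 62, 70, 75, 80, 88, 90]  # illustrative normative sample

print(norm_referenced_rank(75, norm_group))     # depends on who else took the exam
print(criterion_referenced_decision(75, 70))    # depends only on the standard
```

Note that in the norm-referenced view a score of 75 could look strong or weak depending on the group, whereas the criterion-referenced decision never changes unless the cut score does.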
July 1, 2015 | Published by Corina Owens, Ph.D.
The Standards for Educational and Psychological Testing (AERA, APA, & NCME, 2014) (the Standards) was recently revised and updated. As credentialing organizations begin to work out what the revised Standards mean for them, it is worth pointing out that most of the workplace testing and credentialing chapter remained the same. However, the statements about the need for reliable subscore reporting are particularly important, as is the guidance on the design of examination score reports.
April 8, 2015 | Published by Corina Owens, Ph.D.
A candidate’s observed score can be broken down into two components: a true score and an error component. The error component can be further split into two types: random and systematic. Random errors of measurement affect candidates’ scores purely by chance, such as the temperature of the testing room, the candidate’s anxiety level, or misreading a question. Systematic errors of measurement, by contrast, are factors that consistently affect a candidate’s scores. For example, when a candidate’s math skill is measured through word problems, the candidate’s reading level could affect the scores. If the same math test were administered to the same candidate over and over again under the same conditions, this error would persist.
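The decomposition above can be illustrated with a small simulation. Every number here (the true score of 80, a reading-related penalty of 5 points, a random-error spread of 3 points) is an illustrative assumption, not a value from the post: the point is that repeated administrations average away the random component while the systematic component remains.

```python
# Simulating observed score = true score + systematic error + random error.
import random

random.seed(1)

TRUE_SCORE = 80          # the candidate's (unknowable) true math score - assumed
READING_PENALTY = -5     # systematic error: reading load depresses every attempt - assumed

def administer_once():
    random_error = random.gauss(0, 3)   # chance factors: room, anxiety, misreading
    return TRUE_SCORE + READING_PENALTY + random_error

# Administer the same test to the same candidate many times under the same conditions.
observations = [administer_once() for _ in range(1000)]
avg = sum(observations) / len(observations)

# The random errors cancel out over repeated administrations, but the mean
# settles near 75 rather than 80: the systematic error persists.
print(round(avg, 1))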