Certificate test - A certificate test, like certification and licensure tests, is a criterion-referenced test (CRT). However, it is typically a low-stakes exam included as a component of a training program.
Certification program - This may refer to a few or to all components associated with awarding a certification. That is, it may refer to the certification examination alone or to the full set of activities related to awarding and maintaining the credential. These activities include eligibility requirements, examination, recertification, disciplinary procedures, and governance and policies.
Certification test - A certification test is typically a voluntary exam program designed as a criterion-referenced test (CRT) that measures professional competence, and is sponsored by a non-governmental agency. This type of test may be targeted to measure entry-level professional skills, specialty skills within the profession, or advanced skills in the profession.
Classical test theory - This is the traditional approach to measurement, which concentrates on the development of quality test forms. Classical test theory is used in item analysis, through statistics such as the p-value and point-biserial correlation. It is also used to assemble test forms according to statistical criteria, and to evaluate the quality of those forms through reliability and validity analyses.
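As a small sketch of these two item statistics (the response data here are invented for illustration), the p-value and point-biserial correlation of a 0/1-scored item can be computed directly:

```python
from statistics import mean, pstdev

# Hypothetical scored responses: rows are examinees, columns are items (1 = correct).
responses = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
    [1, 1, 0, 0],
]

def item_p_value(item):
    # Item difficulty: the proportion of examinees answering the item correctly.
    return mean(row[item] for row in responses)

def point_biserial(item):
    # Correlation between the 0/1 item score and the total test score.
    scores = [row[item] for row in responses]
    totals = [sum(row) for row in responses]
    p = mean(scores)  # proportion correct
    q = 1 - p
    m1 = mean(t for t, x in zip(totals, scores) if x == 1)  # mean total, correct group
    m0 = mean(t for t, x in zip(totals, scores) if x == 0)  # mean total, incorrect group
    return (m1 - m0) / pstdev(totals) * (p * q) ** 0.5

print(round(item_p_value(0), 3), round(point_biserial(0), 3))
```

A high p-value indicates an easy item; a positive point-biserial indicates that examinees who answer the item correctly also tend to score well on the test as a whole.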
Classification - In testing, classification is the process of categorizing examinees into two or more discrete groups, such as pass/fail or master/non-master. The classification of examinees into categories of competence or non-competence is the typical goal of a criterion-referenced testing program.
Classification error - This refers to the misclassification of examinees into the pass and fail categories when a passing score is applied. Classification errors can occur in both directions. That is, a truly competent examinee might fail the test, while an incompetent examinee might pass the test. A primary goal of well-designed exam programs is to minimize classification errors.
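As an illustrative sketch (all scores and the cut score are invented), the two directions of classification error can be counted from paired true-status and score data:

```python
# Hypothetical examinees: (truly_competent, test_score); passing score = 70.
examinees = [
    (True, 85), (True, 72), (True, 68),   # 68: competent examinee who fails
    (False, 74),                          # 74: non-competent examinee who passes
    (False, 60), (False, 55),
]
PASSING_SCORE = 70

# False negative: truly competent, but classified as failing.
false_negatives = sum(1 for ok, s in examinees if ok and s < PASSING_SCORE)
# False positive: not competent, but classified as passing.
false_positives = sum(1 for ok, s in examinees if not ok and s >= PASSING_SCORE)
print(false_negatives, false_positives)  # 1 1
```

In practice, of course, an examinee's true status is not observable; these rates are estimated through decision-consistency and standard-setting analyses rather than counted directly.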
Code of ethics - This refers to the canons or professional standards that certificate holders must agree to uphold and abide by. It is frequently an agreed-upon statement of principles and expected behavior and conduct of the certificate holders. Commonly referred to as Standards of Practice or Codes of Professional Conduct, the canons are subject to enforcement, and certificate holders found in violation of the code of ethics may be subject to disciplinary procedures. Agreement to the code of ethics is commonly a requirement for applying for, or being awarded, certification.
Cognitive level - The cognitive level of an item refers to the type of mental processing on the part of the examinee that the item is designed to target. While Bloom's Taxonomy specifies six cognitive levels (Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation), exam programs are more likely to target a subset of these, such as the first three, in their test blueprints.
Common-item nonequivalent groups design - This is a frequently used data collection design for equating purposes. In this design two or more test forms are assembled so that they include a subset of identical, or common, items. The use of common items across test forms provides information about differences in the abilities of nonequivalent examinee groups who may be administered the different test forms. This information is then used to statistically equate the forms, in order to make examinees' scores comparable.
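A heavily simplified mean-equating sketch (all numbers invented; operational equating uses more sophisticated methods, such as Tucker or chained equipercentile equating) shows how common-item performance separates the group-ability difference from the form-difficulty difference:

```python
from statistics import mean

# Hypothetical data: two nonequivalent groups took different forms that
# share 10 common items; each full form has 50 items (illustrative numbers).
g1_common = [7, 8, 6, 9, 7]       # group 1 (Form X) scores on the common items
g2_common = [6, 7, 5, 8, 6]       # group 2 (Form Y) scores on the same items
g1_total = [68, 75, 61, 80, 70]   # group 1 totals on Form X
g2_total = [62, 70, 55, 74, 66]   # group 2 totals on Form Y

# The common items estimate the ability gap between the two groups.
ability_gap = mean(g1_common) - mean(g2_common)  # in common-item points

# Project that gap onto the 50-item total-score scale (50 / 10 = 5) --
# a crude linear projection used here only for illustration.
scaled_gap = ability_gap * 5

# Whatever difference in total means the ability gap does not explain is
# attributed to form difficulty; Form Y scores are shifted onto the Form X scale.
difficulty_shift = (mean(g1_total) - mean(g2_total)) - scaled_gap

def equate_to_form_x(form_y_score):
    return form_y_score + difficulty_shift

print(round(equate_to_form_x(65), 1))
```

Here most of the 5.4-point gap in total-score means reflects group ability rather than form difficulty, so Form Y scores receive only a small upward adjustment.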
Computer-based testing - This refers to the mode of test administration in which items are presented to an examinee on a computer screen. Examinees typically indicate their responses by clicking with a mouse. This is contrasted with the more traditional method, paper-and-pencil testing.
Conjectural methods - This class of item-based approaches to standard setting includes the most commonly used method, the modified Angoff method.
Concurrent validity - This refers to a statistical method for estimating the validity of a test that provides evidence about the extent to which the test classifies examinees correctly. Concurrent validity estimates the relationship between an examinee's known status as either a master or a non-master and that examinee's classification as a master or a non-master as a result of the test.
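One common way to quantify the relationship between two dichotomous variables, such as known status and test classification, is the phi coefficient; the sketch below uses invented classification data:

```python
from math import sqrt

# Hypothetical examinees: (known_master, classified_as_master_by_test).
pairs = [
    (True, True), (True, True), (True, True), (True, True), (True, False),
    (False, False), (False, False), (False, False), (False, False), (False, True),
]

tp = sum(1 for known, test in pairs if known and test)          # masters who pass
fn = sum(1 for known, test in pairs if known and not test)      # masters who fail
fp = sum(1 for known, test in pairs if not known and test)      # non-masters who pass
tn = sum(1 for known, test in pairs if not known and not test)  # non-masters who fail

# Phi coefficient: correlation between the two dichotomous classifications.
phi = (tp * tn - fn * fp) / sqrt((tp + fn) * (fp + tn) * (tp + fp) * (fn + tn))
print(round(phi, 2))  # 0.6
```

A phi coefficient near 1.0 indicates that the test's pass/fail classifications closely agree with examinees' known mastery status; a value near zero indicates no relationship.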
Content validity - The content validity of a test is estimated through a logical process in which subject matter experts (SMEs) review the test items in terms of their match to the content areas specified in the test blueprint. For certification and licensure tests, this is typically the most important type of validity.
Contrasting groups method - This is an examinee-based approach to standard setting in which a panel of judges is used to identify a subset of examinees who would definitively be considered non-masters and another set of examinees who would definitively be considered masters. The exam is administered to these two groups of examinees, and their two resulting test score frequency distributions are then plotted. The passing score is typically set at or near the intersection between these two distributions.
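A minimal sketch of locating that intersection, assuming each group's score distribution is smoothed with a fitted normal curve (the scores are invented for illustration):

```python
from statistics import NormalDist

# Hypothetical test scores for the two judged groups.
non_masters = [52, 58, 61, 55, 63, 57, 60]
masters = [72, 78, 70, 81, 75, 74, 69]

# Smooth each group's frequency distribution with a fitted normal density.
nm_dist = NormalDist.from_samples(non_masters)
m_dist = NormalDist.from_samples(masters)

# Scan the region between the two group means, in 0.1-point steps, for the
# score where the densities cross -- the candidate passing score.
lo = round(nm_dist.mean * 10)
hi = round(m_dist.mean * 10)
cut = min(
    (x / 10 for x in range(lo, hi + 1)),
    key=lambda x: abs(nm_dist.pdf(x) - m_dist.pdf(x)),
)
print(cut)
```

The crossing point is where an examinee's score is equally likely under either distribution; in practice the panel may shift the passing score away from this point to weight one type of classification error more heavily than the other.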
Credentialing program - This refers to both voluntary and mandatory programs, including those at the registration, certification, and licensure levels of regulation.
Criterion-referenced tests - Criterion-referenced tests (CRTs) are designed to classify examinees into two or more distinct groups by comparing their performance to an established standard of competence. Certification and licensure exam programs are typically CRTs, as opposed to norm-referenced tests (NRTs).