Certification and licensure exams are critical junctures for evaluating a candidate’s competency for a credential. The exam is often the final hurdle on a journey of education, experience, and preparation. Strictly from a measurement perspective, we would ideally administer enough questions to allow a complete assessment of the domain(s) of interest. However, this is rarely practical: factors such as test fatigue, diminishing returns in reliability, and fairness necessitate a subset of questions that is representative of the domain. Because of this, each test question becomes precious real estate, carefully crafted to measure relevant content accurately, avoid redundancy, and effectively differentiate candidates who are qualified to practice safely and competently from those who are not. Item writing and exam assembly are two critical components of the exam development process that accomplish this objective.
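To illustrate why adding items yields diminishing returns, the Spearman-Brown prophecy formula projects how reliability changes as a test is lengthened. The sketch below is a minimal illustration only; the starting reliability of 0.90 and the 100-item form length are assumed values, not figures from any particular program.

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Projected reliability when a test is lengthened (or shortened)
    by the given factor, via the Spearman-Brown prophecy formula."""
    return (length_factor * reliability) / (1 + (length_factor - 1) * reliability)

# Assumed values for illustration only: a 100-item form with reliability 0.90.
base_reliability = 0.90

for factor in (1.0, 1.5, 2.0, 3.0):
    projected = spearman_brown(base_reliability, factor)
    print(f"{int(100 * factor):>3} items -> projected reliability {projected:.3f}")
# Output shows diminishing returns: 0.900, 0.931, 0.947, 0.964
```

Tripling the length of an already reliable form buys only a few hundredths of a point of reliability, while the burden on candidates grows substantially; this is the tradeoff that makes each item's contribution so valuable.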
Item Writing
Item writing begins with defining the constructs to be measured. Subject matter experts (SMEs), selected to be representative of the certificant population and trained on item-writing best practices, develop items that are clear, fair, and directly aligned with the test’s objectives. Items should be evaluated against criteria such as:
- Relevance: Each item must directly assess the knowledge or skill it is intended to measure, without being influenced by extraneous factors.
- Clarity: Questions are crafted to be unambiguous, avoiding misleading cues that could confuse test-takers.
- Fairness: Questions must be free of bias that could disadvantage any group of candidates.
Innovative Item Types
Forced-response and constructed-response item formats can be used to capture a wide range of candidate abilities and skills. From multiple-choice questions to complex simulations, each item type is chosen based on its ability to best assess specific attributes while maintaining technical quality and alignment with psychometric best practices. It is important to note that the construct to be measured should drive the choice of item type, rather than the reverse.
Process of Item Review
After initial item development, items undergo a rigorous review process involving multiple layers of scrutiny—from additional SMEs to test developers, psychometricians, and editors—each checking different aspects of the item such as content accuracy, psychometric alignment, clarity, bias, and cultural sensitivity. This comprehensive review ensures that every item meets our high standards of psychometric quality.
Form Assembly
The assembly of test forms is a strategic and systematic process that begins with a test blueprint. The blueprint dictates the distribution of content areas and ensures that each form of the test is balanced and reflective of the job’s requirements as identified through the job analysis process. To ensure that all test forms are comparable in their statistical properties, forms should be assembled, whenever possible, to have similar statistical distributions (or test characteristic curves). Remaining differences in form difficulty can then be addressed through equating to maintain the comparability of scores across different test administrations.
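As a minimal sketch of how form comparability might be checked, the example below computes test characteristic curves for two hypothetical forms under a 2PL IRT model and reports the largest gap in expected raw scores across the ability range. All item parameters here are invented for illustration; an operational program would use calibrated parameters and its own comparability criteria.

```python
import numpy as np

def tcc(theta, a, b):
    """Test characteristic curve under a 2PL model: expected raw score
    at each ability level theta, summed over item response functions."""
    # P(correct) for each item at each theta, from the logistic 2PL model.
    p = 1.0 / (1.0 + np.exp(-a[None, :] * (theta[:, None] - b[None, :])))
    return p.sum(axis=1)

theta = np.linspace(-3, 3, 61)

# Hypothetical item parameters (discrimination a, difficulty b) for two 50-item forms.
rng = np.random.default_rng(0)
a1, b1 = rng.uniform(0.8, 1.6, 50), rng.normal(0.0, 1.0, 50)
a2, b2 = rng.uniform(0.8, 1.6, 50), rng.normal(0.1, 1.0, 50)

gap = np.max(np.abs(tcc(theta, a1, b1) - tcc(theta, a2, b2)))
print(f"Largest expected-score difference between forms: {gap:.2f} points")
```

When the curves diverge, the forms differ in difficulty for at least some ability levels, which is exactly the situation equating is designed to correct for in reported scores.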
Pre-Testing and Quality Assurance
Pre-testing is the process of administering items on a form without including them in the calculation of candidates’ scores. A robust pre-testing policy is critical to establishing a healthy item bank. Pre-testing also makes it possible to create multiple distinct scored blocks or LOFT (linear-on-the-fly testing) pools, bolstering test security, and it allows items to be refined based on actual candidate responses, ensuring that they perform acceptably before they count toward scores.
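To make this concrete, the sketch below shows the kind of classical statistics often reviewed for a pretest item: the p-value (proportion correct) and the point-biserial correlation with the scored-section total. The simulated responses and the flagging thresholds are assumptions for illustration only, not PSI's operational rules.

```python
import numpy as np

def pretest_item_stats(item_scores, total_scores):
    """Classical statistics for one pretest item: p-value (proportion correct)
    and point-biserial correlation with the total score on the scored items."""
    p_value = item_scores.mean()
    point_biserial = np.corrcoef(item_scores, total_scores)[0, 1]
    return p_value, point_biserial

# Simulated responses for illustration: 500 candidates, one dichotomous pretest item.
rng = np.random.default_rng(1)
total_scores = rng.normal(70, 10, 500)                      # scored-section totals
prob_correct = 1 / (1 + np.exp(-(total_scores - 70) / 10))  # abler candidates do better
item_scores = rng.binomial(1, prob_correct)                 # 0/1 pretest responses

p_val, pt_bis = pretest_item_stats(item_scores, total_scores)
# Hypothetical flagging rule: review items that are very easy/hard or weakly discriminating.
flag = not (0.20 <= p_val <= 0.95) or pt_bis < 0.15
print(f"p-value={p_val:.2f}, point-biserial={pt_bis:.2f}, flag for review={flag}")
```

Items that are flagged can be revised and re-pretested, or retired, before they ever affect a candidate's score.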
Security in Test Development
Throughout the item and form development processes, strict security measures are in place to protect the integrity of the examination content. From secure item banking systems to controlled access protocols, all materials are protected by multiple safeguards against unauthorized access or breaches, preserving the fairness and validity of the testing process.
Conclusion
The processes of item writing and form assembly are integral to PSI’s commitment to excellence in testing. By adhering to best-practice standards, utilizing advanced technologies, and upholding the highest levels of security and fairness, we commit to creating items and forms that meet our clients’ high standards of quality. Through these practices, PSI ensures that each test is a reliable, valid, and fair measure of the construct it aims to assess.