1.1 Researchers may sometimes choose to have each of their assessors perform replicate evaluations in a forced-choice discrimination test. This may occur, for example, when the researcher wants to explore assessor-to-assessor differences in acuity, or when the researcher does not have access to a pool of assessors large enough for an unreplicated test to achieve the desired level of sensitivity. Replicate evaluations increase the sensitivity of the test and provide insight into differences in acuity among assessors, but they also affect the way in which the data from the test must be analyzed.

1.2 This guide covers the analysis of data obtained from replicated forced-choice discrimination tests that would typically be analyzed using a test statistic based on the binomial distribution (for example, the triangle test, the duo-trio test, m-AFC tests, and so forth). This guide does not cover forced-choice discrimination tests that include a response bias, such as the A-Not A test or the same-different test, which are typically analyzed using a test statistic based on the chi-square distribution. In a replicated discrimination test, each assessor evaluates several sets of samples, and the multiple responses obtained from each assessor may be correlated. The binomial model typically used to analyze data from the forced-choice discrimination tests covered in this guide assumes that all responses are independent; correlated responses from replicated evaluations violate this assumption and invalidate the standard binomial analysis. This guide presents models for analyzing replicated discrimination tests that recognize the correlations arising from replicated evaluations and yield statistically accurate results. The models account for assessor-to-assessor differences in discriminatory ability.
1.3 This guide also presents the special considerations that need to be addressed when planning and executing a replicated discrimination test.

1.4 This guide does not address cases in which an assessor's probability of success changes from one trial to the next, for example, as a result of learning or sensory fatigue.

1.5 This guide applies to situations in which all assessors perform the same number of replicated evaluations.

1.6 Because replicated discrimination tests often involve small numbers of assessors, it is essential that the researcher ensure that the assessors who participate in the test represent the population of individuals to whom the test results are intended to apply.
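To illustrate the kind of overdispersion-aware analysis described above, the following is a minimal sketch in Python of a moment-based beta-binomial test for a replicated forced-choice experiment. The function name, the moment estimators, and the adjusted z statistic are illustrative assumptions, not procedures prescribed by this guide; a published method or statistical package should be consulted for actual analyses.

```python
import math

def beta_binomial_test(x, r, p0):
    """Adjusted z-test for a replicated forced-choice discrimination test.

    x  : list of correct-response counts, one per assessor
    r  : number of replicate evaluations per assessor (equal for all assessors)
    p0 : chance probability of the method (e.g. 1/3 for the triangle test)

    Returns (mu_hat, gamma_hat, z). The overdispersion parameter gamma is
    0 when responses behave as independent binomial trials and approaches 1
    as assessor-to-assessor heterogeneity grows. (Illustrative sketch only.)
    """
    m = len(x)                      # number of assessors
    mu = sum(x) / (m * r)           # overall proportion of correct responses
    s = sum((xi / r - mu) ** 2 for xi in x)
    # Moment estimator of the overdispersion parameter, clipped to [0, 1]
    gamma = (r * s / ((m - 1) * mu * (1 - mu)) - 1) / (r - 1)
    gamma = min(max(gamma, 0.0), 1.0)
    # Variance of mu, inflated by the replication factor 1 + (r - 1) * gamma;
    # with gamma = 0 this reduces to the ordinary binomial variance.
    var_mu = mu * (1 - mu) * (1 + (r - 1) * gamma) / (m * r)
    z = (mu - p0) / math.sqrt(var_mu)
    return mu, gamma, z
```

For example, six assessors each performing four triangle-test replicates (p0 = 1/3) might be analyzed as `beta_binomial_test([4, 4, 3, 4, 1, 4], r=4, p0=1/3)`; the inflated variance makes the resulting z statistic smaller than a naive binomial analysis of the pooled 20/24 correct responses would give.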
Keywords: forced-choice discrimination test; replicated assessments; statistical analysis; beta-binomial model
The title and scope are in draft form and are under development within this ASTM Committee.