Standard Withdrawn, No replacement. Last Updated: Apr 24, 2017
ASTM E2171-02(2013)

Standard Practice for Rating-Scale Measures Relevant to the Electronic Health Record (Withdrawn 2017)


Significance and Use

4.1 The simplicity and practicality of Rasch's probabilistic scale-free measurement models have brought within reach universal metrics for educational and psychological tests, and for rating scale-based instruments in general. (The basic forms of these models are sketched, for orientation only, following 4.1.4.) There are at least four implications of applying Rasch's models to the health-related calibration of universal metrics for each of the variables relevant to the Electronic Health Record (EHR) that are typically measured using rating scale instruments.

4.1.1 First, establishing a single metric standard with a defined range and unit will arrest the burgeoning proliferation of new scale-dependent metrics.

4.1.2 Second, the communication of the information pertaining to patient status represented by these measures (physical, cognitive, and psychosocial health status, quality of life, satisfaction with services, etc.) will be simplified.

4.1.3 Third, common standards of data quality will be used to evaluate and improve instrument performance. The quality of the vast majority of test and survey data is currently unknown, and when quality is evaluated, it is assessed via many different methods that are often insufficient to the task, misapplied, misinterpreted, or even contradictory in their aims.

4.1.4 Fourth, currently unavailable economic benefits will accrue from the implementation of measurement methods based on quality-assessed data and widely accepted reference standard metrics. The potential magnitude of these benefits can be seen in an assessment of 12 different metrological improvement studies conducted by the National Science and Technology Council (Subcommittee on Research, 1996). The average return on investment associated with these 12 studies was 147 %. Is there any reason to suppose that similar instrument improvement efforts in the psychosocial sciences will result in markedly lower returns?
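For orientation only, and not as a requirement of this practice, the Rasch models referred to in 4.1 can be stated compactly. In conventional notation (not terms defined by this practice), β_n is the person measure, δ_i the item calibration, and τ_k the kth rating category threshold, all expressed in logits on a single additive scale. For a dichotomous item,

P(X_{ni} = 1 \mid \beta_n, \delta_i) = \frac{\exp(\beta_n - \delta_i)}{1 + \exp(\beta_n - \delta_i)},

and for a rating scale with ordered categories x = 0, 1, ..., m (the Andrich rating-scale member of the Rasch family, commonly used with instruments of the kind addressed here),

P(X_{ni} = x \mid \beta_n, \delta_i, \tau_1, \ldots, \tau_m) = \frac{\exp\left(\sum_{k=0}^{x} (\beta_n - \delta_i - \tau_k)\right)}{\sum_{j=0}^{m} \exp\left(\sum_{k=0}^{j} (\beta_n - \delta_i - \tau_k)\right)}, \qquad \tau_0 \equiv 0.

Because every parameter is located on the same additive (logit) scale, the resulting measures do not depend on the particular items and rating categories used, which is the sense in which they are scale-free.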

4.2 Until now, it has been assumed that Practice E1384 would necessarily have to stipulate fields for the EHR that would contain summary scores from commonly used functional assessment, health status, quality of life, and satisfaction instruments. This is because standards for rating scale instruments to date have been entirely content-based. Those who have sought “gold” or criterion standards that would command universal respect and relevance have been stymied by the impossibility of identifying content (survey questions and rating categories) capable of satisfying all users' needs. Communication of patient statistics between managers and clinicians, or payors and providers, may require one kind of information; between providers and referral sources, other kinds; between providers and accreditors, yet another; among clinicians themselves, still another; and even more kinds of information may be required for research applications.

4.2.1 For instance, payors may want to know outcome information that tells them what percentage of patients discharged can function independently at home. A hospital manager, referral source, or accreditor might want to know more detail, such as percentages of patients discharged who can dress, bathe, walk, and eat independently. Clinicians will want to know still more detail about amounts of independence, such as whether there are safety issues, needs for assistive devices, or specific areas in which functionality could be improved. Researchers may seek even more detail, as they evaluate differences in outcomes across treatment programs, diagnostic groups, facilities, levels of care, etc.

4.2.1.1 Members of each of these groups have, at some time, felt that their particular information needs have not been met by the tools designed and developed by members of another group. Despite the fact that the information provided by these different tools appears in many different forms and at different levels of detail, to the extent that the tools can be shown to measure the same thing, they can do so in the same metric. This is the primary result of the introduction of Rasch's probabilistic scale-free measurement models. The different purposes guiding the design of the instruments will still affect the two fundamental statistics associated with every measure: the error and the model fit. More general, and also less well-designed, instruments will measure with more error than those that make more detailed and consistent distinctions. Data consistency is the key to scale-free measurement.
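As a purely illustrative sketch of the distinction drawn in 4.2.1.1 between a summed score and a measure accompanied by its error, the following Python function (the name and interface are the author's assumptions, not part of this practice) estimates a person measure in logits by maximum likelihood from a set of fixed dichotomous item calibrations and returns its standard error. An instrument with fewer, or less well-targeted, items yields the same kind of measure but with a larger standard error, consistent with the point that instrument design affects error rather than the metric.

import math

def rasch_person_measure(responses, item_calibrations, tol=1e-6, max_iter=50):
    """Illustrative maximum-likelihood person measure (in logits) for dichotomous
    Rasch responses scored 0/1, given fixed item calibrations in logits.
    Returns (measure, standard_error)."""
    raw_score = sum(responses)
    if raw_score == 0 or raw_score == len(responses):
        raise ValueError("Extreme raw score: a finite maximum-likelihood measure does not exist.")
    theta = 0.0
    for _ in range(max_iter):
        expected = [1.0 / (1.0 + math.exp(-(theta - d))) for d in item_calibrations]
        information = sum(p * (1.0 - p) for p in expected)  # Fisher information at theta
        step = (raw_score - sum(expected)) / information    # Newton-Raphson update
        theta += step
        if abs(step) < tol:
            break
    expected = [1.0 / (1.0 + math.exp(-(theta - d))) for d in item_calibrations]
    standard_error = 1.0 / math.sqrt(sum(p * (1.0 - p) for p in expected))
    return theta, standard_error

# Example with hypothetical calibrations: a 3-out-of-5 response string.
# measure, error = rasch_person_measure([1, 1, 0, 1, 0], [-1.2, -0.4, 0.3, 0.8, 1.5])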

4.3 The remainder of this document (1) identifies, in Section 5, the fields in the current Practice E1384 targeted for change from a scale-dependent to a scale-free measurement orientation; (2) lists referenced ASTM documents; (3) defines scale-free measurement terms, often contrasting them with their scale-dependent counterparts; (4) addresses the significance and use of scale-free measures in the context of the EHR; (5) lists, in Annex A2, scientific publications documenting relevant instrument calibrations; (6) briefly presents some basic operational considerations; (7) lists minimum and comprehensive arrays of EHR database fields; and (8) lists, in Annex A3, the references made in presentation of the measurement theory, estimation methods, etc.

4.4 Publications of calibration studies referencing this practice and the associated standard practice should require:

4.4.1 The use of measures, not scores, in all capture of data from the EHR for statistical comparisons;

4.4.2 The reporting of both the traditional reliability statistics (Cronbach's alpha or the KR-20) and the additive, linear separation statistics (Wright & Masters, 1982), along with their error and variation components, for both the measures and the calibrations (a computational sketch of the separation statistics follows 4.4.13);

4.4.3 A qualitative elaboration of the variable defined by the order of the survey questions or test items on the measurement continuum, preferably in association with a figure displaying the variable;

4.4.4 Reporting of means and standard deviations for each of the three essential measurement statistics, the measure, the error, and the model fit;

4.4.5 Statement of the full text of at least a significant sample of the questions included on the instrument;

4.4.6 Specification of the mathematical model employed, with a justification for its use;

4.4.7 Specification of the error estimation and model fit estimation algorithms employed, with mathematical details and justification provided when they differ from those routinely used;

4.4.8 Evaluation of overall model fit, elaborated in a report on the details of one or more of the least and most consistent response patterns observed;

4.4.9 Graphical comparison of at least two calibrations of new instruments from different samples of the same population to establish the invariance of the item calibration order across samples (a plotting sketch for 4.4.9 – 4.4.11 follows 4.4.13);

4.4.10 Graphical comparison of measures produced by at least two subsets of items on new instruments to establish the invariance of the person measure order across scales (collections of items);

4.4.11 Graphical comparison of new instrument calibrations with the calibrations produced by other instruments intended to measure the same variable in the same population, to establish the potential for sample-free equating of the instruments and establishment of reference standards;

4.4.12 At least a usable prototype of the instrument employed, with the worksheet laid out to produce informative quantitative measures (not summed scores) as soon as it is filled out; and

4.4.13 Graphical presentation of the treatment and control groups' measurement distributions, for the purpose of facilitating a substantive interpretation of the significance of observed differences.
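In support of 4.4.2 and 4.4.4, the following Python sketch (illustrative only; the function name and the choice of the population variance are the author's assumptions, not requirements of this practice) computes separation and reliability coefficients in the sense of Wright & Masters (1982) from a set of linear measures and their standard errors.

import math
import statistics

def separation_and_reliability(measures, standard_errors):
    """Illustrative person (or item) separation and reliability (Wright & Masters, 1982),
    computed from linear measures (logits) and their standard errors."""
    observed_variance = statistics.pvariance(measures)
    mean_square_error = sum(e * e for e in standard_errors) / len(standard_errors)
    true_variance = max(observed_variance - mean_square_error, 0.0)  # error-adjusted variance
    root_mean_square_error = math.sqrt(mean_square_error)
    separation = math.sqrt(true_variance) / root_mean_square_error   # spread in average-error units
    reliability = true_variance / observed_variance if observed_variance else 0.0
    return separation, reliability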
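Similarly, for the graphical comparisons called for in 4.4.9 – 4.4.11, a minimal plotting sketch is given below; it assumes matplotlib is available, and the function name and labels are hypothetical.

import matplotlib.pyplot as plt

def plot_calibration_comparison(calibrations_a, calibrations_b,
                                label_a="Sample A", label_b="Sample B"):
    """Illustrative cross-plot of the same items' calibrations (logits) obtained from two
    samples or instruments; points along the identity line support invariance."""
    low = min(min(calibrations_a), min(calibrations_b))
    high = max(max(calibrations_a), max(calibrations_b))
    plt.scatter(calibrations_a, calibrations_b)
    plt.plot([low, high], [low, high], linestyle="--")  # identity line
    plt.xlabel("Item calibration, " + label_a + " (logits)")
    plt.ylabel("Item calibration, " + label_b + " (logits)")
    plt.title("Item calibration invariance check")
    plt.show()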

Scope

1.1 This standard addresses the identification of data elements from the EHR definitions in Practice E1384 that have ordinal scale value sets and that can be further defined to have scale-free measurement properties. It is applicable to data recorded for the Electronic Health Record and its paper counterparts. It is also applicable to abstracted data from the patient record that originates from these same data elements. It is applicable to identifying the location within the EHR where the observed measurements shall be stored and the meaning of the stored data. It does not address either the uses or the interpretations of the stored measurements.
