
November/December 2010
Feature

10 Best Practices of Good Laboratories

Lessons from a Laboratory Career

Beyond the simple chemistry lab directions of “Don’t taste the chemicals; don’t sniff the chemicals; don’t look too closely at the chemicals — and wear your safety glasses!” are best practices that apply to test organizations across all scientific and engineering disciplines.1

There are 10 practices that laboratories, test organizations and individual analysts should keep in mind when performing daily analytical tasks. Many professionals may see these 10 practices as no-brainers. That’s a good thing. However, all of us who are willing to tell the truth will admit there have been times when we might have slipped a bit on one or two. These “slips” can affect test result validity.

The importance of accurate results cannot be overstated. Test results change people’s lives. This is eminently true in the medical and forensic fields. It is also true for those of us who test products, sometimes mundane products. Getting the right answer matters. To good laboratories, these best practices become routine procedures; to good analysts they become habits. The goal is to produce quality results. [Read more about ASTM International programs that contribute to best laboratory practices.]

The overarching rule for all these practices is: If you didn’t document it — you didn’t do it. Documentation is critical. If documentation doesn’t exist, create it; otherwise … re-read the rule.

For laboratories and test organizations that are considering applying for accreditation, following these 10 practices will be a significant step toward achieving that goal.

1. Establish and Follow Procedures

Develop basic procedures, for example, to receive, identify, assign, queue, test, report on and dispose of samples. For some organizations, a comprehensive quality system with an electronic laboratory information management system is appropriate. However, what is necessary can be a simple, orderly, faithfully followed process.

Samples should not languish unassigned in a receiving area; they should be logged in, given a unique identifier and assigned to an analyst or analytic team within one to two working days of arrival at the laboratory. Although some LIMS developers will rightly claim that the unique identifier need not contain specific sample information, information such as a customer code or arrival date is often useful in sample handling. Depending on the laboratory, assignment to a particular analyst or team may be based on sample type, workload or other criteria. Once assigned, samples should be moved to a queue zone for the team or analyst so that analysts can plan their work. While first in, first out is often the rule, holding samples a few days to allow for “batching” may be in order for some sample types.
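The intake process above (log in, assign a unique identifier, queue first in, first out) can be sketched in code. This is a minimal illustration, not a prescribed LIMS design; the identifier scheme, a customer code plus arrival date plus sequence number, is one plausible choice that reflects the "often useful" information noted above:

```python
from collections import deque
from dataclasses import dataclass
from datetime import date

@dataclass
class Sample:
    customer_code: str
    arrival: date
    sample_id: str = ""

class IntakeLog:
    """Minimal intake log: assigns a unique identifier on receipt and
    queues each sample, first in, first out, for an analyst or team."""

    def __init__(self):
        self._seq = 0
        self._queues = {}  # team name -> deque of queued samples

    def log_in(self, sample, team):
        self._seq += 1
        # The customer code and arrival date aid handling; the sequence
        # number alone is what guarantees uniqueness.
        sample.sample_id = (
            f"{sample.customer_code}-{sample.arrival:%Y%m%d}-{self._seq:04d}"
        )
        self._queues.setdefault(team, deque()).append(sample)
        return sample.sample_id

    def next_for(self, team):
        """Pop the oldest queued sample for a team (first in, first out)."""
        return self._queues[team].popleft()
```

A laboratory batching certain sample types would simply defer calls to `next_for` for a few days, as the text suggests.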

Post-analysis sample disposition should also follow an orderly process. Inventory records should include details that account for environmental and safety rules. Where legal action may ensue, chains of custody must be kept valid, and samples may have to be retained or returned to the submitter. Legal actions can be very lengthy. Therefore, when a laboratory retains samples, orderly storage is needed.

2. Maintain Your Proficiency

Analysts must have the education, training and experience, acquired through formal education or on-the-job training, sufficient to perform assigned analytic duties. Education and apprentice training provide the foundation for and give a snapshot of an analyst’s capability, but they do not guarantee a sustained capability. This best practice assists analysts in maintaining and documenting capability.

Periodically, analysts should participate in proficiency testing, which shows that the analyst maintains capability over time. That gives customers and stakeholders a greater level of assurance that the laboratory is maintaining its ability to perform a test method in a manner that produces valid results. (For accredited laboratories, periodic proficiency testing is required.)

If a laboratory decides to expand its capabilities, staff analysts will need training on the new tests. Continuing education affords an analyst the opportunity to expand capability in a current or a new technology area. When purchasing a new instrument, laboratories should give strong consideration to including the training package offered by the manufacturer. The laboratory should plan for proficiency tests in the new area.

3. Validate Methods

Method validation needs and techniques will change as the group using a particular method changes. Laboratories that work in fields with methods in widespread use, e.g., environmental and clinical laboratories, have more established techniques than fields with a smaller community of interest. Research laboratories that develop new test methods may offer their work for others to reproduce, and thus, validate.

For testing and calibration laboratories, the goal in selecting a test method is to choose one that produces an accurate result within an acceptable uncertainty that can be reproduced by multiple analysts. Test methods originate from various sources: standards development organizations, equipment and instrument manufacturers, universities, consortia and other organizations and individuals. Individual laboratories will develop new methods or modify existing ones to fit specific test needs they encounter. With the possible exception of methods from SDOs that use a rigorous consensus development process, the validity of a method from any of these sources cannot be assumed.

It is not necessary that every laboratory use the same method to test the same object. However, every laboratory must be able to defend its chosen method as capable of giving accurate results. That defense is achieved through method validation: multiple organizations and/or analysts run the method using the same test object, one with a predictable result. If one exists, a traceable standard reference material from the National Institute of Standards and Technology or an established test artifact with a known result should be used. Successful validation requires that the results of multiple runs all fall within an acceptable uncertainty value, that is, a statistically acceptable margin of error.

For methods where multiple laboratories are few, or nonexistent, validation can be problematic. A potential validation process would be: 1) to have different analysts at the same laboratory run the method using the same SRM or test artifact and/or 2) to create relevant control charts where test results from one or more analysts are tracked over time. On the rare occasion when only one proficient analyst is available, that analyst should perform multiple independent runs of the protocol over time using the best SRM available. Control charts can be used to measure the reproducibility, accuracy and uncertainty of the method. Someone other than the analyst should review the data and control charts to assert that the method has been validated with a stated uncertainty range.
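The validation criterion described above, multiple runs all within an acceptable uncertainty of a known value, can be sketched in code. A minimal sketch, assuming a simple fixed tolerance stands in for the acceptable uncertainty (a real protocol would derive that limit statistically), with function names that are illustrative only:

```python
from statistics import mean

def validated(results, certified_value, tolerance):
    """Accept a method as validated only if every independent run on the
    same SRM or test artifact falls within the acceptable uncertainty
    (here a hypothetical fixed `tolerance`) of the certified value."""
    return all(abs(r - certified_value) <= tolerance for r in results)

def summary(results):
    """Figures a reviewer, someone other than the analyst, would examine
    before asserting the method is validated with a stated uncertainty."""
    return {"mean": mean(results), "spread": max(results) - min(results)}
```

In the single-analyst case the article describes, `results` would be multiple independent runs by that analyst over time rather than runs by different laboratories.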

A research laboratory is often the only one running a test, with the added challenge of a method that has no history of use. Validation begins with clearly communicating the procedures used to develop the method, typically through publication in a scientific journal. Publication enables others: 1) to assess the method for systematic errors; and 2) to reproduce the work and obtain the same results as the initial work. If the sample tested by both the initial researcher and those reproducing the work is an SRM, the new method has taken a significant step toward validation.

4. Use Traceable Standard Reference Materials

Reference material uses include validating methods that help ensure accurate data from individual test runs, calibrating instruments and assessing analyst proficiency. In the United States, a NIST standard reference material is considered the “gold standard” for that material. NIST has more than a thousand different SRMs covering diverse technologies.2 The results of analyses backed by NIST-traceable SRMs are widely accepted as valid.

An SRM must be fit for its intended use, for example:

  • In an analytical chemical laboratory for a quantitative analysis, a series of reference materials with known elemental contents encompassing the analysis range of interest; or
  • In a medical laboratory, a known virus or bacterium for qualitative analysis, or a serum with a known glucose content for quantitative diabetes testing.

While substantial in number, NIST SRMs do not cover all laboratory analysis needs. Standards from other organizations are often valuable. Surplus test items may be retained and used as reference materials, particularly by laboratories that perform repetitive testing of an item and have unusual analytical requirements, for example, elemental content. A typical benefit of retained items for repetitive testing is that they almost always have the same matrix. If the test is nondestructive, for example, X-ray fluorescence, a retained item has an almost unlimited life span.

In all cases, maintain high quality reference materials to maximize their usable life, and when you find a good one, don’t let it out of your sight.

Closely related is the purity of other chemicals used in testing. Solid chemicals used to create calibration curves for determining elemental content and acids used to dissolve the solids, etc., with rare exception, all must be reagent grade or better.3 Test methods should identify the lowest grade of the chemicals required for the method. Results are only as good as the weakest component in the system.
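Calibration curves of the kind mentioned above, built from a series of reference materials with known contents spanning the analysis range, are commonly fit by ordinary least squares. A minimal sketch, assuming a linear instrument response; the function names are illustrative, not from any particular instrument's software:

```python
def fit_line(concentrations, responses):
    """Ordinary least-squares line through calibration points
    (known concentration -> instrument response).
    Returns (slope, intercept)."""
    n = len(concentrations)
    mx = sum(concentrations) / n
    my = sum(responses) / n
    slope = (
        sum((x - mx) * (y - my) for x, y in zip(concentrations, responses))
        / sum((x - mx) ** 2 for x in concentrations)
    )
    return slope, my - slope * mx

def concentration(response, slope, intercept):
    """Invert the calibration line to read an unknown's concentration."""
    return (response - intercept) / slope
```

The quality of this curve depends directly on the purity of the standards used to build it, which is the point of the paragraph above: results are only as good as the weakest component in the system.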

5. Run in Duplicate

The purpose of duplicate (sometimes triplicate) testing is to add to the confidence that the test run has produced good data for the test object. Replicate data that is in agreement is a good measure of method reproducibility but does not prove data accuracy (validity). If the same test run includes a reference material, then the confidence in the validity of the data for the test object is significantly raised. If the object’s replicate test data is not in agreement, one or more of the data points may be invalid; the object should be retested and/or the procedure should be reviewed.

Take care not to run out of sample. There are extenuating circumstances when replicate testing is not possible: an insufficient amount of the test object, as from a surface wipe or fragmentary samples in some forensic environments; the high cost of the object when the test is destructive, for example, precious metals; or an overwhelming number of different but related objects when time is of the essence, for example, samples from different places to assess “how clean is clean” when cleaning a contaminated building. The use of reference materials as controls (see below) becomes paramount when only a single test object sample is included in a test run. When testing occurs in sequence with little human intervention, such as with automated laboratory analyzers, controls should be placed at or very near the beginning and end of a test run, at a minimum. When a test run has numerous samples, additional controls should be spaced throughout the sample set to give confidence that the data from each test object is valid.
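A run-acceptance check along the lines described above, replicate agreement plus an in-limits control sample, can be sketched as follows. The relative tolerance and the control limits are hypothetical inputs that each laboratory would set for its own method, not values from the article:

```python
from statistics import mean

def run_acceptable(replicates, rel_tolerance,
                   control_result=None, control_limits=None):
    """Accept a test run if (1) replicate results agree within a
    relative tolerance of their mean, and (2) when a control sample
    was included in the run, its result falls inside the limits."""
    m = mean(replicates)
    # Disagreement among replicates suggests one or more data points
    # may be invalid; the object should be retested.
    if any(abs(r - m) > rel_tolerance * m for r in replicates):
        return False
    if control_result is not None:
        lo, hi = control_limits
        if not (lo <= control_result <= hi):
            return False
    return True
```

Note that, as the text stresses, agreement among replicates alone demonstrates reproducibility, not accuracy; only the control or reference material speaks to validity.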

6. Keep Original Data

Whether data is first recorded in electronic/digital form, in a notebook or on the closest piece of scrap paper, keep it. In modern laboratories, handwritten original data is no longer the norm, but if data is first recorded by hand, that document becomes critical to maintain. No matter how often the data is transcribed to electronic spreadsheets, databases or any other media, the record made at the initial point at which the data was captured must become part of the documentation.

Electronic data acquisition is the norm in a laboratory today, particularly with automated analyzers used in laboratories for all scientific disciplines. The advantage of digital data is that great quantities of it can be stored on relatively small devices, for example, CDs or USB flash drives. However, consideration must be given to recovering data from outdated electronic media. Laboratory procedures should address how long test results will be maintained, which depends on the organization’s business, customer needs and the potential for legal actions. For this time period, laboratories should be able to read original data, either by maintaining equipment or by transferring data to new media. (Addendum to the golden rule: if you can’t access a document, you didn’t document it sufficiently.)

7. Assign Instruments and Equipment to Analysts

Scientific instruments are temperamental tools; they need individual attention. The more sophisticated the instruments are, the more temperamental they can become, particularly if labeled research grade. When an instrument is used mainly by one staff member, usage time, calibration, maintenance and other issues are minimized. However, a good practice is to formally assign that analyst the responsibility for keeping the instrument operational and for alerting management to malfunctions. When an instrument is used by multiple staff members, assign these responsibilities to a primary user, who should schedule usage time for other staff members, provide training and mentoring to new users, ensure that any instrument control charts are current and ensure that calibration and maintenance occur on schedule.

The primary user should also have on hand a reasonable store of basic repair parts (lamps, ferrules, tubing, etc.) and basic consumables, such as carrier gases. This preparation will reduce instrument downtime. If an instrument is out of order, the primary user should determine with laboratory management if there are sufficient funds to call for a repair, and when funds are available, see that the repair is completed. The primary user should also alert other users about the problem, perhaps with a simple, conspicuous “out of service” tag on the apparatus.

8. Calibrate Instruments

Instrument calibration, for this discussion, is confirming that an instrument is working correctly before performing a test method, whether a simple balance or a sophisticated analyzer. (For accredited laboratories, periodic instrument calibration by certified outside organizations is often required.)

An incorrect result with an SRM may indicate a problem with instrument performance. An experienced analyst will often know where the problem lies by “listening” to the instrument. Often a simple recalibration, such as running a procedure to reset the instrument’s electronic system, will cure the situation. If the erratic behavior cannot be cured by simple recalibration, the assigned analyst should resolve the problem by calling in a professional calibration service or an instrument technician.

9. Use Control Charts

Control charts are excellent tools for several uses, including those already noted. A control chart enables a laboratory to track the results of a reference material and/or control sample at the end of each test run. It gives the laboratory a snapshot of test run quality and a picture of the quality of the laboratory’s results for that particular test over time.

A Shewhart control chart plots individual test results for a reference material or control sample over time.4 While Shewhart set a 3-sigma deviation from the mean as acceptable control limits, control limits can be set on a case-by-case basis. Customers of the test method should have input into setting control limits.

Control charts typically are used to track test performance for the organization as a whole, but they may be set up for each instrument, analyst, variable or combination thereof. Control charts give an immediate, visual and measurable indication of whether each test run has been performed correctly. When a control sample yields a result outside the control limits, the test run accuracy is in question. Typically, laboratories will rerun the test. If the retest yields a result for the control sample that is within the control limits, the laboratory will continue with normal operations and report the results from the successful run. Both the original and rerun control results should be recorded on the control chart for future monitoring. Should control samples continue to yield results outside control limits, or demonstrate a drift5 or other erratic behavior, the laboratory should not conduct this test until the problem is found and fixed. Control charts are valuable in that they can prevent questionable results from being reported to customers. However, should previously reported results come into question, customers receiving those reports should be notified. (For accredited laboratories, notification is a requirement.)
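The Shewhart-style limits described above can be computed directly from a history of control-sample results. A minimal sketch; the classic choice is k = 3 standard deviations, though, as noted above, limits can be set case by case with customer input (function names are illustrative, not from any particular charting package):

```python
from statistics import mean, stdev

def control_limits(history, k=3.0):
    """Shewhart-style control limits from historical control-sample
    results: mean plus/minus k sample standard deviations."""
    m = mean(history)
    s = stdev(history)
    return m - k * s, m + k * s

def in_control(result, limits):
    """True if a control result falls inside the limits; an outside
    result puts the whole test run's accuracy in question."""
    lo, hi = limits
    return lo <= result <= hi
```

Plotting each new control result against fixed limits, rather than recomputing limits every run, is what lets the chart reveal drift over time.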

10. Document Everything and Maintain Good Records

To return to the golden rule, “If you didn’t document it, you didn’t do it”: organized records benefit a test organization. An ordered records system can be a prima facie indication to customers, auditors, government and legal authorities, and others that the organization follows its procedures. Records provide a fount of information for training new staff members to perform the laboratory’s stable of methods. When customers request copies of their test results, they are readily available, which makes for satisfied and repeat customers. More important, when test results have to be defended, these documents are critical.

Laboratories that pursue accreditation will need documents on method validation, proficiency testing, instrument calibration and most of the practices covered here. If the laboratory performs testing for regulated domains, records will be required to prove competency for that domain. In legal actions, for example, when the test object causes harm, records can prove that the test organization followed the applicable test procedures and thus reduce or eliminate liability. When documents are not available, questions can be raised about test result validity and, potentially, about organizational competence or negligence. The test organization then becomes vulnerable to loss of customers, fines, penalties and other consequences for individual analysts, managers and owners.

The need for documentation occurs at different points while conducting a test, so good laboratory practice places continuing responsibility on the individual analyst to initiate and maintain documents. The person who performs a function is responsible for documenting it and storing the record in its proper place. It is a reminder that quality is the responsibility of each analyst and must be incorporated into every aspect of an analysis, including the paperwork. As the adage goes, “Quality is built in, not inspected in.”

References
1. The author would like to acknowledge that some of this work is an outgrowth of a study on conformity assessment for the U.S. Department of Homeland Security performed under funding by the DHS Office of Standards, Division of Test and Evaluation, and the Standards, Science and Technology Directorate. Additionally, the author is indebted to Eric Sylwester, Ph.D., of the Homeland Security Studies and Analysis Institute, and Robert Tuohy III for their thoughts and suggestions that were invaluable in enhancing these best practices.
2. For more information on NIST SRMs, see www.nist.gov/srm.
3. The term reagent grade is a term of art for chemicals of acceptable purity to be used in the most accurate level of chemical analysis. Acceptable purities are often set in published standards of a standards development organization, e.g., ASTM International, AOAC International.
4. For more information about the control charts developed by Walter A. Shewhart, see “Shewhart Control Chart,” at www.itl.nist.gov/div898/handbook/mpc/section2/mpc221.htm and “Statistical Quality Control Using Control Charts,” at www.gigawiz.com/qc.html#VarQC.
5. A drift is a steady fall or rise in a control sample test result that, if continued, will eventually cause the result to fall outside the control limits.

Robert Zimmerman is a fellow at the Homeland Security Studies and Analysis Institute, Arlington, Va., an operating unit of Analytical Services Inc. He holds a Master of Science degree in chemistry. During his career with U.S. Customs and Border Protection, he held positions as director of the New Orleans Field Laboratory and the CBP Research Laboratory in Springfield, Va. Both laboratories are accredited to the International Organization for Standardization’s ISO 17025, General Requirements for the Competence of Testing and Calibration Laboratories.