What Does It Mean to Be Green?
The September/October green practices and technologies issue of SN was an interesting one, congratulations! The Florida Institute of Technology holds an annual International Sustainability Forum jointly with the Budapest University of Technology and Economics; the meeting alternates between Budapest, Hungary, and Melbourne, Fla.
The “Meetings Go Green” article raises the key overarching question of what it means to be green. Does green mean simply “looking good,” or does it mean the much more difficult life cycle analysis-based assessment? Reduce, reuse and recycle are all important. The reuse of badges is a “no-brainer,” though the cited cost of 75 cents per badge seems a little high; if the badges are recycled, that is acceptable. To say that food service for 2,200 people keeps 1,890 lbs. of plastics out of a landfill ignores the fact that thermoplastics are recyclable; one needs to create a market for such plastics. What are the manufacturing, transportation, storage, cleaning system, water, detergent, energy, labor and end-of-life disposal costs for the replacement glass and china over their life cycle? I have not seen that calculated. The answer might be surprising.
Last year, in one of the many meeting magazines I receive, an author argued on “green” grounds for the use of crocks of butter, cream cheese and jelly rather than individual serving packages. Rather than providing for actual use, one has large serving containers that generate substantial waste, because the unused portion cannot and should not be carried forward to the next client. As above, the whole issue of the associated costs of manufacture, transportation, storage, cleaning, labor, separate containers for bulk food, disposal of waste, etc., is ignored. Also ignored are the health risks.
What carbon offsets are legitimate? Who is certifying those programs? Do local foods and organics survive a detailed life cycle analysis assessment for all inputs and outputs?
The absence of printed programs or handouts is hardly green if the meeting objectives are not fully achieved. Providing an electronic agenda is hardly green if I have to print it out to use it effectively. It just transfers costs.
True sustainability involves environmental, economic and political aspects. All must be in place. In the absence of life cycle assessment we actually do not know if sustainability criteria are met.
The DataPoints column by Thomas Murphy and Peter Fortini in the September/October issue of SN provided some interesting information, but it omitted an important factor in reporting test results: reference material uncertainty. My instrument has a computer that can be programmed to print out any number of digits (within reason). So let us assume that we program it to print 6% to six decimal places: 6.000000. The precision of the instrument is 0.001%, so we are not justified in reporting any finer than that; now we are down to 6.000. The reference material used to calibrate the instrument has an uncertainty of 0.02%, so the product can be certified to no better than 0.02%. For internal use, I can report to 0.001% if the numbers will be used to measure variation within the product rather than absolute results. I know statisticians will argue about significant figures, but my results are no better than the reference material used to calibrate the instrument.
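The arithmetic in the letter above can be sketched in a few lines of Python. The `report` helper is hypothetical, not anything from the column; it simply formats a reading to the decimal resolution implied by whichever uncertainty dominates, the reference-standard uncertainty for certified results or the instrument precision for internal comparisons:

```python
import math

def report(value: float, uncertainty: float) -> str:
    """Format value no finer than the decimal place of the leading
    digit of its uncertainty (a hypothetical illustrative helper)."""
    # e.g. uncertainty 0.02 -> 2 decimal places; 0.001 -> 3 places
    places = -math.floor(math.log10(uncertainty))
    return f"{value:.{places}f}"

reading = 6.0  # instrument printout; the computer can print many more digits

internal = report(reading, 0.001)   # limited by instrument precision
certified = report(reading, 0.02)   # limited by reference uncertainty
```

With the figures in the letter, the same 6% reading comes out as 6.000 for internal variation studies but only 6.00 on a certificate, which is the writer's point.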
Mr. Creasy brings up the good point that instrument error may be just one of the contributions to the variability of the test result. Other factors might be test specimen preparation and sub-sampling of the submitted material. Uncertainty includes all of the factors involved in the test. You can rest assured that test method precision and uncertainty will be future topics for the DataPoints column.
In general, however, we cannot agree with a policy of rounding results based on overall uncertainty. The purpose of the significant digits used to record or report a measurement or test result is not just to approximate the quantity being measured but also to capture the result itself, including its systematic and random errors. The choice of repeatability as the sigma for the 0.05 to 0.5 sigma rule recommended for data reporting was deliberate, because repeatability is close to the bottom of the error hierarchy.
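As a rough illustration of how a 0.05 to 0.5 sigma reporting rule could be applied in practice (the function name and the restriction to power-of-ten increments are our assumptions, not part of the column), a rounding increment can be chosen from the repeatability standard deviation:

```python
import math

def reporting_increment(sigma_r: float) -> float:
    """Largest power-of-ten increment d with d <= 0.5 * sigma_r.
    Because the 0.05-0.5 sigma band spans a full decade, such a d
    also satisfies d > 0.05 * sigma_r, so it always lands in the band."""
    return 10.0 ** math.floor(math.log10(0.5 * sigma_r))

# With a repeatability sigma of 1.0, results would be recorded to 0.1;
# with a sigma of 0.12, to 0.01 -- fine enough to show real variation,
# coarse enough not to record pure noise.
```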