Be ready to defend your factors. As an experimentalist, you will at times work with a client, or with a theoretician, who fails to understand what cannot be measured, or who may need guidance to tolerate uncertainty in measurements.
2.4.5 Similarity and Dimensional Analysis
Nature does not know how big an inch is (unless you experiment on inchworms), nor how big a centimeter is. The laws of physics are independent of the length, mass, and time scales familiar to us. For this reason, similarity analysis and the Buckingham Pi (Π) method are tools to cast the physics into nondimensional parameters that are independent of scale.
Perhaps the most familiar nondimensional parameter is the Mach number, the ratio of speed to the speed of sound. A fighter flying at Mach 2 is traveling at twice the speed of sound.
In thermo‐fluid physics, we use various nondimensional parameters, including the Reynolds number (Re), Strouhal number (St), Froude number (Fr), Prandtl number (Pr), and Mach number (Ma). We will see these again in later chapters. The Reynolds number is the ratio of inertial to viscous forces; it combines a characteristic size and speed with the fluid's density and viscosity. When NASA tests a model plane in a wind tunnel, it matches the Re of the model to that of the full‐size plane. Results for the model and the full‐size plane correspond even better when the Mach number is matched simultaneously, and better still as more parameters are matched.
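The Reynolds-matching idea above can be sketched numerically. The values below are illustrative assumptions (sea-level air, a 1/10-scale model), not data from the text:

```python
# Sketch: matching Reynolds number between a 1/10-scale wind-tunnel model
# and a full-size aircraft. All numeric values are illustrative assumptions.

def reynolds(rho, speed, length, mu):
    """Reynolds number Re = rho * V * L / mu (inertial/viscous force ratio)."""
    return rho * speed * length / mu

# Full-scale aircraft (assumed values)
rho_air = 1.225      # kg/m^3, sea-level air density
mu_air = 1.81e-5     # Pa*s, dynamic viscosity of air
L_full = 10.0        # m, characteristic length (e.g., wing chord)
V_full = 100.0       # m/s, flight speed

re_full = reynolds(rho_air, V_full, L_full, mu_air)

# A 1/10-scale model in the same air must fly 10x faster to match Re:
L_model = L_full / 10
V_model = re_full * mu_air / (rho_air * L_model)

print(f"Re (full scale) = {re_full:.3e}")
print(f"Model speed needed to match Re: {V_model:.0f} m/s")  # 1000 m/s
```

The result illustrates why Re-matching in ordinary air is hard: the small model would need to run at 1000 m/s, roughly Mach 3, which is one reason pressurized and cryogenic wind tunnels (which raise density rather than speed) exist.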
As an expert in your field, you know which nondimensional parameters pertain to your experiment. The applicability of your measurements expands via nondimensional parameters. In your experiment, plan to ensure that you record all the factors, including environmental factors, so that all pertinent nondimensional parameters can be reported.
Upon reflection, percent (%) is likely the most familiar nondimensional number of all.
2.4.6 Listening to Our Theoretician Compatriots
Experimentalists and theoreticians need each other.
Richard Feynman, whose quote leads this chapter, was an experimentalist as well as a theoretician.
Einstein, whose paraphrased quote led off Chapter 1, received his Nobel Prize for explaining experiments on the photoelectric effect. Einstein's theory of Brownian motion showed that prior experiments provided indirect evidence that molecules and atoms exist.
Yet just as Feynman stated, Einstein's theory of general relativity was “just a theory” until Arthur Eddington gave it experimental verification during a total solar eclipse in 1919.
NASA provides a good example of the interdependence of theory and experiment. The National Advisory Committee for Aeronautics (NACA) was the precursor of NASA; "Aeronautics" is the first A of NASA. As airplane designs rapidly advanced during the twentieth century, NASA deliberately adopted a four‐pronged approach: theory, scale‐model testing (wind‐tunnel experiments), full‐scale testing (in‐flight experiments), and numerical simulation (computational models verified by experiment). Each of the first three prongs has always been essential (Baals and Corliss 1981). Since the 1980s, numerical simulation has aided theory. Theory and experiment need each other. Since our numerical colleagues often refer to their "numerical experiments," we advocate that they report the uncertainties of their results appropriately, just as we experimentalists do.
The science of fluid flow remains important, as another quote (from a personal letter) from Feynman makes clear:
Turbulence is the most important unsolved problem of classical physics.
Feynman spoke of basic turbulence. Turbulence can be further complicated by heat transfer; yet more by mass transfer; yet more by chemical reactions or combustion; yet more by electromagnetic interactions. Turbulence is key for weather, for breath and blood, for life, for flight, and for the circulation within stars and their evolution. Turbulence remains unsolved to this day.
To consider more viewpoints, we include three panels:
Panel 2.1, “Positive Consequences of the Reproducibility Crisis”
Panel 2.2, “Invitations to Experimental Research, Insights from Theoreticians”
Panel 2.3, “Prepublishing Your Experiment Plan”
This text focuses on experimental strategies, planning, techniques of analysis, and execution. That is our expertise, in addition to thermo‐fluid physics. We have taught experimental planning to students in many fields using draft notes of this text for more than 60 years.
Panel 2.1 Positive Consequences of the Reproducibility Crisis
As researchers and instructors, we have been promoting experimental repeatability and uncertainty analysis for more than 60 years. When the work of Dr. J.P.A. Ioannidis brought the Reproducibility Crisis in the medical field to public awareness, we welcomed the positive impact it produced.
Two papers by Dr. Ioannidis in 2005 brought the Reproducibility Crisis to the fore. One was the Journal of the American Medical Association (JAMA) article mentioned in Chapter 1, "Contradicted and Initially Stronger Effects in Highly Cited Clinical Research" (Ioannidis 2005a). The second was "Why Most Published Research Findings Are False" (Ioannidis 2005b).
The two 2005 articles by Dr. Ioannidis appear to be a watershed moment for science. In various scientific disciplines, researchers have produced guidelines adopted by major publishers.
Going deeper into the 2005 JAMA article, Dr. Ioannidis set notably stringent criteria for the publications he evaluated. He considered only:
“All original clinical research studies published in 3 major general clinical journals or high‐impact‐factor specialty journals
in 1990–2003 and
cited more than 1000 times in the literature…”
Dr. Ioannidis then compared these “results of highly cited articles … against subsequent studies of comparable or larger sample size and similar or better controlled designs. The same analysis was also performed comparatively for matched studies that were not so highly cited.”
Although drawn from the same article, this collection of research studies fared better than those mentioned in our Chapter 1. "Of 49 highly cited original clinical research studies, 45 claimed that the intervention was effective. Of these, 7 (16%) were contradicted by subsequent studies, 7 others (16%) had found effects that were stronger than those of subsequent studies, 20 (44%) were replicated, and 11 (24%) remained largely unchallenged."
In the same year, Dr. Ioannidis published “Why Most Published Research Findings Are False,” a provocative title. Although the wording appears to encompass all fields, the examples in the article were medical experiments. In order to make his evaluations, he adopted a key metric called the “Positive Predictive Value” (PPV). From this research, Dr. Ioannidis deduced the following “corollaries about the probability that a research finding is indeed true”:
Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.
Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.
Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.
Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical models in a scientific field, the less likely the research findings are to be true.
Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.
Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.