http://creation.com/science-fraud-epidemic
Why the epidemic of fraud exists in science today
Page spread from the November 1999 National Geographic article on the ‘missing link’ fossil Archaeoraptor, which later proved to be a hoax.
The Piltdown hoax is one of the most famous cases of fraud in science.1 Many Darwinists, though, claim that this case is an anomaly, and that fraud is no longer a problem today. However, the cases of fraud or deception in the field of evolution include not only the Piltdown Man, but Archaeoraptor, the peppered moth, the Midwife Toad, Haeckel’s embryos, Ancon sheep, the Tasaday Indians, Bathybius haeckelii and Hesperopithecus (Nebraska Man)—the missing link that turned out to be a pig.2-8 Actually, fraud as a whole is now ‘a serious, deeply rooted problem’ that affects no small number of contemporary scientific research studies, especially in the field of evolution.9 Scientists have recently been forced by several events to recognize this problem and try to deal with it.10
Most of the known cases of modern-day fraud are in the life sciences.11 In the biomedical field alone, fully 127 new misconduct cases were lodged with the Office of Research Integrity (US Department of Health & Human Services) in the year 2001. This was the third consecutive rise in the number of cases since 1998.12 This concern is not of mere academic interest, but also profoundly affects human health and life.13,14 Much more than money and prestige are at stake—the fact is, fraud is ‘potentially deadly’, and in the area of medicine, researchers are ‘playing with lives’.15 The problem is worldwide. In Australia, misconduct allegations have created such a problem that the issue has even been raised in the Australian Parliament, and researchers have called for an ‘office of research integrity’.16
One example is the widely quoted major immunological research studies related to kidney transplantation done by Zoltan Lucas (M.D. from Johns Hopkins and Ph.D. in biochemistry from MIT) that recently were found to contain fraudulent data.17 Dr Lucas was an associate professor of surgery at Stanford University. His graduate student, Randall Morris, discovered that Lucas had written reports on research that Morris knew had not been carried out. The reason Morris knew this was that he was to have been involved in the research! The studies were published in highly reputable journals and, no doubt, many other researchers also relied upon the results for their work. As a result of the modern fraud epidemic, a Nature editorial concluded:
‘Long gone are the days when scientific frauds could be dismissed as the work of the mad rather than the bad. The unhappily extensive record of misconduct suggests that many fraudsters believe their faked results, so attempts at replication by others represent no perceived threat.’18
Or perhaps they believe that no-one will attempt to replicate their work, at least not for some time. (Much scientific work is never replicated; medical research is more likely to be, because of its importance for human health, but even then replication may take years.) The fraud problem is so common that researchers who maintain a clean record are sometimes given special recognition, as was Italian scientist Franco Rasetti: ‘Today, we hear a great deal about scientific fraud, and commissions and committees on scientific ethics abound. For Rasetti, scientific honesty was axiomatic and automatic.’19
Fraud exists to such an extent that one study about the problem concluded that ‘science bears little resemblance to its conventional portrait’.20 Although more common among researchers working alone, ‘fakery still abounds’ even in group projects watched over by peer review.21 The accused include some of the greatest modern biologists, and the problem exists at Harvard, Cornell, Princeton, Baylor, and other major universities. In a review of fraud, a Nature editorial noted many cases involved not young struggling researchers, but rather experienced, well-published scientists. This Nature editorial concluded,
‘that the dozen or so proven cases of falsification that have cropped up in the past five years have occurred in some of the world’s most distinguished research institutions—Cornell, Harvard, Sloan-Kettering, Yale and so on—and have been blamed on people who are acknowledged by their colleagues to have been intellectually outstanding. The pressure to publish may explain much dull literature, but cannot of itself account for fraud.’22
The fraud ranges from fudging data to plagiarizing large sections from other articles. A Nature editorial concluded that plagiarism was growing, especially in molecular biology.23 To prevent ‘leaks’, some researchers have even put incorrect information in their papers, correcting it just prior to publication.24 And the problem will likely get worse: we can expect misconduct to occur more often in the future—in particular in biomedicine, where the pressure to publish is very high.25
Fraud among Darwinian researchers
The scientific method is an ideal, but it is especially difficult to use to ‘prove’ certain science hypotheses, such as those involving origins science. A good example of this difficulty is ‘the theory of evolution (which) is another example of a theory highly valued by scientists … but which lies in a sense too deep to be directly proved or disproved’.26 A major issue in dealing with this problem is that no small amount of arrogance exists within the scientific community. Some scientists believe that they know best, and only they have the right to ask questions—and if they don’t, no-one else should.4
One famous case of evolution fraud, that of Viennese biologist Paul Kammerer, was the subject of a classic book titled The Case of the Midwife Toad.6 Kammerer painted ‘nuptial pads’ with India ink on the feet of the toads he was studying. Yet, even though his work, which supposedly supported the Lamarckian theory of evolutionism, was exposed, it was used for decades to support the particular evolution ideology of Soviet scientists such as Trofim D. Lysenko.27 In a similar case, William Summerlin faked the results of a test in the 1970s simply by drawing black patches on his white test mice with a felt-tip pen.28
A recent case of fraud in evolution is that of Archaeoraptor, the ‘evolutionary find of the century’ that purportedly proved bird-dinosaur evolution. The National Geographic Society ‘trumpeted the fossil’s discovery … as providing a true missing link in the complex chain that connects dinosaurs to birds’.3 The authenticity of Archaeoraptor, which ‘some prominent paleontologist’ saw as ‘the long-sought key to a mystery of evolution’, was reviewed by Simons.3 Simons’ research concluded that the fossil was a fraud. High-resolution x-ray CT work found ‘unmatched pieces, skillfully pasted over’.29 The forgery was also found to be ‘put together badly—deceptively’,29 and the affair involved ‘zealots and cranks’, ‘rampant egos clashing’, ‘misplaced confidence’ and ‘wishful thinking’.3 It was the Piltdown Man all over again. Simons adds that this is a story in which ‘none’ of those involved looks good.3
Another case involving Darwinism concerns ‘one of the world’s leading evolutionary biologists, Anders Pape Møller’, who has published over 450 articles and several books.30 A Science report noted that a
‘government committee has ruled that … Møller, is responsible for data fabricated in connection with an article that he co-authored in 1998 and subsequently retracted. … The charge … has cast a shadow over the relatively tight-knit world of behavioral ecology, the study of mating and other behaviors in an animal’s natural environment. … One point that’s indisputable is Møller’s reputation as a towering figure in the field. Møller has been a key proponent of the idea that traits such as long symmetrical tails in barn swallows, which attract potential mates, are a sign of beneficial genes. He has also shown that stress caused by environmental factors such as parasites can lead to the development of asymmetrical body parts.’30
A concern, expressed by Oxford University evolutionary biologist Paul Harvey, is the astonishing ‘number of papers he writes with new results and analyses’; these papers are now suspect,30 a fact that
‘has many journal editors pacing nervously. … Michael Ritchie of the University of St. Andrews, U.K., editor of the Journal of Evolutionary Biology and an officer of the societies that publish the journals Evolution and Animal Behaviour [said] “We need to work out what we should do and get it right. I don’t think there’s [sic] going to be any instant decisions”.’30
The problem first surfaced when a lab technician, Jette Andersen, claimed that a paper in the journal Oikos was based on fabricated data rather than on her data, as Møller had claimed. An investigation supported Andersen’s claim. Then concerns were raised over other papers. The fear now is that many of Møller’s papers are flawed. All are clearly suspect.
Some recent cases illustrate the seriousness of the problem
Unfortunately, medicine and biology, especially, have been hit hard by fraud. One study found 94 cancer papers ‘likely’ contained manipulated data.31 Two years later, many of the papers were still not retracted. This confirms the conclusion that ‘even when scientific misconduct is proven, no reliable mechanism exists to remove bad information from the literature’.31
Another case of medical fraud involved cardiologist Dr John Darsee of Harvard University Medical School. This case involved fabricating the data that formed the basis of his more than 100 publications over a period of about three years.32 This case illustrates how just a few persons can produce an enormous number of fraudulent publications. In a study of 109 of Darsee’s articles, the researchers found what can only be described as ‘bizarre’ data that could not be valid, numerical discrepancies, and numerous blatant internal contradictions.33 They also found appalling examples of errors or discrepancies that should have been discovered by the reviewers. The study concluded that the co-authors and reviewers that evaluated the papers were grossly deficient.
Another case involved a biology study that appeared to have ‘overturned a widely accepted theory on cell signaling’. The paper was retracted only
‘15 months after it was published. The retraction has rocked the cell-biology community and, say observers, has effectively ended the career of Siu-Kwong Chan, one of the paper’s co-authors. Gary Struhl, a Howard Hughes Medical Institute (HHMI) Investigator based at Columbia University, New York, and the senior author on the paper, issued the retraction on 6 February.’34 In the retraction, Struhl claims that Chan,
‘a postdoc in his lab, has admitted misreporting or failing to perform crucial experiments described in the original paper (S.-K. Chan and G. Struhl Cell 111, 265–280; 2002). Struhl discovered a problem when he repeated some of Chan’s experiments. When he didn’t get the same results as Chan, Struhl says that he confronted his former postdoc, who had by this time moved to the Albert Einstein College of Medicine in the Bronx. “When confronted with this discrepancy, S.-K. Chan informed me that most of the results … were either not performed or gave different results than presented in the paper,” Struhl wrote in the retraction. “I therefore withdraw this paper and the conclusions it reports”.’
They had worked for five years on the project before publishing their results in October 2002.
How to measure deceit
Photos by Dr M. Richardson et al., ‘There is no highly conserved embryonic stage in the vertebrates: implications for current theories of evolution and development’, Anatomy and Embryology 196(2):91–106, 1997. Copyright Springer Verlag GmbH & Co., Germany. Reproduced by permission.
Ernst Haeckel created fraudulent drawings of embryos to increase the resemblance between them and to hide their dissimilarities (top row), in order to use the idea of embryonic recapitulation to promote Darwin’s theory of evolution. The photographs in the bottom row are of actual embryos. Amazingly, Haeckel’s drawings are still used today.
Even though Broad and Wade conclude that deceit in science has been not the exception but the norm from its beginning until today, it would be helpful to have quantitative measures of the extent of deception in science, both today and in the past. In the past 30 years, for example, do four percent of all scientific papers contain fudged data? Or is it six percent, or 30 percent? The percentage depends on how we define fudging, and on whether we include unconscious fudging (experimental error or bias). Depending upon our vantage point, even one percent might be considered minor, or it might be considered epidemic.
If AIDS (acquired immune deficiency syndrome) affected just one-half of one percent of the world population, it would be considered epidemic (or more accurately, pandemic). Furthermore, even if we replicate an experiment and find that the results do not conform to those in the original study, it is still difficult to ‘prove’ deceit because dishonesty in science can often be covered up rather easily. If a scientist claims certain results were produced, unless one’s laboratory assistant testifies that, indeed, the data were fudged, the most we can prove is that, for some reason, replication consistently fails to support the original result.
Reasons why deceit is common
The present system of science actually encourages deceit. Careers are at stake, as are jobs, grants, tenure and, literally, one’s livelihood.35 This is partly a result of the ‘publish or perish’ culture endemic in academia. Broad and Wade point out that ‘grants and contracts from the Federal government … dry up quickly unless evidence of immediate and continuing success is forthcoming’. The motivation to publish, to make a name for oneself, to secure prestigious prizes, or to be asked to join an educational board, all entice cheating. Broad and Wade’s frightening conclusion is, ‘corruption and deceit are just as common in science as in any other human undertaking’. As Broad and Wade stress, scientists ‘are not different from other people. In donning the white coat at the laboratory door, they do not step aside from the passions, ambitions, and failings that animate those in other walks of life.’36
Fraud usually does not involve totally making up data; most often it involves alterations, ignoring certain results, and fudging the data just enough to turn a close but non-significant result into a statistically significant difference at the alpha < 0.05 level. Whether intentional deceit is involved is not easy to determine. Dishonesty cannot be easily disentangled from normal human mistakes, sloppiness, gullibility or technical incompetence. Vested interests operate to prove one’s pet theories, causing researchers to don blinders that impede them from seeing anything other than what they want to see. Once theories are established, they tend to be written in stone, and are not easily overturned regardless of the amount of new information that may contradict the now hallowed ‘written-in-stone’ theory.
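How little fudging it can take is easy to illustrate. The following is a minimal sketch in Python, using invented numbers (not data from any study discussed here), of how quietly dropping a single inconvenient observation can move a comparison from non-significant to significant at the alpha < 0.05 level:

```python
# Minimal sketch with invented numbers: how dropping one observation
# can push a borderline comparison across the alpha = 0.05 threshold.
from scipy import stats

control   = [10.1, 9.8, 10.4, 10.0, 9.9, 10.2, 10.3, 9.7]
treatment = [10.6, 10.9, 10.4, 10.8, 10.2, 10.7, 9.2, 10.5]  # note the low 9.2

# Honest analysis: every observation retained.
_, p_all = stats.ttest_ind(treatment, control)
print(f"all data kept:     p = {p_all:.3f}")   # above 0.05, not significant

# 'Fudged' analysis: the inconvenient 9.2 quietly dropped.
trimmed = [x for x in treatment if x != 9.2]
_, p_cut = stats.ttest_ind(trimmed, control)
print(f"one value dropped: p = {p_cut:.3f}")   # now well below 0.05
```

Run honestly, the full data set gives a p-value above 0.05; with the one awkward value removed, the same test reports a ‘significant’ difference, even though nothing about the underlying experiment has changed.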
Among the other reasons for deceit is the fact that the goal of science is comprehensive theories, not a mere collection of facts. Because it is sometimes difficult to force facts to conform to one’s theories, such as in situations where there are many anomalies, a strong temptation exists to ignore facts that don’t agree with those theories. The desire to earn respect from one’s peers (and, ideally, to become eminent) has, from the earliest days of science, brought with it a temptation to consciously distort, ignore evidence, play loose with the facts, and even lie.20
Ignoring failures
Because scientific communication takes place primarily in print, there is a tendency to record only the work of those few persons who have successfully contributed to supporting a theory, and to ignore the many non-significant findings.37 Significantly, it is common for researchers, both deliberately and subconsciously, to tout the facts that support their theory, modify those that do not quite support it, and ignore those that contradict it. Often, though, the fraud is more deliberate. The case of Dr Glueck is one such example:
‘Only one month after the NIMH [National Institute of Mental Health] announced its verdict in the Breuning investigation, the medical community was shaken by yet another scandal. For 22 years internist Charles Glueck had risen steadily through the hierarchy of science. Since graduating from medical school in 1964, he had published nearly 400 papers at the furious rate of close to 17 a year. For his leading-edge research on cholesterol and heart disease Glueck had won the University of Cincinnati’s prestigious Rieveschl Award in 1980. As head of the lipid unit and the General Clinical Research Center at the university, Glueck was one of the most powerful and heavily funded scientists on staff. But last July the National Institutes of Health found that a paper of Glueck’s published in the August 1986 issue of the journal Pediatrics was riddled with inconsistencies and errors. As written, the NIH explained, the paper was utterly shoddy science, its conclusions empty.’38
One wonders how Glueck got a paper ‘riddled with inconsistencies and errors’ past the peer reviewers.
Peer review of grant applications means that the individuals who decide which applicants are awarded research money also have a major influence on what research is done. In-vogue research is funded, while research whose implications contradict a prevailing scientific belief structure, such as Darwinism, is less apt to be funded. Dalton noted that, despite the widely acknowledged problems of peer review:
‘no serious alternative has yet been proposed. “It is easy to say the system is flawed; it is harder to say how to improve it”, says Ronald McKay, a stem-cell researcher at the National Institute of Neurological Disorders and Stroke in Bethesda Maryland. One tweak to the process—asking reviewers to sign their reviews—has been experimented with. The idea is that, if reviewers are obliged to identify themselves, it will improve transparency and discourage anyone who might be tempted to abuse the process under the cloak of anonymity. Rennie is a particular enthusiast for this approach. “This is the only credible, worthwhile, transparent and honest system”, he says. “I’ve made that passionate plea, but the majority hasn’t gone along with it”.’39
There exist ‘lots of flaws in the publishing system’, largely because ‘peer review doesn’t guarantee quality’.40 Some ways to reduce the problems include publishing the names of the reviewers and giving them credit as well. Another is to publish clear and strict acceptance policies and, if a paper does not meet them, to allow it to be revised until it does.
Is science self-correcting?
The assumption that science is self-correcting was evaluated in a study by the Food and Drug Administration. The study concluded that the Breuning case mentioned above was
‘just the tip of the fraud and misconduct iceberg. Investigators at the FDA run across so much shoddy research that they have quippy terms like “Dr. Schlockmeister” for a bad scientist, and “graphite statistics” for data that flow from the tip of a pencil. Every year, as a quality-control measure, the FDA conducts investigations of key studies of researchers involved in getting new drugs to the agency for approval. “This is the last stop for drugs before they go public”, explains Alan Lisook, who heads the FDA investigations. “You’d think we’d get some of the cleanest science around.” But in 1986, when he analyzed the investigations of the previous ten years, Lisook compiled some shocking numbers. Nearly 200 studies contained so many flaws that the efficacy of the drug itself could be called into question. Some 40 studies exhibited not simple oversights but recklessness or outright fraud. In those ten years the FDA banned more than 60 scientists from testing experimental drugs, after finding that they had falsified data or engaged in inept research. As Sprague says, “something is clearly not working”.’41
The claims made about peer review are a myth, and as a result, ‘much of what is published goes unchallenged, may be untrue, and probably nobody knows or even cares’.42 Anderson evaluated attempts to defend the technique, such as the view of Science editor-in-chief Donald Kennedy that ‘peer review has never been expected to detect scientific fraud’. Kennedy concluded that this defense may be partly valid, but the anomalies in some fraudulent papers published in Science and Nature were hardly subtle. An example he gave was the case of Jan Hendrik Schön, who in one paper:
‘used the same curve to represent the behaviours of different materials, and in another he presented results that had no errors whatsoever. Both journals stress that papers are chosen on technical merit and reviewers for their technical skills. Should not the manuscript editors or reviewers have remarked on these discrepancies? These papers were, after all, making claims of huge importance to industry and academia. Ultimately, Schön was unmasked by scientists not engaged in formal peer review.’43
The fact is ‘science has its pathogenic side’ for reasons that include a ‘lust for power’ and ‘greed’ that
‘can infect scientists as well as anyone else. Anyone who has worked in the laboratory, on a university campus, or read the history of science is well aware of the overweening pride, jealousy and competition that can infect those working in the same field. In the effort to “succeed”, some scientists have “cooked” their data; that is, they have adjusted the actual results to fit what they were supposed to get.’44
A major problem underlying fraud lies in how scientists view science itself: they ‘see their own profession in terms of the powerfully appealing ideal that the philosophers and sociologists have constructed. Like all believers they tend to interpret what they see of the world in terms of what the faith says is there.’45 And, unfortunately, science is a ‘complex process in which the observer can see almost anything he wants provided he narrows his vision sufficiently’.46 An example of this problem is James Randi’s conclusion that scientists are among the easiest of persons to fool with magic tricks.47 The problem of objectivity is very serious because most researchers believe passionately in their work and the theories they are trying to prove. While this passion may enable the scientist to sustain the effort necessary to produce results, it may also colour and even distort those results.
Many examples exist to support the conclusion that researchers’ propensity for self-delusion is particularly strong, especially when they are examining ideas and data that impugn their core belief structure. The fact is, ‘all human observers, however well trained, have a strong tendency to see what they expect to see’.48 Nowhere is this more evident than in the admittedly highly emotional area of evolution.
The effect of experimenters’ perceptions on their results was studied by Robert Rosenthal in a now-classic set of experiments.49 In one of these experiments, Rosenthal asked researchers to test what he said were ‘maze bright’ and ‘maze dull’ rats. The rats had actually been divided randomly into the two groups, and none was specially trained. The ‘maze bright’ rats were nevertheless ‘rated’ as superior by the researchers when, in fact, they were not. The experimenters saw what they wanted, or expected, to see (the phenomenon is now called the ‘expectancy effect’), perhaps unconsciously: the researchers may have pressed the stopwatch button a fraction of a second too early for the ‘maze bright’ rats and a fraction of a second too late for the ‘maze dull’ rats. Other similar experiments have produced similar results.
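The size of the apparent difference such a tiny, systematic bias can manufacture is easy to simulate. Below is a minimal sketch (with invented timing figures, not Rosenthal’s data) in which both groups of rats perform identically, yet a stopwatch bias of a fraction of a second in each direction produces a clear gap between the ‘bright’ and ‘dull’ groups:

```python
# Minimal sketch with invented figures: two identical groups of rats,
# measured by an experimenter whose expectations bias the stopwatch.
import random

random.seed(1)

def true_run_time():
    # Both groups draw from the same distribution: no real difference exists.
    return random.gauss(30.0, 3.0)   # seconds to complete the maze

BIAS = 0.3  # watch stopped ~0.3 s early for 'bright' rats, ~0.3 s late for 'dull'

bright = [true_run_time() - BIAS for _ in range(1000)]
dull   = [true_run_time() + BIAS for _ in range(1000)]

mean = lambda xs: sum(xs) / len(xs)
print(f"'maze bright' mean time: {mean(bright):.2f} s")
print(f"'maze dull'   mean time: {mean(dull):.2f} s")
# The roughly 0.6 s gap comes entirely from the experimenter's expectancy,
# not from the rats.
```

Nothing in the simulation distinguishes the two groups except the direction of the bias; the ‘result’ is produced by expectation alone.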
Use of science as a bullying tactic
One method of discrediting unpopular theories, especially those involving biological origins, is to label them ‘non-science’ and the competing theories ‘science’. Sociologists have for years explored the pernicious effects of labelling via dichotomizing concepts. This method then places a broad positive term on one half of the artificial dichotomy, and a broad negative term on the other half. The appropriate response to any science controversy is to argue each proposition solely on its merits, using only the tools of science.
In their exploration of fraud in science, Broad and Wade conclude that the term ‘science’ is often a label used to imply that something is true or false. In their words, the conventional wisdom concludes that:
‘science is a strictly logical process, objectivity is the essence of the scientist’s attitude to his work, and scientific claims are rigorously checked by peer scrutiny and the replication of experiments. From this self-verifying system, error of all sorts is speedily and inexorably cast out.’50
The authors then show why this common belief about science is false. The result of their investigation can help us to understand the activity of science from a far more realistic standpoint than is common today. They demonstrate that the supposedly ‘fail-safe’ mechanisms of scientific inquiry often do not correct the frauds that they claim have become ‘epidemic’ in modern science. The desire to be ‘first’, the need to obtain research grants, trips to exotic places for conferences, and the lure of money and prestige lead many scientists to abandon any lofty ideals they may once have had as neophyte scientists.
Conclusions
The published literature, and the interviews I have carried out with faculty at a medical school, consistently confirm the problem of fraud in science today. The reasons for this include money, tenure, promotions, grant-renewal concerns, professional rivalry, and the need to prove one’s theories and ideas. Another factor is the rejection of Christianity and moral absolutes, which has resulted in a collapse of the moral foundation that is critical in controlling fraud. Fraud is especially a problem in the fields attempting to support Darwinism, and in these fields it tends to take a long time to root out. Hundreds of well-documented cases of fraud have been discussed in the literature.9,13,20,51 Unfortunately, apart from replication (which is uncommon in many fields), fraud in science is difficult to detect. Usually, laboratory assistants and colleagues are the ones who uncover fraud, and they are often unwilling to report it,9 because doing so could cost them friends, tarnish their reputation, and result in retaliation. Roman claims that for these reasons, snitchers are ‘rare’.9
As a result, fraud in science is considered by many to be endemic.20 Biological research is one of the chief areas of concern. Some conclude that over 10% of all researchers in this area are less than honest. Indeed, probably most researchers have quoted data that are fraudulent, or at least inaccurate. Few extensive investigations of fraud have been carried out under the present system (and the cases unearthed probably represent only the tip of the proverbial iceberg).
References
1. Miller, R., The Piltdown Men, St. Martin’s Press, New York, 1972.
2. Bergman, J., Ancon sheep: just another loss mutation, TJ 17(1):18–19, 2002.
3. Simons, L.M., Archaeoraptor fossil trail, National Geographic 198(4):128–132, 2000.
4. Hooper, J., An Evolutionary Tale of Moths and Men: The Untold Story of Science and the Peppered Moth, W.W. Norton, New York, 2002.
5. Wells, J., Haeckel’s embryos and evolution, The American Biology Teacher 61(5):345–349, 1999.
6. Koestler, A., The Case of the Midwife Toad, Random House, New York, 1972.
7. Pennisi, E., Haeckel’s embryos: fraud rediscovered, Science 277:1435, 1997.
8. Assmuth, J. and Hull, E.R., Haeckel’s Frauds and Forgeries, Examiner Press, Bombay and Kenedy, London, 1915.
9. Roman, M., When good scientists turn bad, Discover 9(4):50–58, 1986; p. 58.
10. Abbott, A., Science comes to terms with the lessons of fraud, Nature 398:13–17, 1999; p. 13.
11. Campbell, P., Reflections on scientific fraud, Nature 419:417, 2002.
12. Check, E., Sitting in judgment, Nature 419:332–333, 2002; p. 332.
13. Kohn, A., False Prophets: Fraud and Error in Science and Medicine, Barnes & Noble Books, New York, 1988.
14. Crewdson, J., Science Fictions: A Massive Cover-Up and the Dark Legacy of Robert Gallo, Little Brown, New York, 2002.
15. Roman, ref. 9, p. 52.
16. Dennis, C., Misconduct row fuels calls for reform, Nature 427:666, 2004.
17. Kohn, ref. 13, pp. 104–110.
18. Campbell, ref. 11, p. 417.
19. Kerwin, L., Obituary: Franco Rasetti (1901–2001), Nature 415:597, 2002.
20. Broad, W. and Wade, N., Betrayers of the Truth: Fraud and Deceit in the Halls of Science, Simon and Schuster, New York, p. 8, 1982.
21. Roman, ref. 9, p. 53.
22. Anonymous, Is science really a pack of lies? Nature 303:361–362, 1981; p. 361.
23. Dewitt, N. and Turner, R., Bad peer reviewers, Nature 413(6852):93, 2001.
24. Dalton, R., Peers under pressure, Nature 413:102–104, 2001; p. 104.
25. Abbott, A. and Schwarz, H., Dubious data remain in print two years after misconduct inquiry, Nature 418:113, 2002.
26. Broad and Wade, ref. 20, p. 17.
27. Kohn, ref. 13, p. 47.
28. Chang, K., On scientific fakery and the systems to catch it, The New York Times Science Times, 15 October 2002; pp. 1, 4.
29. Simons, ref. 3, p. 130.
30. Vogel, G., Proffitt, F. and Stone, R., Ecologists roiled by misconduct case, Science 303:606–609, 2004; p. 606.
31. Abbott and Schwarz, ref. 25, p. 113.
32. Stewart, W.W. and Feder, N., The integrity of the scientific literature, Nature 325:207–216, 1987.
33. Stewart and Feder, ref. 32, p. 208.
34. Struhl, G., Cell 116:481, 2004.
35. Dalton, ref. 24, p. 104.
36. Broad and Wade, ref. 20, p. 19.
37. Broad and Wade, ref. 20, p. 35.
38. Roman, ref. 9, p. 57.
39. Dalton, ref. 24, p. 103.
40. Muir, H., Twins raise ruckus, New Scientist 176(2369):6, 2002.
41. Roman, ref. 9, p. 55.
42. Kohn, ref. 13, p. 205.
43. Kennedy, D., More questions about research misconduct, Science 297:13, 2002.
44. Zabilka, I.L., Scientific Malpractice: The Creation/Evolution Debate, Bristol Books, Lexington, p. 138, 1992.
45. Broad and Wade, ref. 20, p. 79.
46. Broad and Wade, ref. 20, pp. 217–218.
47. Randi, J., Flim Flam! Prometheus, Buffalo, 1982.
48. Broad and Wade, ref. 20, p. 114.
49. Rosenthal, R., Experimenter Effects in Behavioral Research, Irvington, New York, pp. 150–164, 1976.
50. Broad and Wade, ref. 20, p. 7.
51. Adler, I., Stories of Hoaxes in the Name of Science, Collier Books, New York, 1962.