Leprechaun Hunting & Citogenesis

Many claims, about history in particular, turn out to be false when traced back to their origins, forming a kind of academic urban legend. These “leprechauns” are particularly pernicious because they are widely repeated thanks to their growing apparent trustworthiness, yet difficult to research & debunk due to the difficulty of following deeply-nested chains of citations through ever more obscure sources. This page lists instances I have run into.

A major source of leprechaun transmission is the frequency with which researchers do not read the papers they cite: because they do not read them, they repeat misstatements or add their own errors, further transforming the leprechaun and adding another link in the chain for anyone seeking the original source. This can be quantified by checking statements against the original paper, and by examining the spread of typos in citations: someone who reads the original will fix a typo in the usual citation, or at least is unlikely to make the same typo, and so will not repeat it. Both methods indicate high rates of non-reading, explaining how leprechauns can propagate so easily.

Leprechaun Hunting and Historical Context

In trying to chase down references to obtain their fulltext and the original primary sources (so much easier in the era of search engines), I sometimes wind up discovering that the claims as stated are blatantly false, such as Kepler’s portrait, and the end product of a long memetic evolution (often politically-biased or Whiggish, and sometimes with devastating consequences). These urban legends or academic myths were dubbed “leprechauns” by Laurent Bossavit in his book The Leprechauns of Software Engineering: How folklore turns into fact and what to do about it, because in tracing well-known claims about programming, at the end of the rainbow of a clear useful important claim repeated ad nauseam for decades, one often discovers that the basis for the claim is fool’s gold, vanishing the next morning like a leprechaun’s pot of gold—the original source was a terrible experiment, an anecdote, completely irrelevant, or even outright fictional. (Not to be confused with Replication Crisis-style issues, where claims disappear all the time, but because they were based on misleading data/analyses or deliberate fraud; with leprechauns and urban legends, it’s more the cumulative effect of carelessness and ‘game of telephone’ effects, possibly with some bias as the seed of a dubious outlier claim which is amplified due to its memetic properties.)

Leprechaun Examples

A list of examples of claims I have had the misfortune to spend time looking into which, on closer investigation, turned out to vanish with the dew:

  • Most discussions of Thomas Robert Malthus are erroneous and show the speaker has not actually read his Essay on the Principle of Population.

  • Supposedly a man with hydrocephalus destroying >90% of his brain graduated with a math degree; this can’t be directly shown to be false, but it traces back to popular articles, and real research on this anecdotal case was, suspiciously, never published; the weight of the evidence about it, and contrasts with other hydrocephalus cases (some affected by research fraud), strongly suggest error or omission of damaging details.

  • Drapetomania is cited as an example of the Antebellum South’s medicalization of slaves the better to oppress them, ignoring the fact that it was supported by only its inventor, was mocked, had no practical consequences, and was of less importance to its time than Time Cube is to our own.

  • The British science writer Dionysius Lardner supposedly scoffed at the idea of fast trains, claiming “Rail travel at high speed is not possible because passengers, unable to breathe, would die of asphyxia”; but there is no good source that he ever said that, and it seems to have been made up out of whole cloth in 1980 by someone who couldn’t spell his first name right.

  • Bicycle face was claimed by an encyclopedia and a few other feminist books to be a disorder pushed by the English medical establishment to discourage women from bicycling & keep them under control; but the scanty primary sources barely supported its existence as an obscure concept known from a few newspaper columns, and certainly not the misogynist tool of oppression it was depicted as. (There appear to be similar problems with Rachel Maines’s claims about Victorian doctors’ use of vibrators, in that the sources simply do not support her claims: “A Failure of Academic Quality Control: The Technology of Orgasm”, Lieberman & Schatzberg2018.)

  • A feminist wrote that ‘This was a time before women had the right to vote. If they did attend college at all, it was at the risk of contracting “neuralgia, uterine disease, hysteria, and other derangements of the nervous system” (according to Harvard gynecologist Edward H. Clarke)’; this was a grossly out-of-context quote which libeled a man with noble & progressive beliefs, as I pointed out in my comment.

  • There are many attributions to the great physicist Lord Kelvin of a line which runs “X-rays are probably a hoax” or “X-rays are frauds” or somesuch; a closer investigation shows that there are no primary quotations, and that the real context seems to have been his reaction to sensationalized newspaper articles on the discovery of X-rays and that in any case, he accepted X-rays as soon as he read the scientific paper describing their discovery. (He probably also did not say “Radio has no future.”)

  • We all know spinach has lots of iron—or does it not, or does it not not have lots of iron? Let’s go deeper:

    Hamblin1981 reveals that the widely-held belief that Popeye eats spinach because it contains lots of iron is a myth, and spinach has normal iron amounts; a myth ultimately caused by sloppy German chemists typoing a decimal point and uncritically repeated since then, as an example of leprechauns/urban legends/errors in science…

    Which Sutton2010 traces the versions of, ultimately finding the spinach myth to be itself a myth, with no decimal point involved at any point, and the myth coming to Hamblin, as Hamblin agrees, via Reader’s Digest…

    Except Rekdal2014 points out that the story was indeed published in Reader’s Digest—but 8 years afterward…

    And Joachim Dagg in 2015 finds a decimal-point error elsewhere, for the iron content of beans, which was debunked by a Bender, and the debunking passed on to Hamblin with a confusion into spinach…

    But Sutton, in 2018, accuses Dagg and another of being obsessive cyberstalkers out to discredit Sutton’s work—proposing that Darwin plagiarized evolution, a revelation covered up by “Darwin cultists”—and that Dagg’s interest in the spinach myth-myth is merely part of an epic multi-year harassment campaign:

    Meanwhile, in 2018, Dagg, who like Derry cyberstalks me obsessively around the Internet eg. posting obsessive juvenile comments on the Amazon book reviews that I write etc (eg. here), writes in the Linnean Society paper in which he jealously plagiarises what he proves in his own words he prior knew (eg. in 2014 and later here) to be my original Big Data IDD “Selby cited Matthew” discovery, thanks the malicious and jealous intimidating cyberstalker Derry and his friend Mike Weale. Notably, Weale cited my original (Sutton2014) (Selby and six other naturalists cited Matthew pre-1858) prior-published peer reviewed journal bombshell discovery in his 2015 Linnean Society paper and openly thanks me for assisting him with that paper. He also thanks Dagg in the same paper. As further proof of his absolute weird obsession with me, Dagg (here) also jealously retraces all my prior-published steps in my original and now world-famous spinach, decimal point error supermyth bust. I think he was trying—but once gain failing (here obsessing most desperately about me and my research once again)—to discredit me anyway he could, which is the usual behavior of obsessed stalking cultists, unable to deal with the verifiable new cult-busting facts they despise, so going desperately after the reputation of their discoverer instead. His Linnean Society Journal friend Derry is totally obsessed with me for the very same reason. He too, for apparently the very same reason, tries but also fails to discredit the spinach supermythbust on his desperate pseudo-scholarly obsessive stalker site (here). What a pair of jealous and obsessive sad clowns they are.

    So, can we really trust Dagg or Sutton…?

  • Plagiarist Johann Hari claims that prohibitionist Harry J. Anslinger surveyed 30 scientific experts about the safety of marijuana, ignored the 29 telling him it was safe, and based his anti-marijuana campaign on the 1 remaining scientist; each point there is false.

    In actuality, after checking Hari’s sources: Anslinger did not survey them; rather, they had been debating the ban proposal internally & the AMA provided Anslinger excerpts of their opinions. They were generally not eminent scientists but pharmacists & drug-industry representatives; the holdout did not say marijuana was dangerous but merely described a doctor of his acquaintance who had been severely addicted to it, explicitly noting that “This may be an exceptional case”; and Anslinger didn’t base his campaign on it—or even mention it publicly—although he did save the exception to the Bureau files on marijuana (which is where Hari found it).

  • In his book Fragments of an Anarchist Anthropology, David Graeber claims Nazi rallies were “inspired by” Harvard pep rallies, but without any sources; I investigated this in more depth and concluded that the connection was real but far more tenuous than Graeber’s summary. (Graeber’s books make an extraordinary number of false claims.)

  • The Wikipedia article on shampoo cited popular science writer Mary Roach as summarizing NASA & Soviet research as indicating shampoo is necessary, while the relevant passage seems to say the opposite.

  • CS theoretician Edsger Dijkstra is known for a quote that “Computer Science is no more about computers than astronomy is about telescopes”, but it’s unclear he ever said it and it may have actually been said by either Hal Abelson or one of 3 obscure writers.

  • A famous quote by Oliver Heaviside turns out to be stitched together from no less than 4 different sources (3 different places in 2 Heaviside books, and a later commentator describing Heaviside’s philosophy of science).

  • The “Lindy effect” is the claim that a large volume of output predicts a lower probability of terminating soon & more future output (eg. in writing novels), as happens under certain statistical distributions (see the worked example after this list); it was credited as originating in a 1964 The New Republic magazine article by Albert Goldman; obtaining a copy, however, I learn that Goldman’s actual observation of “Lindy’s Law” was that comedians appeared to have fixed amounts of material, and so the more output from a comedian, the more likely his TV career is about to terminate—that is, the opposite of the “Lindy effect” as defined by Nassim Nicholas Taleb.

  • AI researchers like to tell the cautionary story of a neural network learning not to recognize tanks but time of day which happened to correlate with tank type in that set of photographs; unsurprisingly, this probably did not happen.

    There appear to be several similar AI-related leprechauns. The infamous Microsoft Tay bot, which was supposedly educated by 4chan into being evil, appears to have been mostly a simple ‘echo’ function (common in chatbots or IRC bots); the non-“repeat after me” Tay texts are generally short, generic, and cherrypicked out of tens or hundreds of thousands of responses, and it’s highly unclear if Tay ‘learned’ anything at all in the short time it was operational. A “racist photo cropping” algorithm on Twitter caused a ruckus in 2020 when people cherrypicked examples of ‘bias’ using pairs of photos where the black person was badly cropped; but Twitter did not confirm the bias, stated that their testing had specifically checked for it, and more extensive testing with up to 100 pairs showed roughly 50:50 crops (reminiscent of the ‘gorilla’ Google photo classification, where people declined to note that it also classified white people as ‘seals’). An Amazon hiring algorithm was supposedly biased against women and in favor of lacrosse-playing men—except said algorithm was never used, and research on it stopped because it was exhibiting chance-level performance. And Cambridge Analytica’s political ads, supposedly powered by AI, were just a giant scam and could not possibly have had the effects attributed to them, both because advertising has absolutely tiny effects even with vastly more comprehensive datasets than Cambridge Analytica had access to, and because the Trump campaign fired Cambridge Analytica early on.

  • A more contemporary example comes courtesy of Mt. Gox: everyone ‘knew’ it was started as an exchange for trading Magic: the Gathering cards, until I observed that my thorough online research turned up no hard evidence of this, but rather endless Chinese whispers; the truth, as revealed by founder Jed McCaleb, turned out to be rather stranger.

  • Researching “Laws of Tech: Commoditize Your Complement”, I learned that Netscape founder Marc Andreessen’s infamous boast that web browsers would destroy the Microsoft Windows OS monopoly by reducing Windows to a “poorly debugged set of device drivers” is ascribed by Andreessen himself to Robert Metcalfe.

  • More minorly, I’ve corrected a New York Times movie review & an ars technica computer crime article.

  • “Littlewood’s Law of Miracles” appears to have been coined not by Littlewood but by Freeman Dyson.

  • Richard Feynman’s anecdote about “Mr. Young” & methodological errors in psychology studies of rats turns out to be mostly right, but it gets details wrong, making the original especially hard to find.

  • Carthage was not sown with salt.

  • There is no edible honey in Egyptian tombs.

  • Walpole quote: a popular false quote, attributed to writer & politician Horace Walpole (1717–1797), runs

    The whole secret of life is to be interested in one thing profoundly and a thousand other things well.

    This is a stringent demand, in requiring one to know “well” not just one thing, but a thousand things. Can even an industrious intellectual live up to this standard?

    After seeing it quoted in an issue of The Browser newsletter, I was immediately suspicious it was apocryphal, as this does not sound like an English writer who died in 1797—but like a much later writer (or at least, like a heavily-distorted version of a 1700s original).

    Searching eventually led me to Wikiquote, which informed me that it was actually written by Hugh Walpole, a noted but now obscure novelist (who died at the more appropriate date of 1941), in an obscure book fortunately available on the Internet Archive. But I could not find it there. Wikiquote noted, oddly, a second publication, in some educational journal I’ve never heard of. (Walpole had taught earlier, but had turned to fulltime writing by this point, so his inclusion is a little odd.)

    While I could not find it on the IA, it was available through HathiTrust, where at last I could verify the quote on page 342, which reads there (emphasis added):

    The whole secret of life is to be interested in one thing profoundly and a thousand other things *as* well.

    This changes the meaning! The real Walpole quote is not exhorting us to impossibly master an endless array of topics, but instead, to maintain a lively curiosity & flexibility.

    So, the quote turned out to be: misattributed to the wrong person 2 centuries prior, to the wrong book, and quoted wrongly, in a way which changed the meaning substantially.
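
As the worked example promised in the “Lindy effect” item above (my own illustration, using a Pareto-distributed lifetime—a convenient choice which neither Goldman nor Taleb commits to): if a lifetime T has survival function S(t) = (t₀/t)^α for t ≥ t₀ with tail index α > 1, then the expected remaining lifetime, conditional on having survived to age t, is

```latex
% Expected residual lifetime of a Pareto variable (illustrative; assumes alpha > 1).
% Survival function: S(t) = (t_0/t)^\alpha for t >= t_0.
E[T - t \mid T > t]
  = \int_t^\infty \frac{S(s)}{S(t)}\, ds
  = \int_t^\infty \left(\frac{t}{s}\right)^{\alpha} ds
  = \frac{t}{\alpha - 1}
```

which grows linearly with the age already attained: the longer something has survived, the longer we should expect it to keep going. Goldman’s comedians, by contrast, burn through a fixed stock of material and so behave like a distribution with an increasing hazard rate, where more past output predicts less future output.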

Citogenesis: How Often Do Researchers Not Read The Papers They Cite?

One fertile source of leprechauns seems to be the observation that researchers do not read many of the papers that they cite in their own papers. The frequency of this can be inferred from pre-digital papers, based on bibliographic errors: if a citation has mistakes in it, such that one could not have actually looked up the paper in a library or database, and those mistakes were copied from another paper, then the authors almost certainly did not read the paper (otherwise they would have fixed the mistakes when they found them out the hard way) and simply copied the citation.

The empirically-measured spread of bibliographic errors suggests that researchers frequently do not read the papers they cite. This can be further confirmed by examining citations to see how often citers make much more serious errors by misdescribing the original paper’s findings; the frequency of such “quotation errors” is also high, showing that the errors involved in citation malpractice are substantive and not merely bibliographic.

In reading papers and checking citations (often while hunting leprechauns or tracing epigraphs), one quickly realizes that not every author is diligent about providing correct citation data, or even reading the things they cite; not too infrequently, a citation is far less impressive than it sounds when described, or even, once you read the original, actually shows the opposite of what it is cited for. Claims, phrases, and numbers propagate, typically with their complexity gradually being worn away and turned into a catchy meme. This process will be extremely familiar to anyone fact-checking claims on social media. This helps myths propagate and makes claims seem far better supported than they really are. Since errors tend to be in the direction of impressive or cool or counterintuitive claims, this process and other systemic biases preferentially select for wrong claims (particularly politically convenient ones or extreme ones). As always, there is no substitute for demanding & finding fulltext and reading the original source for a claim rather than derivative ones.

How often do authors not read their cites? One way to check is to look for suspiciously high citation rates of difficult-to-access works: if a thesis or book is not available online or in many libraries, but it has racked up hundreds or thousands of citations, is it likely that so many time-pressed, lazy academics took the time to interlibrary-loan it from one of the only holding libraries, rather than simply cargo-culting a citation? For example, David McClelland, one of the most-cited psychologists of the 20th century & a critic of standardized testing such as IQ, self-published through his consulting company a number of books¹, which he cites in highly-popular articles of his (eg. McClelland1973/McClelland & Boyatzis1980/McClelland1994); several of these books have since racked up hundreds of citations, and yet have never been republished, cannot be found anywhere online in Amazon / Google Books / Libgen / used-book sellers, and do not even appear in WorldCat (!), which suggests that no libraries have copies of them—one rather wonders how all of these citers managed to obtain copies to read… But individual anecdotes, however striking, don’t provide an overall answer; perhaps “Achievement Motivation Theory” fans are sloppy (Barrett & Depinet1991/Barrett1994/Barrett et al 2003 note that if you actually read the books, McClelland’s methods clearly don’t work), but that doesn’t mean all researchers are sloppy.

This might seem near-impossible to answer, but bibliographic analysis offers a cute trick. In olden times, citations and bibliographies had to be compiled by hand; this is an error-prone process, but each author is likely to make different errors from another author citing the same paper, and an author who reads the original is likely to correct any copied error. On the other hand, if you cite a paper because you blindly copied the citation from another paper and never get around to reading it, you may introduce additional errors, but you definitely won’t fix any error in what you copied. So one can get an idea of how frequent non-reads are by tracing lineages of bibliographic errors: the more people who copy around the same wrong version of a citation (out of the total set of citations for that paper), the fewer of them must actually be reading it.
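
As a toy illustration of this inference (a minimal sketch of my own, not Simkin & Roychowdhury’s actual estimator—the function and all parameter values are invented): suppose each “reader” transcribes the citation from the original, only rarely introducing a fresh, idiosyncratic typo, while each “non-reader” copies a randomly-chosen earlier citer’s version verbatim, typos and all. Repeated typos can then arise only from copying, so the fraction of erroneous citations that merely repeat an earlier typo tracks the non-reading rate:

```python
import random

def simulate_citations(n_citers=1000, read_prob=0.2, typo_prob=0.05, seed=0):
    """Toy model of citation copying: a 'reader' transcribes the citation
    from the original (introducing a fresh, idiosyncratic typo with
    probability typo_prob); a 'non-reader' copies the citation verbatim,
    typos and all, from a randomly-chosen earlier citer."""
    rng = random.Random(seed)
    citations = ["correct"]  # the original, correct citation
    for _ in range(n_citers):
        if rng.random() < read_prob:
            if rng.random() < typo_prob:
                # Fresh typos are distinct: two readers almost never
                # make the same mistake independently.
                citations.append(f"typo-{len(citations)}")
            else:
                citations.append("correct")
        else:
            citations.append(rng.choice(citations))
    return citations[1:]

cites = simulate_citations()
typos = [c for c in cites if c != "correct"]
# Repeated typos can only arise from copying, so the repeat fraction
# among erroneous citations rises as the reading rate falls.
repeat_fraction = 1 - len(set(typos)) / len(typos)
print(f"{len(typos)} erroneous citations, {len(set(typos))} distinct typos, "
      f"repeat fraction = {repeat_fraction:.2f}")
```

With read_prob = 0.2, most erroneous citations are repeats of a few early typos, qualitatively matching the high misprint-repetition rates reported below; the real estimator fits a stochastic model to observed misprint distributions rather than using this crude ratio.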

Such copied errors turn out to be quite common and represent a large fraction of citations, and thus suggest that many papers are being cited without being read. (This would explain not only why retracted studies keep getting cited by new authors, but also the prevalence of misquotation/misrepresentation of research, and why leprechauns persist so long.) Simkin & Roychowdhury venture a guess that as many as 80% of authors citing a paper have not actually read the original (which I feel is too high, but which I also can’t strongly argue with, given how often I see quote errors or omissions when I check cites). From “Citation Analysis”, Nicolaisen 2007:

Garfield (1990, p. 40) reviewed a number of studies dealing with bibliographic errors and concluded that “to err bibliographically is human.” For instance, in a study of the incidence and variety of bibliographic errors in six medical journals, De Lacey, Record, and Wade (1985) found that almost a quarter of the references contained at least one mistake and 8 percent of these were judged serious enough to prevent retrieval of the article. Moed & Vriens1989 examined discrepancies between 4,500 papers from five scientific journals and approximately 25,000 articles that cited these papers, finding that almost 10 percent of the citations in the cited reference dataset showed a discrepancy in either the title, the author name, or the page number. They concluded that one cause for the multiplication of errors seemed to be authors’ copying of erroneous references from other articles. Broadus (1983) came to the same conclusion in a study of a 1975 textbook on sociobiology that included among its references an erroneous reference to a 1964 article (one word was incorrectly substituted in the title). By examining 148 subsequent papers that cited both the book and the article, Broadus could see how many authors repeated the book’s mistaken reference. He found that 23 percent of the citing authors also listed the faulty title. A similar study by Simkin & Roychowdhury2003 reported an almost 80-percent repetition of misprints.

One might hope that with modern technology like search engines and Libgen, this problem would be lessened, since it is so much easier to access fulltext and bibliographic errors matter much less when no one is actually looking up papers by page number in a row of bound volumes; but I suspect that if this were redone, the error rate would go down regardless of any improvement in reading rates, simply because researchers can now use tools like Zotero or Crossref to automatically retrieve bibliographic data, so the true non-reading rate simply becomes masked. And while fulltext is easier to read now, academic pressures are even stronger, and the volume of publications has only accelerated since the citation data in all of these studies, making it even more difficult for a researcher to read everything they know they should. So while these figures may be outdated, they may not be as obsolete as all that.
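
To illustrate the masking point: rather than hand-typing a reference, a citer today can pull canonical metadata for any DOI from Crossref’s public REST API (the api.crossref.org/works endpoint is real; the helper function and placeholder DOI below are mine), producing a typo-free citation without the paper ever being opened:

```python
import json
import urllib.request

def crossref_metadata(doi: str) -> dict:
    """Fetch canonical bibliographic metadata for a DOI from Crossref."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["message"]

# Placeholder DOI for illustration—substitute any real Crossref-registered DOI.
meta = crossref_metadata("10.1000/xyz123")
authors = ", ".join(a.get("family", "?") for a in meta.get("author", []))
title = meta.get("title", ["?"])[0]
journal = meta.get("container-title", ["?"])[0]
print(f"{authors}. “{title}”. {journal}.")
```

A reference list assembled this way is bibliographically flawless whether or not the citer ever opened the paper—which is exactly why typo-lineages would no longer betray non-readers.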

(And myself? Well, I can honestly say that I do not link any paper on Gwern.net without having read it; however, I have read most but not all papers I host, and I have not read most of the books I host or sometimes cite—it just takes too much time to read entire books.)

Bibliography

Individual papers:

  • “An investigation of the validity of bibliographic citations”, Broadus 1983:

    Edward O. Wilson, in his famous work, Sociobiology, The New Synthesis [9], makes reference to a pair of articles by W. D. Hamilton, but misquotes the articles’ title. No less than 148 later papers make reference to both Wilson’s book and Hamilton’s articles, by title. Thus, there is provided an opportunity to test the charge, made by some critics, that writers frequently lift their bibliographic references from other publications without consulting the original sources. Although 23% of these citing papers made the same error as did Wilson, a further perusal of the evidence raises considerable doubt as to whether fraudulent use was intended.

    (By ‘fraudulent use’, Broadus seems to mean that authors did not seem to broadly copy references indiscriminately in “wholesale borrowing” to pad out their bibliography, eg. authors who copied the erroneous citation could have, but generally didn’t, copy citations to a bunch of other Hamilton articles. He doesn’t try to argue that they all read the original Hamilton paper despite their copying of the error.)

  • “Possible inaccuracies occurring in citation analysis”, Moed & Vriens 1989:

    Citation analysis of scientific articles constitutes an important tool in quantitative studies of science and technology. Moreover, citation indexes are used frequently in searches for relevant scientific documents. In this article we focus on the issue of reliability of citation analysis. How accurate are citation counts to individual scientific articles? What pitfalls might occur in the process of data collection? To what extent do ‘random’ or ‘systematic’ errors affect the results of the citation analysis? We present a detailed analysis of discrepancies between target articles and cited references with respect to author names, publication year, volume number, and starting page number. Our data consist of some 4500 target articles published in five scientific journals, and 25000 citations to these articles. Both target and citation data were obtained from the Science Citation Index, produced by the Institute for Scientific Information. It appears that in many cases a specific error in a citation to a particular target article occurs in more than one citing publication. We present evidence that authors, in compiling reference lists, may copy references from reference lists in other articles, and that this may be one of the mechanisms underlying this phenomenon of ‘multiple’ variations/errors.

  • “Read before you cite!”, Simkin & Roychowdhury 2002 (further discussion: Simkin & Roychowdhury2006):

    We report a method of estimating what percentage of people who cited a paper had actually read it. The method is based on a stochastic modeling of the citation process that explains empirical studies of misprint distributions in citations (which we show follows a Zipf law). Our estimate is only about 20% of citers read the original…In principle, one can argue that an author might copy a citation from an unreliable reference list, but still read the paper. A modest reflection would convince one that this is relatively rare, and cannot apply to the majority. Surely, in the pre-internet era it took almost equal effort to copy a reference as to type in one’s own based on the original, thus providing little incentive to copy if someone has indeed read, or at the very least has procured access to the original. Moreover, if someone accesses the original by tracing it from the reference list of a paper with a misprint, then with a high likelihood, the misprint has been identified and will not be propagated. In the past decade with the advent of the Internet, the ease with which would-be non-readers can copy from unreliable sources, as well as would-be readers can access the original has become equally convenient, but there is no increased incentive for those who read the original to also make verbatim copies, especially from unreliable resources.

  • “Stochastic modeling of citation slips”, Simkin & Roychowdhury 2004:

    We present empirical data on frequency and pattern of misprints in citations to twelve high-profile papers. We find that the distribution of misprints, ranked by frequency of their repetition, follows Zipf’s law. We propose a stochastic model of citation process, which explains these findings, and leads to the conclusion that 70-90% of scientific citations are copied from the lists of references used in other papers.

    (Simkin & Roychowdhury have some other papers which don’t seem to do further empirical work on the non-reading question: “Copied citations create renowned papers?”, 2003; “A mathematical theory of citing”, “An introduction to the theory of citing”, 2007; “Theory of Citing”, 2011.)

  • “Avoid “Laundry List” Citations I”, Keogh 2009: a long presentation which mentions the author’s experience with seeing a citation typo he made get copied by dozens of subsequent papers:

    …In other cases I have seen papers that claim “we introduce a novel algorithm X”, when in fact an essentially identical algorithm appears in one of the papers they have referenced (but probably not read).

  • “Avoiding erroneous citations in ecological research: read before you apply”, Šigut et al 2017:

    The Shannon-Wiener index is a popular nonparametric metric widely used in ecological research as a measure of species diversity. We used the Web of Science database to examine cases where papers published 1990–2015 mislabeled this index. We provide detailed insights into causes potentially affecting use of the wrong name ‘Weaver’ instead of the correct ‘Wiener’. Basic science serves as a fundamental information source for applied research, so we emphasize the effect of the type of research (applied or basic) on the incidence of the error. Biological research, especially applied studies, increasingly uses indices, even though some researchers have strongly criticized their use. Applied research papers had a higher frequency of the wrong index name than did basic research papers. The mislabeling frequency decreased in both categories over the 25-year period, although the decrease lagged in applied research. Moreover, the index use and mistake proportion differed by region and authors’ countries of origin. Our study also provides insight into citation culture, and results suggest that almost 50% of authors have not actually read their cited sources. Applied research scientists in particular should be more cautious during manuscript preparation, carefully select sources from basic research, and read theoretical background articles before they apply the theories to their research. Moreover, theoretical ecologists should liaise with applied researchers and present their research for the broader scientific community. Researchers should point out known, often-repeated errors and phenomena not only in specialized books and journals but also in widely used and fundamental literature.

Miscitation

A few papers I found on the way, which touch on the much more serious question of how often a citation is correctly described/interpreted (as opposed to merely having bibliographic errors suggesting it may not have been read at all):

  • “How accurate are quotations and references in medical journals?”, de Lacey et al 1985

    The accuracy of quotations and references in six medical journals published during January 1984 was assessed. The original author was misquoted in 15% of all references, and most of the errors would have misled readers. Errors in citation of references occurred in 24%, of which 8% were major errors—that is, they prevented immediate identification of the source of the reference. Inaccurate quotations and citations are displeasing for the original author, misleading for the reader, and mean that untruths become “accepted fact.” …

  • “Do Authors Check Their References? A Survey of Accuracy of References in Three Public Health Journals”, Eichorn & Yankauer 1987:

    We verified a random sample of 50 references in the May 1986 issue of each of three public health journals. Thirty-one percent of the 150 references had citation errors, one out of 10 being a major error (reference not locatable). Thirty percent of the references differed from authors’ use of them with half being a major error (cited paper not related to author’s contention).

  • “Accuracy of references in psychiatric literature: a survey of three journals”, Lawson & Fosker 1999:

    Aims and method: The prevalence of errors in reference citations and use in the psychiatric literature has not been reported as it has in other scientific literature. Fifty references randomly selected from each of three psychiatric journals were examined for accuracy and appropriateness of use by validating them against the original sources.

    Results: A high prevalence of errors was found, the most common being minor errors in the accuracy of citations. Major citation errors delayed access to two original articles, and three could not be traced. Eight of the references had major errors with the appropriateness of use of their quotations.

    Clinical implications: Errors in accuracy of references impair the processes of research and evidence-based medicine, quotation errors could mislead clinicians into making wrong treatment decisions.

  • “Secondary and Tertiary Citing: A Study of Referencing Behavior in the Literature of Citation Analysis Deriving from the Ortega Hypothesis of Cole and Cole”, Hoerman & Nowicke 1995:

    This study examines a complex network of documents and citations relating to the literature of the Ortega Hypothesis (as defined by Jonathan R. Cole and Stephen Cole), demonstrating the tenacity of errors in details of and meaning attributed to individual citations. These errors provide evidence that secondary and tertiary citing occurs in the literature that assesses individual influence through the use of citations. Secondary and tertiary citing is defined as the inclusion of a citation in a reference list without examining the document being cited. The authors suggest that, in the absence of error, it is difficult to determine the amount of secondary and tertiary citing considered normative. Therefore, to increase understanding of the relationship between citations and patterns of influence, it is recommended that large-scale studies examine additional instances of citation error.

  • Neven Sesardić, Making Sense of Heritability (pg135):

    …In my opinion, this kind of deliberate misrepresentation in attacks on hereditarianism is less frequent than sheer ignorance. But why is it that a number of people who publicly attack “Jensenism” are so poorly informed about Jensen’s real views? Given the magnitude of their distortions and the ease with which these misinterpretations spread, one is alerted to the possibility that at least some of these anti-hereditarians did not get their information about hereditarianism first hand, from primary sources, but only indirectly, from the texts of unsympathetic and sometimes quite biased critics. In this connection, it is interesting to note that several authors who strongly disagree with Jensen (Longino 1990; Bowler 1989; Allen 1990; Billings et al 1992; McInerney 1996; Beckwith 1993; Kassim 2002) refer to his classic paper from 1969 by citing the volume of the Harvard Educational Review incorrectly as “33” (instead of “39”). What makes this mis-citation noteworthy is that the very same mistake is to be found in Gould’s Mismeasure of Man (in both editions). Now the fact that Gould’s idiosyncratic lapsus calami gets repeated in the later sources is either an extremely unlikely coincidence or else it reveals that these authors’ references to Jensen’s paper actually originate from their contact with Gould’s text, not Jensen’s.

  • “How citation distortions create unfounded authority: analysis of a citation network”, Greenberg 2009:

    …A complete citation network was constructed from all PubMed indexed English literature papers addressing the belief that β amyloid, a protein accumulated in the brain in Alzheimer’s disease, is produced by and injures skeletal muscle of patients with inclusion body myositis… The network contained 242 papers and 675 citations addressing the belief, with 220,553 citation paths supporting it. Unfounded authority was established by citation bias against papers that refuted or weakened the belief; amplification, the marked expansion of the belief system by papers presenting no data addressing it; and forms of invention such as the conversion of hypothesis into fact through citation alone. Extension of this network into text within grants funded by the National Institutes of Health and obtained through the Freedom of Information Act showed the same phenomena present and sometimes used to justify requests for funding.

  • “How accurate are citations of frequently cited papers in biomedical literature?”, Pavlovic et al 2020:

    …Findings from feasibility study, where we collected and reviewed 1,540 articles containing 2,526 citations of 14 most cited articles in which the 1st authors were affiliated with the Faculty of Medicine University of Belgrade, were further evaluated for external confirmation in an independent verification set of articles. Verification set included 4,912 citations identified in 2,995 articles that cited 13 most cited articles published by authors affiliated with the Mayo Clinic Division of Nephrology and Hypertension (Rochester, Minnesota, USA), whose research focus is hypertension and peripheral vascular disease. Most cited articles and their citations were determined according to SCOPUS database search. A citation was defined as being accurate if the cited article supported or was in accordance with the statement by citing authors. A multilevel regression model for binary data was used to determine predictors of inaccurate citations. At least one inaccurate citation was found in 11% and 15% of articles in the feasibility study and verification set, respectively, suggesting that inaccurate citations are common in biomedical literature. The main findings were similar in both sets. The most common problem was the citation of nonexistent findings (38.4%), followed by an incorrect interpretation of findings (15.4%). One fifth of inaccurate citations were due to “chains of inaccurate citations”, in which inaccurate citations appeared to have been copied from previous papers. Reviews, longer time elapsed from publication to citation, and multiple citations were associated with higher chance of citation being inaccurate….

  • “The problem of miscitation in psychological science: Righting the ship”, Cobb et al 2023

  • “Case study in major quotation errors: a critical commentary on the Newcastle-Ottawa scale”, Stang et al 2018


  1. Particularly:

    • McClelland & Dailey 1972, Improving officer selection for the Foreign Service. Boston, MA: Hay/McBer.

    • McClelland & Dailey 1973, Evaluating new methods of measuring the qualities needed in superior Foreign Service Officers. Boston: McBer.

    • McClelland & Dailey 1974, Professional competencies of human service workers. Boston: McBer and Co.

    Only the first two books appear available even in McClelland’s posthumous Harvard papers.↩︎