Some claim that the disease hydrocephalus can reduce brain size by 95% while leaving normal or even above-average intelligence intact, and thus that brains aren’t really necessary. Neither claim is true.
Hydrocephalus is a damaging brain disorder in which fluid compresses the brain, sometimes drastically decreasing its volume. While often extremely harmful or life-threatening when untreated, some people with severe compression are nevertheless relatively normal, and in one case (Lorber’s) the patient was claimed to have an IQ as high as 126 with a brain volume 5% that of a normal brain. A few of these case studies have been used to argue the extraordinary claim that brain volume has little or nothing to do with intelligence; authors have argued that hydrocephalus points to enormous untapped cognitive potential, which is rarely tapped into for repairs and can boost intelligence on net, or that intelligence/consciousness is non-material or taps into ESP.
I point out why this claim is almost certainly untrue because it predicts countless phenomena we never observe, and investigate the claimed examples in more detail: the cases turn out to be suspiciously unverifiable (Lorber), likely fraudulent (Oliveira), or actually low intelligence (Feuillet). It is unclear if high-functioning cases of hydrocephalus even have less brain mass, as opposed to lower proxy measures like brain volume.
I then summarize anthropologist John Hawks’s criticisms of the original hydrocephalus author: his brain imaging data could not have been as precise as claimed; he studied a selective sample; the story of the legendary IQ 126 hydrocephalus patient raises questions as to how normal or intelligent he really was; and hydrocephalus in general appears to be no more anomalous or hard-to-explain than many other kinds of brain injuries. By comparison, hemispherectomies (removing or severing a hemisphere) have produced no anomalous reports of above-average intelligence, just deficits, though they ought to be just as amenable to repairs or ESP.
That hydrocephalus cases can reach roughly normal levels of functioning, various deficits aside, can be explained by brain size being one of several relevant variables; brain plasticity enabling cognitive flexibility & recovery from gradually-developing conditions; and overparameterization giving learning ability plus robustness to damage and poor environments. The field of deep learning has observed similar phenomena in training artificial neural networks. This is consistent with Lorber’s original contention that the brain is more robust, and hydrocephalus more treatable, than commonly accepted, but does not support any of the more exotic interpretations since put on his findings.
In short, there is little anomalous to explain, and standard brain-centric accounts appear to account for existing verified observations without much problem or resort to extraordinary claims.
An argument recently summarized by the ex-biologist SF author Peter Watts holds that the intelligence of human brains may have little to do with brain-size variables such as neuron count, relying on an example of hydrocephalus, a medical condition:
For decades now, I have been haunted by the grainy, black-and-white x-ray of a human skull. It is alive but empty, with a cavernous fluid-filled space where the brain should be. A thin layer of brain tissue lines that cavity like an amniotic sac. The image hails from a 1980 review article in Science: Roger Lewin, the author, reports that the patient in question had “virtually no brain”. But that’s not what scared me; hydrocephalus is nothing new, and it takes more to creep out this ex-biologist than a picture of Ventricles Gone Wild.
What scared me was the fact that this virtually brain-free patient had an IQ of 126. He had a first-class honors degree in mathematics. He presented normally along all social and cognitive axes. He didn’t even realize there was anything wrong with him until he went to the doctor for some unrelated malady, only to be referred to a specialist because his head seemed a bit too large.
…The authors advocate research into “Computational models such as the small-world and scale-free network”—networks whose nodes are clustered into highly-interconnected “cliques”, while the cliques themselves are more sparsely connected one to another. De Oliveira et al suggest that they hold the secret to the resilience of the hydrocephalic brain. Such networks result in “higher dynamical complexity, lower wiring costs, and resilience to tissue insults.” This also seems reminiscent of those isolated hyper-efficient modules of autistic savants, which is unlikely to be a coincidence: networks from social to genetic to neural have all been described as “small-world”…The point, though, is that under the right conditions, brain damage may paradoxically result in brain enhancement. Small-world, scale-free networking—focused, intensified, overclocked—might turbocharge a fragment of a brain into acting like the whole thing. Can you imagine what would happen if we applied that trick to a normal brain?
Big if true. This is certainly big if true: were 90%+ of the brain unnecessary, what could it do if the rest were used as efficiently as the small percentage doing the actual work? It would overthrow the entire understanding of what the brain is, how it functions, and how intelligence works. Not to mention it has enormous implications for deep learning: why do we throw vast resources into training ever-larger models in order to increase their capabilities, if all those neurons aren’t actually doing anything? We could boost AI capabilities dramatically by dispensing with the redundancy.
Implausible Implications
These claims, however, run into many objections:
- All the existing evidence that Thinking Is What Neurons Do: If neuron counts and correlates like brain volumes are so irrelevant, why does all other evidence point to their close correlation & causality, from cross-species comparisons to human genetic studies (eg. et al 2018; et al 2019; et al 2019)? How do we explain lesion studies or traumatic brain injuries or…
- Evolutionarily Absurd: if most of the brain can be dispensed with, then, as one of the most (if not the most) metabolically-expensive tissues in the body, why does evolution not aggressively prune brains in humans?
And if it works in humans, then why not everywhere in the natural world? (Where are the hydrocephalus primates, or birds, or any mammals? Where are the sibling species where one has 20× the neurons of the other, because the latter has ‘flipped the switch’ and gotten the hydrocephalus benefit?) Differences in total volume or brain density are one thing, but a neuron is a neuron and will always be expensive to build, maintain, and operate—if 5% works fine or better, why an extravagance of 20×?
Or to put it another way: “where are the talking mice?”
- Fermi’s problem: “Where Are The Aliens?” If the brain can work around such damage to the extent of boosting intelligence by 2SDs easily while having only a fraction of the brain to work with, why do we not see this workaround triggered by any drug, surgery, or environmental intervention, or triggered spontaneously at random, yielding potentially anywhere up to 40SDs? We live in a big world with billions of humans, and anything that can happen will happen (at least once)—everything from people growing tails to people sharing brains. Hydrocephalus is not that rare, and this mechanism must not be rare either if such extreme cases can be found so apparently easily. There should not be only a single forgettable case.
For perspective on what an absolute change of 20× neurons might look like, humans have >2× the neurons of chimpanzees; why do we not see any humans where the gap between their intelligence and ours is many times larger than that between us and chimpanzees?
- Where Are The Geniuses? Similarly, even granting that the repair mechanism never kicks in for healthy humans (which would have yielded superintelligences), why does the supposed excess reported by Lorber not show up anywhere? If hydrocephalus can actually boost intelligence on average, why do we not see gross over-representations among high-IQ cohorts or professions or statuses?
It is a basic point of order statistics & tail effects that a small increase in a mean or variance compared to a reference group means that overrepresentation will increase the more extreme the cases; even if we do not see super-humans, we should still be struck by all the elite researchers who turn out to be hydrocephalic (and the more elite they are, the worse their hydrocephalus should be). But at least as far as Wikipedia is concerned, there are few hydrocephalics of note for any reason.
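To make the tail-effect arithmetic concrete, here is a minimal Python sketch, assuming a normal model of IQ and a purely hypothetical 5-point (1/3 SD) mean boost for a ‘boosted’ group:

```python
# Even a small mean advantage yields growing overrepresentation in the tails.
# Assumes IQ ~ Normal(100, 15); the +5-point 'boost' is purely hypothetical.
from scipy.stats import norm

general = norm(100, 15)
boosted = norm(105, 15)  # hypothetical group with a 1/3 SD advantage

for cutoff in [115, 130, 145, 160]:
    ratio = boosted.sf(cutoff) / general.sf(cutoff)
    print(f"IQ > {cutoff}: overrepresented {ratio:.1f}x")
# -> roughly 1.6x, 2.1x, 2.8x, 3.9x: the multiplier grows with extremity
```

Even a tiny average advantage would thus make hydrocephalics increasingly overrepresented at ever-higher levels of eminence; no such overrepresentation is observed.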
Extraordinary claims require extraordinary evidence. It is hard to see how a few case studies could seriously persuade us that everything we know about the brain is wrong and that we need to reboot neuroscience, as Forsdyke 2015 enthuses, to study how memory and intelligence are actually stored & conducted
in some extremely minute, subatomic, form, as yet unknown to biochemists and physiologists…outside the body—extracorporeal!…the brain [is] as a receptor/transmitter of some form of electromagnetic wave/particle… of course, when speaking of extracorporeal memory we enter the domain of “mind” or “spirit” with corresponding metaphysical implications.
Problems With The Case Studies
Fake data. Fortunately for Watts’s sleep, the case for hydrocephalus is much worse than it looks. The brain scan he posts is not, in fact, of the IQ 126 case; Oliveira captions it as images from his lab of a normal brain, a normal person with hydrocephalus, and a hydrocephalus patient with “deep cognitive and motor impairments”. Further, Oliveira et al lied about the origin of the images, which were copied from elsewhere, and the paper has been formally retracted.1
Suspiciously unverifiable anecdotes. OK, but what about the Lewin ‘review’ (which is really more of a journalistic article) and the IQ 126 guy? Lewin provides a pair of brain scans as well, but it’s important to note that he never implies that that scan had anything to do with that patient, and is retelling Lorber’s anecdote at third hand. Lorber provides no concrete details about him, including such basics as what sort of IQ test and when it was done (making it potentially as misleading as the perennial claim that Richard Feynman had a similar IQ, based on an anecdote in his biography about being tested as a child). Surprisingly, as far as I can tell, Lorber and associates (Lorber died 16 years later, in 1996) have not published anything at all on their dataset in the 39 years since the Lewin press coverage, and so there are no scans or data on this guy, or any of the others Lorber claimed to have above-normal intelligence.2
Misleadingly described anecdotes. How many severe hydrocephalus cases have even above-average intelligence? Despite Lorber’s claim to have easily found many cases of normal intelligence and one highly above-average hydrocephalus patient, subsequent researchers appear to have failed to do likewise over the ensuing 4 decades. Forsdyke 2014, cited by Watts, himself cites only 3 instances: Lorber’s unverifiable anecdotes via Lewin, the retracted & likely fraudulent Oliveira et al, and a third case study, Feuillet et al 2007, who report their patient having an IQ of 75.3 (I am puzzled how Forsdyke could possibly describe Feuillet’s IQ of 75, nearly retarded, as one of “two independent confirmations” of Lorber, who was claiming a case with an IQ 3.4SDs higher than that; he says the same thing in Forsdyke 2015 in criticizing Hawks’s criticism, saying that Lorber had been vindicated by Oliveira & Feuillet. Forsdyke otherwise dodges all of Hawks’s points.)
Increasingly less than meets the eye. So there appear to be few verifiable instances of severe hydrocephalics with above-average, much less highly above-average, intelligence. (I also have to wonder what more in-depth testing of cases, going beyond summary or full-scale IQ tests to many other measures of cognitive function like complex reaction time, and tests over a lifespan, would show; one can only detect deficits in what & when one tests, after all.)
But let us take Lorber at face-value and ask: does a brain scan showing, say, 95% of the volume taken up by fluid imply that the patient has only 5% as much brain as a normal person, or only 5% of the neurons?
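To see why not, consider some back-of-the-envelope arithmetic; every number in this sketch is an illustrative assumption, not a measurement:

```python
# A 95% loss of scan *volume* need not mean a 95% loss of *neurons*.
# All numbers are illustrative assumptions, not measurements.
normal_volume, normal_density = 1.0, 1.0   # normalized units

scan_volume = 0.05 * normal_volume         # "95% fluid" on the scan

# Scenario A: tissue destroyed outright at normal density -> 5% of neurons.
neurons_a = scan_volume * normal_density

# Scenario B: grey matter compressed rather than destroyed (density rises),
# with most lost volume being white matter, which holds few neuron bodies.
compressed_density = 4.0 * normal_density  # assumed 4-fold compression
neurons_b = scan_volume * compressed_density

print(f"Scenario A: {neurons_a:.0%} of neurons; Scenario B: {neurons_b:.0%}")
```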
Hawks on Lorber
Anthropologist John D. Hawks considers what hydrocephalus means and the Lorber anecdote in 2007. He points out that:
- Measurement Error: 1970s CT scans were extremely inaccurate; they could not detect accurately at the claimed 1mm resolution, and cannot show that the cortex was uniformly only 1mm thick, which is not that far from a normal human’s 2mm thickness. There may well be enough white matter in total to still connect up the grey matter. As Lewin noted in 1980, the interpretation of brain scans is unreliable and may not give a good estimate of what exactly is there and how much.
Further, as few or no hydrocephalus cases in the normal range have been subject to detailed scanning or post-mortem, it is unclear if a given hydrocephalus case involves any net loss of neurons. It is possible that the high pressure merely increases neural density, which already explains much of the cross-species differences.4 (One would not expect this to be a good thing, but then, hydrocephalus usually isn’t.)
- Absurd Intelligence Boosts: Lorber’s claimed statistics imply increased intelligence, not merely robustness to damage:
But the notion that “half” of the patients where ventricle expansion is greater than 95% of the cranium have IQ’s greater than 100 is mathematically implausible. The definition of IQ is that the mean is 100. This means that only half of people without ventricle expansion have IQ over 100. Lorber seems to have claimed that the most severe cases of hydrocephalus actually see an increase in the proportion of high-IQ individuals, despite “many” being severely disabled. I’m not saying it’s impossible, but like “a millimeter or so”, this is the kind of statistic that deserves skepticism.
- Selection Bias: Lorber was highly selective in the cases he studied, precisely because he was looking for anomalies; such cases intrinsically will have unknown compensating factors (such as high genetic potential for intelligence, which can buffer the damage), and can yield highly misleading results through things like Berkson’s paradox.
- Missing Caveats: the story of the IQ 126 student is bizarre, unless Lorber is leaving something out—who is referred to a hydrocephalus specialist for expensive & unusual brain-imaging scans (especially back then) just because “his head seemed a bit too large”?
Nor is it obvious from the reports that the condition had “no” cognitive manifestations. Much seems to depend on the single case described above, with an apparently normal college student walking in off the street to discover he had minimal brain mass. But this story is quite obviously incredible as presented: most neurologists don’t perform brain scans just because a college student wears a large hat. It seems reasonable to infer that the student was referred by his doctor to Lorber, a hydrocephalus specialist, for some reason. We can only guess what the reason might be, but it hardly gives confidence in the anecdote!
- Questionable Damage: Hawks concludes that anomalies may not be too useful to study here:
Without question, there are many patients who have this outcome—no substantial cognitive deficit compared to nonpatients, despite profound pathology. This is true of almost any pathology affecting the brain, including tumors, strokes, and developmental abnormalities. The question is whether this provides a valid model for understanding the adaptive importance of brain volume. It seems that later onset hydrocephalus, where a normal brain is compressed within a relatively normal-sized skull by cerebrospinal fluid pressure, does not really apply to the evolutionary question. The reported cases do not apparently involve substantial gray matter tissue loss. A “thin” cortex does not necessarily imply functionally small cortical volume, even with substantial white tissue loss.
Counter-example: no high-IQ hemispherectomies. Hawks then goes on to examine hemispherectomy, an operation to treat the most severe epileptic cases by severing one brain hemisphere from the other or removing it entirely. Unsurprisingly, hemispherectomy patients have many severe cognitive problems: eg. et al 2003 finds no large cognitive decrease from a hemispherectomy in 33 mostly young children, but this is in the context of their pre-operative extensive “developmental delays” and regressions (with the percentage of “severe delay” approaching 100% the longer the exposure); et al 2013 note, among their many language/reading/behavioral/school problems requiring intensive interventions such as special schooling (as well as evolutionarily-relevant side-effects like partial paralysis), that only “5 (21%) of the 24 [post-hemispherectomy] patients older than 18 years of age were gainfully employed.” This is still a success for hemispherectomy, since severe epilepsy badly damages normal development and can be fatal (and, as Hawks points out, since the brain tissue being removed or severed is so badly damaged already by seizures as frequent as several per day, it’s not necessarily much of a loss), and it incidentally demonstrates how cognitive reserve/brain plasticity allows compensation for losing access to a whole hemisphere—at least when done young enough, before the damage has accumulated. Another extreme case is hydranencephaly: the true loss of brain, unlike hydrocephalus, but there are no case reports of normal or above-average cognitive functioning; hydranencephaly cases, while normal-seeming for months after birth, are instead profoundly disabled, barely respond to stimuli, and are lucky to live for more than a few years (McAbee et al 2000), with 19 years being the record (the 1976 case).
What Does Hydrocephalus Mean?
Hawks concludes:
So the case of hemispherectomy does not test the proposition that normal cognitive performance is possible after a great reduction in brain size. Instead it possibly tests the proposition that a reduction in brain size may be consistent with normal cognitive performance under a specialized cultural and environmental regime. That hypothesis is refuted by the majority of cases in the clinical record, for whom the specialized learning environment has not managed to eliminate developmental deficits. Still, for many patients some combination of surgery, therapy and learning assistance do make a decisive difference, and they attain normal cognitive performance—even normal for developmental age.
How do these cases apply?
There is no single conclusion that we can draw from these examples of extreme pathological reduction in brain size in humans. Clearly, the brain is capable of remarkable plasticity in development, including alternate localizations of some functions that are highly localized in most adults. But can we apply this plasticity more generally, to suggest that almost any brain structure might have evolved in ancient human populations? Even those that involve immense reductions in overall brain size?
Robustness to damage & environmental suboptimality. It should be mentioned that these assessments build on a rather narrow view of “cognition.” For instance, all hemispherectomy patients have some paralysis on the opposite side from the removed hemisphere. The functions of the motor and sensory cortices of the absent side do not appear to have the developmental plasticity exhibited by language. From the perspective of fitness in prehistoric human populations, the adequate control of movement and perception of sensory information would have a substantially greater importance than in today’s cultural milieu. So a reduction in brain size that impacts motor and sensory function but leaves other aspects of cognition intact certainly cannot be said to have no impact. Just because a reduction in performance can be managed within our population does not mean that it could have evolved in some past population. Also, the attainment of “normal” cognition, however defined, requires substantial investment and teaching for the average human. Humans with developmental challenges often can attain normal cognitive performance for their age, particularly when supplementary teaching and therapy is available. All this is to say that human brains are coadapted with behavioral patterns that channel development…Although it may be conceivable—even if it is far from demonstrated—that a radically smaller brain coupled with a specialized culture might have increased fitness, apparently there was no available evolutionary pathway to that adaptation. I would guess that it is simply more difficult to maintain the necessary cultural specializations for such an adaptation within the context of ancient human population structure. It is easier to accomplish development with a large brain that can employ many bottom-up strategies to build its cognitive abilities.
Thus the lost (or just compressed) white matter may be no more ‘unnecessary’ than, say, one’s spare lung or kidney, most of one’s liver, or any of one’s limbs… Ferris et al 2019, describing a rat hydrocephalus case, ask:
Survival requires sensing the environment, processing the information, and responding appropriately. There is no mortal threat or competition to survive for a rat maintained in a cage for 2 years under controlled environmental conditions with food and water ad libitum. R222 is a study of the fundamental neurobiological and behavioral processes that sustain a resource adequate, safe, ambulatory life. The severity of the hydrocephalus in R222 in the face of normal body weight and growth, normal motor behavior and spatial memory, and evoked activity to smells, tactile stimulation and vision would suggest neuroadaptation to a life-long abnormality. This rare case can be viewed as one of nature’s miracles providing the unique opportunity to examine the brain’s capacity for neuroplasticity and reorganization necessary for survival…Is the cortex necessary? There have been numerous studies across a variety of mammals looking at the developmental consequences of radical decortication in neonates24,25,26,27,28,29,30. While there are minor deficits particularly in some motor patterns and motor coordination, the decorticate animal can eat, drink, sleep and grow to normal size. They respond to visual and auditory stimuli. They display normal species-specific social, maternal, aggressive and sexual behaviors. They mate and reproduce. Nature again has provided science with an extreme form of decortication in humans–hydranencephaly, a rare, inherited disorder whereby babies are born without cerebral hemispheres. There is no treatment, yet incredibly, with the proper care and stabilization these individuals can live for years31, not in a vegetative state and are responsive to their surroundings.
This might sound reasonable, but how does Ferris define ‘necessary’? Human hydranencephaly cases, as discussed, are little better off than coma victims: McAbee et al describe their first case’s “responsive to their surroundings” as consisting of “turn[ing] his head ipsilaterally to sounds, music, and venipuncture” and requiring a feeding tube; of the second case: “He had no definite awareness of his environment. The pupillary response to light was minimal; he did not fix or follow visually and had no visual response to threat.” How about the mating and reproducing—surely complex behaviors, and if decortication allows even that, what do we need a cortex for? Looking at Ferris et al 2019’s citations, the only 2 references for that are less than impressive: for females (et al 1982), “mate and reproduce” apparently means only passively lying in place with one’s rump elevated in lordosis, and for the males (1985), when paired with receptive (normal) females, they were less fertile than normal rats and had to be manually “stimulated” by the researchers! These cases show that the cortex is unnecessary only if one has a low standard for what is necessary, one detached from any real-world considerations.
I would predict that deeper study of such hydrocephalus cases would reveal the following:
- far less neural loss than expected based on volume changes, consistent with loss of a large percentage being crippling or fatal
- a correlation between neuron loss & performance deficits, where higher-performing hydrocephalus is due to greater density for a given volume reduction, or the same mass being distributed in a weirder and harder-to-measure way
- systemic performance deficits, where small deficits in each task add up to large total deficits on complex tasks—increasing in harder, more naturalistic tasks requiring more memory, learning, and cognitive flexibility
- reproductive fitness penalties, with greater fitness penalties in environments which are more competitive & resemble the ancestral environment (eg. the Darwinian organismal performance assay), as opposed to modern environments, which may be undemanding.
Considering Feuillet’s case, a patient who worked as a French “civil servant”, one might wonder if this indicates intelligence is not all that important: how could someone with an IQ of 75 so much as feed and dress himself, one might say, much less hold a “white-collar job” or be married with children?
But one should remember that cases of mental retardation may be caused by disorders that have many profoundly disabling side-effects, exaggerating the dysfunctionality; and that otherwise-healthy normal individuals can have many coping strategies & supportive environments even while still having astonishing deficits. This is comparable to cognitive decline & senility in the elderly, or to adult illiteracy: illiterate adults (often dyslexic) can be surprisingly functional & fool everyone who knows them by exploiting a range of tricks, ranging from heavy reliance on oral communication like telephones, to getting other people to read for them, to avoiding writing-heavy environments in favor of concrete ones, to skilled guessing, to memorizing specific pieces of writing & pretending to read them, to simply lying. Human Rights Watch 2001 describes how retarded criminal defendants, even ones being prosecuted for the death penalty, would deny they were retarded, fool their lawyers into signing them up for college calculus, or lie about finishing high school.
Nevertheless, despite the illusion of competency, the retarded may still have profound deficits, as Human Rights Watch 2001 then goes on to illustrate with examples: defendants who don’t know how to use stamps or what their stomach does, who count on their hands, who will agree to anything an authority figure says, who don’t know why rape is wrong (suggesting because they lacked “permission” or it was “against her religion”), who understand that stabbing but not shooting kills someone, who smile or sleep through their trials, who ask what to wear to their funeral after being executed… In ordinary circumstances, which permit extreme levels of repetition & reliance on others to support a safe environment, they may appear as functional as anyone else, yet they will make rare (but extremely important) errors and cannot handle novel problems.5 (To ruin one’s life, one only has to screw up once.) Gregory’s McNamara’s Folly: The Use of Low-IQ Troops in the Vietnam War describes many examples from Project 100,000 of retarded or low-intelligence men who could function in a safe civilian environment doing simplified menial labor, but could not be trained to do anything safely in the military or cope with novel environments, suffering extreme casualties, often self-inflicted. While any individual instance may seem unimportant—there are many ways to avoid needing to understand how to use stamps oneself, and most people would never rape a woman even if they don’t understand why rape is bad—they add up over a lifetime and push the entire population towards bad outcomes. One might note that Feuillet’s patient had a below-average number of children, and his “white-collar” job may mean nothing more than stamping the same form for 40 years in the DMV as a kind of covert welfare; or consider the profound consequences for individuals with reduced/no pain sensitivity.
Analogies from Deep Learning
Artificial parallels in Deep Learning. Hawks’s discussion of channeling leads to an interesting parallel with deep learning (DL): it has been repeatedly established that neural networks are ‘overparameterized’, in the sense that whatever performance a NN reaches at the end of training, one can easily ‘distill’ or ‘prune’ it into a smaller (or faster) NN with only slightly worse performance, shrinking it by a substantial fraction and sometimes by orders of magnitude.
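For a concrete flavor, here is a minimal PyTorch sketch of magnitude pruning: an oversized MLP is trained on a toy task, then 90% of its smallest weights are zeroed out. (The task, sizes, and pruning fraction are all arbitrary illustrative choices, not a reproduction of any particular paper.)

```python
# Train a deliberately oversized MLP on a toy problem, then zero out 90% of
# its smallest weights and compare accuracy; in overparameterized nets the
# drop is typically small. Task, sizes, and the 90% figure are illustrative.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

torch.manual_seed(0)
X = torch.randn(2000, 20)
y = (X[:, 0] * X[:, 1] > 0).long()   # simple nonlinear (XOR-like) rule

model = nn.Sequential(nn.Linear(20, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(500):
    opt.zero_grad()
    nn.functional.cross_entropy(model(X), y).backward()
    opt.step()

def accuracy() -> float:
    return (model(X).argmax(dim=1) == y).float().mean().item()

before = accuracy()
for module in model:
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
print(f"before: {before:.2%}, after pruning 90% of weights: {accuracy():.2%}")
```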
Despite neuron/parameter count clearly causing NN ‘intelligence’. Parameter & neuron counts in NNs clearly do matter causally: bigger NNs are more expressive than smaller ones, and when we experimentally vary neural net size, we see logarithmic/power-law increases in performance, even for the largest trainable NNs. At the same time, size of course does not explain 100% of variance in performance—the same architecture trained in the same way on the same data will yield NNs with noticeably different final performance, simply due to the stochasticity of the initialization, regularization mechanisms like dropout, and data selection6—and it would be possible to find runs where a smaller NN outperforms a larger NN which is otherwise exactly the same; yet, nevertheless, larger is more powerful. Similarly, distillation/compression work quite well, and yet cannot be tapping into any sort of dualism or psi.
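A sketch of the kind of experiment behind both halves of that claim (larger is better on average; seeds add variance) might look like the following, with widths, seeds, and step counts as arbitrary illustrative choices:

```python
# Vary only width and seed on a fixed task: larger nets do better on average,
# but seed-to-seed variance means a small net occasionally beats a bigger one.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(1000, 10)
y = torch.sin(3 * X).sum(dim=1, keepdim=True)   # fixed regression target

def final_loss(width: int, seed: int) -> float:
    torch.manual_seed(seed)        # only initialization differs across seeds
    net = nn.Sequential(nn.Linear(10, width), nn.Tanh(), nn.Linear(width, 1))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()
    return loss.item()

for width in [4, 16, 64, 256]:
    losses = sorted(round(final_loss(width, seed), 3) for seed in range(5))
    print(f"width {width:3d}: final losses {losses}")
```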
DL demonstrates functional benefits of overparameterization. Why? The recent work on the ‘lottery ticket hypothesis’ strongly indicates that most of the work is being done by a relatively small part of the network, and that the expressive power of the full model, which is enormously rich, is largely unnecessary for the task we apply it to. This raises the puzzling question of how we can get away with training such large models without them overfitting or learning useless things, when standard statistical & machine learning theory teaches that this ought to be a cardinal sin and guaranteed futile. And peculiarly, knowing that our NN is far too big & we only need a small NN does not let one train the final smaller NN directly: if one takes the smaller NN’s architecture and trains a new from-scratch NN of the same size (which one knows is big enough, by construction), it simply doesn’t work.
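A greatly simplified sketch of the lottery-ticket procedure: train once, keep the top 10% of weights by magnitude, then retrain that sparse subnetwork either ‘rewound’ to its original initialization (the ‘winning ticket’) or from a fresh random initialization. In the published experiments the rewound ticket trains markedly better; a toy run like this will be noisier, and all hyperparameters here are illustrative assumptions.

```python
# Simplified lottery-ticket sketch: train, mask to the largest weights, then
# retrain the sparse subnetwork from its original init vs. a fresh init.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(2000, 20)
y = (X[:, 0] * X[:, 1] > 0).long()

def mlp() -> nn.Sequential:
    return nn.Sequential(nn.Linear(20, 128), nn.ReLU(), nn.Linear(128, 2))

def train(net, masks=None, steps=500) -> float:
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    linears = [m for m in net if isinstance(m, nn.Linear)]
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(net(X), y).backward()
        opt.step()
        if masks is not None:        # keep pruned weights pinned at zero
            with torch.no_grad():
                for lin, mask in zip(linears, masks):
                    lin.weight *= mask
    return (net(X).argmax(dim=1) == y).float().mean().item()

torch.manual_seed(1)
net = mlp()
init = copy.deepcopy(net.state_dict())  # remember the original initialization
train(net)                              # full training pass to find the ticket

# Mask: keep only the top 10% of weights by magnitude in each layer.
masks = []
for lin in [m for m in net if isinstance(m, nn.Linear)]:
    w = lin.weight.detach().abs()
    threshold = w.flatten().kthvalue(int(0.9 * w.numel())).values
    masks.append((w >= threshold).float())

ticket = mlp()
ticket.load_state_dict(init)            # rewind to the original weights
torch.manual_seed(2)
rerandomized = mlp()                    # same sparse shape, fresh weights
print("winning ticket accuracy:", train(ticket, masks))
print("re-randomized accuracy :", train(rerandomized, masks))
```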
Overparameterization means always possible improvements. Why not? The answer seems to be that overparameterization is necessary to enable efficient learning, by making the ‘fitness landscape’ smooth and ensuring there is always some parameter which can be tweaked to make learning progress, while smaller NNs are more likely to get ‘stuck’ in a bad local optimum with no way out. (An example of a ‘rough’ fitness landscape would be almost any computer programming language: deleting a single character like a semicolon may make a program go from 100% performance on a task to 0%, and perhaps not even compile.) This is partially due to having so many random sub-networks that at least one of them ‘wins the lottery’ and is a good fit for the current task.
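The classic toy demonstration is XOR: a width-2 network can represent the solution but often fails to find it, while an overparameterized one almost always succeeds. A minimal sketch, with sizes, step counts, and the convergence threshold as illustrative assumptions:

```python
# Minimal-width vs. overparameterized nets on XOR: width 2 can represent the
# solution but often gets stuck; width 64 nearly always converges.
import torch
import torch.nn as nn

X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

def converged(width: int, seed: int) -> bool:
    torch.manual_seed(seed)
    net = nn.Sequential(nn.Linear(2, width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.SGD(net.parameters(), lr=0.1)
    for _ in range(5000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(X), y)
        loss.backward()
        opt.step()
    return loss.item() < 0.01

for width in [2, 64]:
    wins = sum(converged(width, seed) for seed in range(20))
    print(f"width {width:2d}: {wins}/20 runs converged")
```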
Thus, one can start off with a big model and then learn effectively, and specialize down into a much smaller but inflexible model, but one cannot learn the smaller model to begin with from the raw data (unless one is exceedingly lucky and the small model starts off with a useful initialization and happens to not run into any traps); the small model can only be created based on heavy guidance from the big model. So it can be simultaneously true that big models can reach good final small models (overcoming damage or loss7), and that the big model was necessary after all.
For a human brain, which must do lifelong learning on a constantly changing mix of tasks, in a harsh evolutionary context filled with competitive zero-sum dynamics where small differences can matter & there is no ‘big model’ to copy or learn from, this suggests that relying on a small brain would be a terrible idea. Since—unlike software models—there is no way to train a human brain to adulthood and then scoop it out and shrink it to a much more efficient version and copy it everywhere else, regardless of how good it has become, every human brain must start effectively from scratch, and cannot benefit from such compression, and human brains must be overparameterized to ensure they can meet whatever long tail of challenges the world throws at them. (Software, on the other hand, can be copied, which has interesting implications for the ‘hardware overhang’ argument.)
Conclusion: No Evidence
Unlikely, poor evidence, and easily explained. So to sum up: people have claimed that hydrocephalus destroys most of the brain, yet that having only a fraction of a brain is consistent with normal or above-average intelligence, and may even increase it. This is extremely implausible based on everything we know about intelligence and evolution and population distributions, and is a bad description of what hydrocephalus does (conflating distribution & volume with brain matter). On average, hydrocephalus and similar conditions like hemispherectomy do (as expected) induce many deficits ranging from mild to severe, which can be accommodated to some degree by neural plasticity, individual differences, extensive environmental interventions, and freedom from natural selection. Claims to the contrary, aside from failing to deal with the many objections that this is implausible, turn out to be based on undocumented, fraudulent, or misleadingly described cases, and are primarily pushed by cranks. Ultimately, hydrocephalus does not appear to present any particular challenge to the standard understanding of intelligence as caused by a material brain whose efficiency at cognitive tasks is driven by neuron count, wiring patterns, neural integrity, and general health: not only is the evidence extraordinarily inadequate to justify the extraordinary claims made by some authors, it is unclear how much, if any, evidence there is at all.
External Links
- How many neurons does an optimized system need?
- “8 pairs of descending visual neurons in the dragonfly give wing motor centers accurate population vector of prey direction”, Gonzalez-Bellido et al 2013
- “How the Zombie Fungus Takes Over Ants’ Bodies to Control Their Minds”; “Invisible Designers: Brain Evolution Through the Lens of Parasite Manipulation”, Del Giudice 2019
1. My notes say the plagiarism was first spotted by Ondřej Havlíček on the Neuroskeptic blog. Why would any researcher lie about images like that? Who knows. But it demonstrates why one can’t put too much trust in any single datapoint. I have raised my concerns about the implausibility of these hydrocephalus claims & the retraction with Watts on his blog and in 3 emails so far.↩︎
2. I find such lacunae to be suspicious. When you look at the most questionable findings and experiments in Reproducibility Crisis & psychology-related matters, like the Stanford Prison Experiment, Rosenhan’s “Being Sane in Insane Places”, or Mouse Utopia, one of the more common hallmarks is a single extraordinary finding which makes a splash with press attention, and then a strange disinclination by the original author to replicate their own finding or publish anything further on it which is not a secondary-source-style rehash.↩︎
3. Feuillet et al 2007 is sometimes misquoted as measuring an IQ of 84; that refers only to the highest subtest score, not the total test score.↩︎
4. See Herculano-Houzel’s research, like “The remarkable, yet not extraordinary, human brain as a scaled-up primate brain and its associated cost”, Herculano-Houzel 2012.↩︎
5. One might say that this is inherently true of any kind of abstraction or concept or model or long-term memory: if one needed them frequently, they could be handled directly by trial-and-error or short-term memory, and if evaluated only in ordinary, common circumstances, their value will be hidden. They are valuable because they can extrapolate to novel circumstances, or because, while used rarely and with most uses of little value, a few uses are extremely important. A fact may be useful just once in a lifetime, but that use may save one’s life; and, in all the years up to the danger, such a person would look identical to someone who didn’t or couldn’t learn that fact. One is reminded of how neural net-generated text like GPT-2’s can look human-written save for a single stray character—a “her” instead of “he”—which shatters the illusion.↩︎
6. Or hardware nondeterminism—GPUs are actually nondeterministic and not guaranteed to return the same result for a given operation! This is just one of the many issues with exact reproducibility in deep learning. Deep reinforcement learning particularly struggles with this lack of reproducibility and still-higher variance than regular DL.↩︎
7. Large neural net models are able to bypass or overcome issues like dropout or label noise or loss of layers, and are remarkably robust to severe damage or noise (eg. the extreme of weight agnostic neural networks).↩︎