Conversely, entire branches of knowledge can be lost if not enough people are working in the area to maintain a common ground of understanding.
In the US, we keep on manufacturing Abrams tanks. We're not at war. We have no use for these tanks. So to make things make sense, we give money to some countries with the explicit restriction that they must spend that money on these tanks.
Why do we keep making them? Because you need people who, on day one of war, know how to build that tank. You can't spend months and months getting people up to speed - they need to be ready to go. So, in peacetime, we just have a bunch of people making tanks for "no reason".
It's one of the great reasons to cultivate a collection of close allies who you support: it keeps your production lines warm and your workforce active and developing.
There's a nice short story along those lines by Scott Alexander:
https://slatestarcodex.com/2017/11/09/ars-longa-vita-brevis/
My own experience watching citation patterns, not even with things that I've worked on, is that certain authors or groups attract attention for an idea or result for all kinds of weird reasons, and that drives citation patterns even when they're not the originator of the results or ideas. This leads to strange outcomes: the same results published before a certain "popular" paper get ignored, even when the "popular" paper is incredibly incremental or even a replication of previous work; earlier authors who discussed the same exact idea, even well-known ones, are forgotten in favor of a newer, more charismatic author; various studies have shown that retracted "zombie" papers continue to be cited at high rates as if they were never retracted; and so forth and so on.
I've kind of given up trying to figure out what accounts for this. Most of the time it's just a kind of recency availability bias, where people are basically lazy in their citations, or rushed for time, or whatever. Sometimes it's a combination of an older literature simply being forgotten, together with a more recent author with a lot of notoriety for whatever reason discussing the idea. Lots of times there's this weird cult-like buzz around a person, more about their personality or presentation than anything else — as in, a certain person gets a reputation as being a genius, and then people kind of assume whatever they say or show hasn't been said or shown before, leading to a kind of self-fulfilling prophecy in terms of patterns of citations. I don't even think it matters that what they say is valid, it just has to garner a lot of attention and agreement.
In any event, in my field I don't attribute a lot to researchers being famous for any reason other than being famous. The Matthew effect is real, and can happen very rapidly, for all sorts of reasons. People also have a short attention span, and little memory for history.
This is all especially true of more recent literature. Citation patterns pre-1995 or so, as is the case with those Wikipedia citations, are probably not representative of the current state.
I can recommend two of his works:
- The Revolt of the Masses (mentioned in the article), where he analyzes the problems of industrial mass societies, the loss of self, and the ensuing threats to liberal democracies. He posits the concept of the "mass individual" (hombre masa): a man who is born into industrial society but takes for granted the progress, technical and political, that he enjoys, does not enquire about the origins of said progress or his relationship to it, and therefore becomes malleable to illiberal rhetoric. It was written around 1930, and in many ways the book foresees the forces that would lead to WWII. It was an international success in its day and remains eerily current.
- His Meditations on Technics sets out a rather simple, albeit accurate, philosophy of technology. He traces the history of technological development, from the accidental (e.g., fire), to the artisanal, to the age of machines (where the technologist is effectively building technology that builds technology). He also describes the two-stage cycle in which humans alternate between self-absorption (ensimismamiento), when they reflect on their discomforts, and alteration, when they set out to transform the world as best they can. The ideas may not be life-changing, but it's one of those books that neatly models and settles things you already intuited. Some of Ortega's reflections often come to mind when I'm looking for meaning in my projects. It might be of interest to other HNers!
The distance in time between two (or more) similar discoveries gives insight into how difficult the discovery was. Separated by years, and it must have been very difficult. Separated by months or days, and it is likely an obvious conclusion from a previous discovery - just a race to publish at that point.
- Newton - predicts that most advances are made by standing on the shoulders of giants. This seems true if you look at citations alone. See https://nintil.com/newton-hypothesis
- Matthew effect - extends the "successful people are more successful" observation to scientific publishing. Big names get more funding and easier journal publication, which gets them more exposure, so they end up running large labs and get their name on a lot of papers. https://researchonresearch.org/largest-study-of-its-kind-sho...
If I were allowed to speculate, I would make a couple of observations. The first is that resources play a huge role in research, so the overall direction of progress is influenced more by economics than by any particular group. For example, every component of a modern smartphone got hyper-optimized via massive capital injections. The second is that this is the real world, and thus some kind of power law likely applies. I don't know the exact numbers, but my expectation is that the top 1% of researchers produce far more output than the bottom 25%.
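(A minimal sketch of that power-law intuition, purely for illustration: it assumes per-researcher output follows a Pareto distribution, and the shape parameter below is made up rather than an empirical estimate.)

```python
# Illustrative only: if per-researcher output is heavy-tailed (Pareto-ish),
# the top 1% can easily out-produce the bottom 25% combined.
import numpy as np

rng = np.random.default_rng(0)
alpha = 1.5                                   # assumed Pareto shape; smaller = heavier tail
output = rng.pareto(alpha, size=100_000) + 1  # per-researcher "output", arbitrary units

output.sort()                                 # ascending
total = output.sum()
top_1pct_share = output[-1_000:].sum() / total
bottom_25pct_share = output[:25_000].sum() / total
print(f"top 1% share: {top_1pct_share:.1%}, bottom 25% share: {bottom_25pct_share:.1%}")
```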
Leibniz did the same, in the same timeframe. I think this lends credence to the Ortega hypothesis. We see the people who connect the dots as great scientists. But the dots must be there in the first place. The dots are the work of the myriad nameless scientists/scholars/scribes/artisans. Once the dots are in place, somebody always shows up to take the last hit and connect them. Sometimes multiple individuals at once.
That is not plausible IMO. Nobody has the capacity to read the works of a myriad of nameless scientists, not even Isaac Newton. It is even less likely that Newton and Leibniz were both familiar with the same works of minor scientists.
What is much more likely is that the well-known works of other great mathematicians prepared the foundation for both to reach similar results.
It gets condensed over time. Take, for example, the theory of continental drift/plate tectonics. One day Alfred Wegener saw that the coasts of West Africa and eastern South America were almost a perfect fit, and connected the dots. But he had no need to read the work of the many surveyors who braved unknown areas and mapped the coasts of both continents over the previous 4-5 centuries, nautical mile by nautical mile, with the help of positional astronomy. That knowledge was slowly integrated, cross-checked, and recorded by cartographers. Wegener's insight happened at the end of a long cognitive funnel.
Which does not go against the hypothesis. Both of their works were heavily subsidized by lesser-known researchers who came before them. But it's not clear at all that somebody else would have done what they did in each of their particular fields. (Just as it's not clear the work they built upon was in any way "mediocre".)
It's very hard to judge science. Both in predictive and retrospective form.
Special Relativity was not such a big breakthrough. Something like it was on the verge of being described by somebody in that timeframe — all the pieces were in place, and science was headed in that direction.
But General Relativity really took everyone by surprise.
At least, that's my understanding from half-remembered interviews from some decades ago (:
It might be that we attribute post hoc greatness to a small number of folks, but require a lot of very interested / ambitious folks to find the most useful threads to pull, run the labs, catalog data, etc.
It's only after the fact that we go back and say "hey this was really useful". If only we knew ahead of time that Calculus and "tracking stars" would lead to so many useful discoveries!
There's a ton of this among historical figures in general. Almost any great person you can name throughout history was, with few exceptions, born to a wealthy, connected family that set them on their course. There are certainly exceptions of self-made people here and there, and they do tend to be much more interesting. But just about anyone you can easily name in the history of math/science/philosophy was a rich kid who was afforded the time and resources to develop themselves.
Giants can be wrong, though; so there's a "giants were standing on our shoulders" problem to be solved. The amyloid-beta hypothesis held back Alzheimer's work for decades based on a handful of seemingly-fraudulent-but-never-significantly-challenged results by the giants of the field.
Kuhn's "paradigm shift" model speaks to this. Eventually the dam breaks, but when it does it's generally not by the sudden appearance of new giants but by the gradual erosion of support in the face of years and years of bland experimental work.
See also astronomy right now, where the never-really-satisfying ΛCDM model is finally failing in the face of new data. And it turns out it's not only from Webb and new instruments: the older data never fit either, but no one cared.
Continental drift had a similar trajectory, with literally hundreds of years of pretty convincing geology failing to dislodge established assumptions until it all finally clicked in the '60s.
This is hilarious
Ortega hypothesis - https://news.ycombinator.com/item?id=20247092 - June 2019 (1 comment)
I wonder if this is because a paper with such a citation is likely to be taken more seriously than a citation that might actually be more relevant.
Another related "rich get richer" effect is also that a famous author or institution is a noisy but easy "quality" signal. If a researcher doesn't know much about a certain area and is not well equipped to judge a paper on its own merits, then they might heuristically assume the paper is relevant or interesting due to the notoriety of the author/institution. You can see this easily at conferences - posters from well known authors or institutions will pretty much automatically attract a lot more visitors, even if they have no idea what they're looking at.
There's also a monkey see, monkey do aspect, where "that's just the way things are properly done" comes into play.
Peer review as it is practiced is the perfect example of Goodhart's law. It was a common practice in academia, but not formalized and institutionalized until the late 60s, and by the 90s it had become a thoroughly corrupted and gamed system. Journals and academic institutions created byzantine practices and rules and just like SEO, people became incentivized to hack those rules without honoring the underlying intent.
Now a lot of research across all fields meets all the technical criteria for publishing, yet significant double-digit percentages of it - up to half in some fields - cannot be reproduced, and there's a whole lot of outright fraud, used to swindle research dollars and grants.
Informal good faith communication seemed to be working just fine - as soon as referees and journals got a profit incentive, things started going haywire.
Big names give more talks in more places and people follow their outputs specifically (e.g., author-based alerts on PubMed or Google Scholar), so people are more aware of their work. There are often a lot of papers one could cite to make the same point, and people tend to go with the ones that they've already got in mind....
Compare this with paradigm shifts in T. S. Kuhn's The Structure of Scientific Revolutions:
https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...
AlexNet, for example, was only possible because of the algorithms that had been developed, but also because of the availability of GPUs for highly parallel processing and, importantly, the labelled ImageNet data.
I guess the Ortega equivalent statement would be "I stood on top of a giant pile of tiny people"
...Not quite as majestic, but hey, if it gets the job done...
But you can't tell ahead of time which one is which. Maybe you can shift the distribution, but often the pathological cases you exclude are precisely the ones you wanted not to exclude (your Karikós get Suhadolniked). So you need to have them all work. It's just an inherent property of the problem.
It's like searching an unsorted list of n items for a number: you kind of need to test the entries until you find yours. The search cost is just the cost. You can't un-cost it by "just picking the right index" - that's not a meaningful statement.
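(To make the analogy concrete, a tiny illustrative sketch: with an unsorted list, nothing beats scanning the entries one by one, so on average you pay for inspecting about half of them.)

```python
# Toy linear search over an unsorted list. Without prior knowledge of where
# the target sits, there is no way to "pick the right index" up front:
# on average you inspect about half the entries before finding it.
from typing import Sequence

def linear_search(items: Sequence[int], target: int) -> int:
    """Return the index of target, or -1 if it is absent; cost is O(n)."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

data = [42, 7, 19, 3, 88, 61, 5]
print(linear_search(data, 88))  # -> 4, found only after checking five entries
```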
It seems clear to me that the downside to society of having a bad scientist is relatively low, so long as there's a gap between low-quality science and politics [0], while the upside is huge.
My scientific study of the science of science study can prove this. Arxiv preprint forthcoming.
Nature is usually 80/20. In other words, 80% of researchers probably might as well not exist.
That said, if we took it that 20% of all working people are doing useful work, can you guarantee that not all research scientists are within that category?
And indeed there are different fields and the distributions of effectiveness may be incomparable.
I think the nature of scientific and mathematical research is interesting in that often "useless" findings can find surprising applications. Boolean algebra is an interesting example of this in that until computing came about, it seemed purely theoretical in nature and impactless almost by definition. Yet the implications of that work underpinned the design of computer processors and the information age as such.
This creates a quandary: we can say perhaps only 20% of work is relevant, but we don't necessarily know which 20% in advance.
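(A minimal sketch of that quandary, with entirely made-up numbers: suppose funders only see a noisy ex-ante signal of each project's eventual value; selecting the "top" 20% by that signal still misses a large share of the work that later proves to be in the truly relevant 20%.)

```python
# Illustrative only: noisy ex-ante selection vs. ex-post relevance.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_value = rng.normal(size=n)                      # revealed only after the fact
signal = true_value + rng.normal(scale=2.0, size=n)  # noisy ex-ante assessment

relevant = true_value >= np.quantile(true_value, 0.8)  # the "real" top 20%
funded = signal >= np.quantile(signal, 0.8)            # what selection by signal picks

hit_rate = (relevant & funded).sum() / relevant.sum()
print(f"truly relevant work captured by funding the 'top' 20%: {hit_rate:.0%}")
```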
Imagine two Earths: one with 10 million researchers and the other with 2 million, but the latter is so cut-throat that its 2 million are Science Spartans.
> Due to their mistrust of others, Spartans discouraged the creation of records about their internal affairs. The only histories of Sparta are from the writings of Xenophon, Thucydides, Herodotus and Plutarch, none of whom were Spartans.
A good example, but perhaps not the point you wanted to make.
What does this even mean? Do you think in an ant colony only the queen is needed? Or in a wolf pack only the strongest wolf?
More specifically, I believe that scientific research winds up dominated by groups who are all chasing the same circle of popular ideas. These groups start because some initial idea produced successful results, which made a small number of scientists prominent, which in turn makes their opinion important for the advancement of other scientists. Their goodwill and recommendations will help you get grants, tenure, and so on.
But once the initial ideas are played out, there is little prospect of further real progress. Indeed, that progress usually doesn't come until someone outside of the group pursues a new idea. At which point the work of those in the existing group will turn out to have had essentially no value.
As evidence for my belief, I point to https://www.chemistryworld.com/news/science-really-does-adva.... It documents that Planck's principle is real. Fairly regularly, people who become star researchers wind up holding back further progress until they die. After they die, new people can come into the field pursuing new ideas, and progress resumes. And so it is that progress advances one funeral at a time.
As a practical example, look at the discovery of blue LEDs. There was a lot of work on this in the '70s and '80s. Everyone knew how important it would be. A lot of money went into the field. Armies of researchers were studying compounds like zinc selenide. The received wisdom was that gallium nitride was a dead end. What was the sum contribution of these armies of researchers to the invention of blue LEDs? To convince Shuji Nakamura that if zinc selenide was the right approach, he had no hope of competing. So he went into gallium nitride instead. The rest is history, and the work of the existing field was lost.
Let's take an example that is still going on. Physicists invented string theory around 50 years ago. The problems with the approach are summed up in the quote often attributed to Feynman: "String theorists don't make predictions, they make excuses." To date, string theory has yet to produce a single prediction that was verified by experiment. And yet there are thousands of physicists working in the field. As interesting as they find their research, it is unlikely that any of their work will wind up contributing anything to whatever improved foundation for physics is eventually discovered.
Here is a tragic example. Alzheimer's is a terrible disease. Very large amounts of money have gone into research for a treatment. The NIH by itself spends around $4 billion per year on this, on top of large investments from the pharmaceutical industry. Several decades ago, the amyloid beta hypothesis rose to prominence. There is indeed a strong correlation between amyloid beta plaques and Alzheimer's, and there are plausible mechanisms by which amyloid beta could cause brain damage.
Several decades of research and many failed drug trials support the following conclusion. There are many ways to prevent the buildup of amyloid beta plaques, and they cure Alzheimer's in the mouse model that is widely used in research. But these drugs produce no clinical improvement in human symptoms. (Yes, even Aduhelm, which was controversially approved by the FDA in 2021, produces no improvement in human symptoms.) The widespread desire for results has created fertile ground for fraudsters, like Marc Tessier-Lavigne, whose fraud propelled him to becoming President of Stanford in 2016.
After widespread criticism from outside of the field, there is now some research into alternate hypotheses about the root causes of Alzheimer's. I personally think that there is promise in research suggesting that it is caused by damage done by viruses that get into the brain, and the amyloid beta plaques are left by our immune response to those viruses. But regardless of what hypothesis eventually proves to be correct, it seems extremely unlikely to me that the amyloid beta hypothesis will prove correct in the long run. (Cognitive dissonance keeps those currently in the field from drawing that conclusion though...)
We have spent tens of billions of dollars over several decades on Alzheimer's research. What is the future scientific value of this research? My bet is that it is destined for the garbage, except as a cautionary tale about how much damage can be done when a scientific field becomes unwilling to question its unproven opinions.
What a funny example to pick. See, "string theory" gets a lot of attention in the media, and nowhere else.
In actual physics, string theory is a niche of a niche of a niche. It is not a common topic of papers or conferences and receives almost nothing in funding. What little effort it gets, it gets because pencil and paper for some theoretical physics is vastly cheaper than a particle accelerator or a space observatory.
Physicists don't really use or do anything with string theory.
This is a great example of what is a serious problem in science.
The public reads pop-sci and thinks they have a good understanding of science. But they verifiably do not. The journalists and writers who create this content are not scientists, do not understand science, and do not have a good view into what is "meaningful" or "big" in science.
Remember cold fusion? It was never considered valid in the field of physics, because its proponents did a terrible excuse for "science", went on a stupid press tour, and at no point even attempted to disambiguate the supposed results they claimed. The media, however, told you this was something huge, something that would change the world.
It never even happened.
Science IS about small advances. Despite all the utter BS pushed by every "Lab builds revolutionary new battery" article, Lithium ion batteries HAVE doubled in capacity over a decade or two. It wasn't a paradigm shift, or some genius coming out of the woodwork, it was boring, dedicated effort from tens of thousands of average scientists, dutifully trying out hundreds and hundreds of processes to map out the problem space for someone else to make a good decision with.
Science isn't "Eureka". Science is "oh, hmm, that's odd...." on reams of data that you weren't expecting to be meaningful.
Science is not "I am a genius so I figured out how inheritance works", science is "I am an average guy and laboriously applied a standardized method to a lot of plants and documented the findings".
Currently it is Nobel Prize week. Consider how many of the hundreds of winners have names you've never even heard of.
Consider how many scientific papers were published just today. How many of them have you read?
That seems correct to me. Imagine having a hypothesis named after you that a) you disagree with, and b) seems fairly doubtful at best!
("Ortega most likely would have disagreed with the hypothesis that has been named after him...")
- Significant advances by individuals or small groups (the Newtons, Einsteins, or Gausses of the world) enable narrowly-specialized, incremental work by "average" scientists, which elaborates upon the Great Advancement...
- ... And then those small achievements form the body of work upon which the next Great Advancement can be built?
Our potential to contribute -- even if we're Gauss or Feynman or whomever -- is limited by our time on Earth. We have tools to cheat death a bit when it comes to knowledge, chief among them writing systems, libraries of knowledge, and the compounding effects of decades or centuries of study.
A good example here might be Fermat's last theorem. Everyone who's dipped their toes in math even at an undergraduate level will have at least heard about it, and about Fermat. People interested in the problem might well know that it was proven by Andrew Wiles, who -- almost no matter what else he does in life -- will probably be remembered mainly as "that guy who proved Fermat's last theorem." He'll go down in history (though likely not as well-known as Fermat himself).
But who's going to remember all the people along the way who failed to prove it? There were hundreds of serious attempts over the three and a half centuries the theorem had been around, and I'm certain Wiles referred to their work while working on his own proof, if only to figure out what doesn't work.
---
There's another part to this, and that's that as our understanding of the world grows, Great Advancements will be ever more specialized, and likely further and further removed from common knowledge.
We've gone from a great advancement being something as fundamental as positing a definition of pi, or the Pythagorean theorem in Classical Greece; to identifying the slightly more abstract, but still intuitive idea that white light is a combination of all other colours on the visible spectrum and that the right piece of glass can refract it back into its "components" during the Renaissance; to the fundamentally less intuitive but no less groundbreaking idea of atomic orbitals in the early 20th century.
The Great Advancements we're making now, I struggle to understand the implications of even as a technical person. What would a memristor really do? What do we do with the knowledge that gravity travels in waves? It's great to have solved sphere packing in some two-digit dimension n... but I'll have to take you at your word that it helps optimize cellular data network topology.
The amount of context it takes to understand these things requires a lifetime of dedicated, focused research, and that's to say nothing of what it takes to find applications for this knowledge. And when those discoveries are made and their applications are found, they're just so abstract, so far removed from the day-to-day life of most people outside of that specialization, that it's difficult to even explain why it matters, no matter what a quantum leap that represents in a given field.
It’s a bizarre debate when it’s glaringly obvious that small contributions matter and big contributions matter as well.
But which contributes more, they ask? Who gives a shit, really?
I think most would be very open to being checked on their priors, but I would be very surprised if those could be designated a single color. In fact, the humanities revel in various hues and grays rather than stark contrasts.
Not at all obvious to me. What were the small contributions to e.g. the theory of gravity?
I guess Kepler got by just using Brahe's observations, but for more modern explorations of gravity there's a boatload of people collecting data.
And Einstein didn't pull special relativity out of his brain alone. There were years of intense debate about the ether and things I have totally forgotten by now.
And take something like MOND: there have been tons of small contributions trying to prove / disprove / tweak the theory. If it ever comes out as something that holds, it'd be because of a lot of people doing the grind.