Case in point: everybody is doing AI research nowadays, and NIPS has something like 15k submitted papers. But the rate of innovation in AI is not much higher than it was 10 years ago; I would even argue it is lower. What are all these papers for? They help people build their careers as proofs of work.
I agree that many fields essentially treat papers as "proof of work", but not all fields are like that. When I worked as a mechanical engineer, publication was "the icing on the cake", not "the cake itself". It was a nice capstone you did after you had completed a project, interacted with your customers, built a prototype, filed a patent application, and so on. The "proof of work" was the product, basically, and you could build your career by making good products.
Now that I am working as a scientist, I see that many scientists have a different view of what their "product" is. I have always focused on the product being the science itself --- the theories I develop, the experiments and simulations I conduct, and so on. But for many scientists, the product is the papers, because that is what people use to evaluate your career. It does not have to be this way, but we would have to shift towards a better definition of what it means to be a productive scientist.
A typical approach to science is finding your niche and becoming the person known for that thing. You pick something you are interested in, something you are good at, something underexplored, and something close enough to what other people are doing that they can appreciate your work. Then you work on that topic for a number of years and see where you end up. But you can't do that in AI, because the field is overcrowded.
Exactly. It used to be that way in AI a decade ago. Different subfields used bespoke methods you could specialize in, and you could take a fairly undisturbed 3-5 years to work on a topic without the constant worry of being scooped and having to rush out something half-baked to plant flags. Nowadays methods are converging, and it's comparatively less useful to be an expert in some narrow application area, since standard ML methods work quite well across a broad range of uses (see the bitter lesson). This also means that a broader range of publications is relevant to everyone: you're supposed to be aware of the NLP frontier even if you are a vision researcher, you should know about RL developments, and so on. Thanks to more streamlined GitHub and Hugging Face releases, research results are also more readily available for others to build on, so publishing an incremental iteration on top of a popular method is much easier today than 15 years ago, when you first had to implement the paper yourself and needed expertise to avoid traps that were never mentioned in any paper and were assumed to be common knowledge.
It may not be a big problem for overall progress, but it makes people much more anxious. I see it in PhD students: many are quite scared of opening arXiv and academic social media, fearing that someone was faster and scooped them.
Lots of labs are working on very similar things, and the labs are less focused on narrow areas; everyone tries to claim broad ones. Meanwhile, people have less and less energy to peer review this flood of papers, and there's less incentive to do a good job there instead of working on the next paper.
This definitely can't go on forever, and there will be a massive reality check in AI/ML academia.
It's crazy: most Master's students applying for a PhD position already come with multiple top-conference papers, which a few years ago would have gotten you something like 2/3 of the way to the PhD itself; now it just gets you a foot in the door when applying to start one. Bachelor's students are already expected to publish to get a good spot in a lab for their Master's thesis or internship. And NeurIPS has a track for high school students to write papers, which, I assume, will boost their applications to start university. This type of hustle has been common in many East Asian countries and is getting globalized.
It's a prestige economy. There are other things too, like having worked with someone famous or having interned at a top company.
Makes me wonder, have I turned brilliant or is it quite unimpressive out there?
I’m inclined to suggest that the prestige economy started with truly prestigious research work, of which the institutions then "ordered" as many more as they could, hence the industrial levels of output. Not unlike VCs funding anything and everything for the possibility that a few turn out to be true businesses.
The problem is that of course everyone wants the glory of making some new groundbreaking, innovative, disruptive scientific discovery. And so excellence is equated with such discoveries, and everything has to be marketed as such. Nobody wants to accept that science is mostly boring: it keeps the flame alive and passes the torch to the next generation, but there is far less new disruption than is pretended. But again, a funding agency wants sexy new findings that look flashy in the press, with bonus points if they support its political agenda. The more careful and humble an individual scientist is, the less successful they will seem. Constantly second-guessing your own hypotheses, playing devil's advocate in earnest, doing double and triple checks, running more detailed experiments, etc. all take longer and have a better chance of revealing that the sexy effect doesn't actually exist.
> Makes me wonder, have I turned brilliant or is it quite unimpressive out there?
Obviously, it's impossible to say without seeing their work and your work. But for context, there are on the order of tens of thousands of top-tier AI-related papers appearing each year. The majority of these are not super impressive.
But I also have to say that what seems like "just common sense" may only look that way in hindsight, or you may be overlooking something because you don't know the history of the methods involved, or glossing over what someone more experienced in the field would highlight as the main "selling point" of that paper. Also, if common sense works well but nobody tried it before, it's still obviously important to know quantitatively how well it works, including a detailed analysis.
Funnily enough, the first “professional” coding I ever did was writing up a Stroop test in Visual Basic for a neuro professor, and I recall the effect being undeniably clear. At a personal anecdotal level, I would time myself with matching colors versus non-matching, and even with practice I could not bring my non-matching times down to my matching times.
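For anyone curious what that kind of test looks like in code, here is a minimal console sketch in Python (not the original Visual Basic program); the color set, trial count, and ANSI escape codes are my own choices for illustration:

```python
import random
import time

# Minimal Stroop-style sketch: show a color word printed either in its own
# color (match) or a different one (mismatch), and time how long the answer
# takes. Assumes an ANSI-capable terminal.
COLORS = {"red": "\033[31m", "green": "\033[32m", "blue": "\033[34m"}
RESET = "\033[0m"

def run_trials(n=10):
    times = {"match": [], "mismatch": []}
    for _ in range(n):
        word = random.choice(list(COLORS))
        ink = word if random.random() < 0.5 else random.choice(
            [c for c in COLORS if c != word])
        condition = "match" if ink == word else "mismatch"
        start = time.perf_counter()
        answer = input(f"Ink color of {COLORS[ink]}{word}{RESET}? ").strip().lower()
        elapsed = time.perf_counter() - start
        if answer == ink:  # only keep correct responses
            times[condition].append(elapsed)
    for condition, samples in times.items():
        if samples:
            print(f"{condition}: mean {sum(samples)/len(samples):.2f}s "
                  f"over {len(samples)} trials")

if __name__ == "__main__":
    run_trials()
```

Even a toy version like this tends to show the same asymmetry: mismatched trials come out consistently slower.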
But grants have been annoying for a long time. Even decades ago, the common wisdom in fields like biology was to write grant applications promising things you had already done recently, then spend the money on doing something new. But this doesn't work in super fast-moving fields like AI/ML, where the length of a grant is an eternity relative to the pace of change in the field.
Also, the bureaucrats want something sexy at the end, so academics overpromise, and then there's a bitter taste among the funders when the super fancy thing doesn't materialize. High-impact-factor publications can sweeten this bitterness somewhat, as can awards and the like. And a rising tide lifts all boats, so just by keeping up with SOTA they can woo the bureaucrats.
Recycling. Some papers seemed to be near duplicates of prior work by the same academic, with minor modifications.
Faddishness. Papers featuring the latest buzz technologies regardless of whether they were appropriate.
Questionable authorship. Some senior academics would get their names included on publications regardless of whether they had been actively engaged with that project. I saw a few academics get involved in risky and potentially interesting subjects, but they all risked their careers in doing so.
But most of all, there was a dearth of true innovation. The university noticed this and established an Innovation Centre. It quickly filled up with second-hand projects, all frustratingly similar to projects done in the US a few years earlier.
Of course there were exceptions, and learning from them was a genuine growth experience for which I am grateful.
Funding agencies can't evaluate the research itself, so they look at numbers: metrics, impact factors, citations, h-index, publication count, etc. They can't simply say "we pay this academic whether he publishes or not, because we trust he is still deep in important work even when he isn't at a stage where publishing makes sense", since people would suspect fraud, nepotism, and bias, and the funding is often taxpayer money. Not that the metrics prevent any of that, of course, but it seems that way. So metrics it is, and gaming the metrics via Goodhart's law it is.
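For readers who haven't run into it: the h-index mentioned above is the largest h such that the author has h papers with at least h citations each. A minimal sketch of the computation (the citation counts in the example are made up):

```python
def h_index(citations):
    """Largest h such that there are h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have at least `rank` citations
        else:
            break
    return h

# Made-up example: five papers cited 10, 8, 5, 4 and 1 times -> h-index of 4.
print(h_index([10, 8, 5, 4, 1]))  # prints 4
```

It is exactly this kind of single-number summary that is easy to report and easy to game.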
I don't think it's super bad, but it adds administrative work and busywork overhead on top of the actual research. Progress slows somewhat per person, since the same work has to be salami-sliced and marketed in chunks, but there are also far more people in the field. Most of them produce very low quality stuff, but that's not a big loss, because these people would not even have published anything some decades ago; they would just have held some teaching professorship and published every few years, perhaps only in their national language. It increases the noise, but there are ways to find the signal, and academics figure out ways to cut through it. It's not great and not super easy, and it pushes out a lot of people who dislike the grind, but there are plenty who see it as a relatively good deal to move to a richer country and do this.
Perhaps one expects overgeneralization in consulting blogs, though.
It's also better than any alternative, as far as I know. I haven't heard people pushing the idea of restructuring the process, the only exception being that journals shouldn't cost (that much) money and that institutions should instead pay for publishing a paper. That, however, wouldn't change the foundation of the process.
And you are back at square one: peer reviews become the currency used in academic politics. A relatively small group of tenured academics has every incentive to form a fiefdom of its own. Anonymization does not help, as everyone knows everyone else's work and papers anyway.