Instead of journals getting revenue from subscribers, they charge authors an "Article Processing Charge" (APC), which for ACM is $1450 in 2026 and is expected to go up. Authors from lower-middle-income countries get a discount. [1]
Authors are often associated with institutions (e.g. universities) that can cover the APC on behalf of the author through a deal with the journal. For the institution, instead of paying a subscription fee and publishing for free, it now pays a publishing fee and everyone reads for free.
Needless to say I prefer open access since those outside institutions can then read science, but the incentive model is heavily broken, and I'm not sure it's a good price to pay for the reward.
1. Journals want to publish lots of articles, so they are incentivised to provide a better publishing experience to authors (i.e. better tech, post-PDF science, etc) - Good.
2. Journals will stop prioritising quality, which means they will relinquish their "prestige" factor and potentially end the reign of glam-journals - Good.
3. Journals will stop prioritising quality, which means we can move to post-publication peer-review unimpeded - Good.
In CS, this is definitely not the case at all.
If you remove the "quality badge" factor, journals are totally useless. Everyone in my field knows how to use LaTeX, produce a decent-looking PDF and upload it to arXiv. This saves you from paying APCs, actually has better discoverability (everyone checks arXiv as a one-stop shop for papers; almost no one goes to check the dozens of different journals) and is much less hassle (no need to fiddle with arcane templates, idiosyncratic paper structures forced by each journal, idiosyncratic submission systems that look straight out of the 90s, typesetters that introduce more errors than they fix, etc.).
I am pretty sure that journals, at least in my field, subsist precisely as arbiters of quality; they don't provide any other value at all.
For example, for me to progress in my current job I either need a doctorate or to have published a number of peer-reviewed articles in recognised journals as first author. I have written two IETF RFCs and these count for nothing.
I am not a scientist, I am a software developer. I am not employed as a scientist, I am employed as a software developer. But the rules of the organisation are thus.
Yes, in fact this is mainly what I meant with "quality badge". It's a badge mostly for institutional bean-counting processes. Fellow scientists don't need it that much; typically we can separate the wheat from the chaff with a very quick skim.
Don't worry, leadership will find another metric to turn into a target, after the old metric has stopped working for a decade or two.
What follows is totally offtopic, but to be honest I don't check Semantic Scholar much because I have a grudge against it. Profiles just don't work for authors with accented characters in their name (such as myself); papers get dispersed between multiple automatically generated profiles. The staff is very helpful and will manually merge profiles for me when asked, but then I publish a new paper and wham, instead of incorporating it into the merged profile the system creates a new one. This has been going on for 6 years (if not more) and it's still unfixed.
For all the criticism that Google Scholar gets, I much prefer it because it gets that right. It's extremely annoying when tools give you extra work for committing the sin of not having an Anglo-Saxon name (this is much more common than unaffected people would expect) and just don't seem to care to fix it.
It is the editorial board, i.e. academic peers, not the publisher, that are (?were) the arbiters. As far as I can see, the primary non-degenerate function of journals is to provide a quality control mechanism that is not provided by "publishing" on your own webpage or arxiv.org. If journals really are going to abandon this quality control role (personally I doubt it) then I fail to see their relevance to science and academic discourse at large.
Journals should either become tech companies offering (and charging for) new and exciting ways to present scientific research, or simply stop existing.
Completely off topic, but thanks for creating AudioMulch, I don't use it actively anymore but it totally revolutionized how I approach working with sound!
Journals should absolutely play a role in maintaining quality and curating what they publish.
For discoverability. Someone's trivial finding may be someone else's key to a major breakthrough, but little good it does if it can't be easily found.
Not everyone.
Did you know that you can get rejected by arXiv if they think your submission isn't worth publishing?
It's an open access journal masquerading as a pre-print server. There are other, much more open pre-print servers.
On top of that, the chance of finding something as you suggest becomes that much more difficult. Smaller findings now get published in a more controlled setting and get lost in the stream.
Major journals are a net positive for surfacing important science.
Discovery is a search problem and it's pretty clear that we have the technical capacity to solve that problem if there is enough of a signal from widespread peer review.
Major journals become those that re-publish and report on the big debates and discoveries of the actually peer-reviewed journals and this would be the work of "journalists".
Non-experts sometimes bring perspectives that gatekeepers are blind to.
1. Open peer-review to anyone interested instead of only select few. HN is an example of this phenomenon but not for novelty specifically.
2. Permit publication of papers that are shorter for results to spread faster. AI papers are a good example of this phenomenon.
At that point why even have a journal? Let's just put everything up as Reddit posts and be done with it. We'll get commenting for free.
Maintaining quality standards is a good service; the journal system isn't perfect but it's the only real check we have left.
Great question.
> the journal system isn't perfect but it's the only real check we have left.
I wish I could agree but Nature et al continually publish bad, attention-grabbing science, while holding back the good science because it threatens the research programmes that gave the editorial board successful careers.
"Isn't perfect" is a massive understatement.
They seem well-positioned to be such arbiters. Who else do you suggest and why are they better?
Nobody can possibly read every article and few have the expertise to decide. There is no reason to think the 'wisdom of the crowds' is reliable - and lots of experience and research showing it is not, and easily manipulated by nonsense. I don't want Reddit or Twitter.
The arbiters are just our colleagues, at the end of the day. The journal is just the organisational mechanism, one of many possible mechanisms.
For example, I follow a weekly reading list (https://superlab.ca) published by a group of motor control labs at Western University. Those people are my arbiters of quality.
I want to continue having arbiters, and I want it to be the same people (broadly speaking). I just don't want them to be organised around journals because journals are toxic and lead to concentrated power over scientific narratives.
That's the first order effect, but you have to look beyond it. If authors have to pony up $1500, they will only do so for journals that have readers. The journals that are able to charge will be those that focus on their readership.
Heck, nobody even bats an eye if that publication is to be presented at a conference with a few thousand bucks in travel costs.
On the other hand predatory journals make a killing from APCs so there is some market for journals with no readers.
Most kids unfortunately did end up paying to publish.
If the tenure process focuses on quality of work, then it should work better.
Publishers have a finite capacity based on the number of credible peer reviewers. In the past, it felt very exploitative as an academic doing peer review for the economic benefit of publishing houses. I'd much rather have "public good" publishers with open access -- at least I feel like the "free" labor is aligned with the desired outcome.
I am certain that no system is perfect. My belief is that the Closed Access publishers have had free rein for so long that the largest ones abuse the system, and competitive models are useful to restore some balance. The model also restricts access to information.
I would argue that one downside to Open Access is that it incentivises volume over quality (as others have said), but I would judge that on a per-publisher basis just as I would any publisher. Closed Access models might also provide publication in areas of research that don't get tons of attention and research money.
I would also argue that there are other problems within research, such as the lack of reproducible results in many papers, which is a far more pressing issue. Just my 2 cents. Thank you for the honest discussion.
Like some escrow account that the universities pay into and the publisher payouts go to whoever best enables their authors to do the most useful work... as determined by the other authors.
An AI or search engine that identified the value of a contribution and paid the author directly from advertising money based on query traffic could be a way to solve this.
There's got to be ways to improve things though.
Found:
> Once your paper has been accepted, we will confirm your eligibility automatically through the eRights system, and you’ll get to choose your Creative Commons license (CC BY or CC BY-NC-ND).
The service they are providing is peer review and applying a reputable quality bar to submissions.
Think of it this way: if you have a good paper, why would you publish on arXiv instead of Nature? And then if you are Nature, why would you throw away this edge to become a free-to-publish (non-revenue-accruing) publication?
That is, unless ACM and Nature have a different approach to organizing peer review, in which case my correction is wrong. But I believe my point stands for many conferences and journals.
A different way to look at this is to question what "old slop" actually means.
The reason not to publish in Nature is that it might take a long time to get everything in the paper right enough to publish, to the point that it takes years before anyone gets to read it. Publishing fewer results per paper, faster, spreads the results faster.
For those fields with an existing market, meaning there is more than one high quality journal, the market will provide the right incentives for those publishers.
One hope might be that it incentivises institutions away from the publish-or-perish mindset and starts to discourage salami slicing and other such practices, allowing researchers to focus on putting out less work of a higher quality, but I suspect the fees would need to be larger to start seeing this sort of change.
For several conferences I have been involved with, the publishers' duties included the princely tasks of nagging authors for copyright forms, counting pages, running some shell scripts over the LaTeX, and nagging about bad margins, improperly capitalized section headers, and captions being incorrectly above figures.
Frankly, in the digital age, the "publishers" are vestigial and subtractive from the Scientific process.
If they did any serious typesetting, they'd be fine with a simple Markdown or e.g. RMarkdown file, BibTeX and/or other standard format bibliography file, and figures meeting certain specifications, but instead, you often get demands for Word files that meet specific text size and margin requirements, or to use LaTeX templates. There are exceptions to this, of course.
And who will curate the best research, especially for people outside your field? I can't follow the discussion in every field.
Journals receive papers for free, peer review is free, the only expenses are hosting a .pdf and maintaining an automated peer review system. I would've understood $14.50 but where does the two orders of magnitude higher number come from?
https://projects.propublica.org/nonprofits/organizations/131...
Just to be clear this is specifically _gold open access_. There are other options like green (author can make article available elsewhere for free) and diamond (gold with no charge).
Here’s the list of current members: https://libraries.acm.org/acmopen/open-participants
Knowing the reality of Brazil's public universities, the bureaucracy of the government and the condition of the students in general, I'm pretty sure we won't have articles from Brazil anymore.
Note that the maths becomes substantially worse when you look at countries poorer than Brazil.
The only downside is when you need to publish your own paper; in that case you can try to approach a university or organisation to help you finance it, or choose to publish in another journal.
Good publishing costs money but there are alternatives to the established models. Since 2021 we use the Subscribe to Open (S2O) model where libraries subscribe to journals and at the beginning of each subscription year we check for each journal whether the collected revenues cover our projected costs: if they do we publish that year's content Open Access, otherwise only subscribers have access. So no fees for authors and if libraries put their money where their mouth is then also full OA and thus no barriers to reading. All journals full OA since 2024. Easy.
Good faith question: aside from hosting costs, what costs are there, given the reviewers are unpaid?
Some keep repeating that Diamond OA is superior because publishing is free for authors and everything is immediately OA. And indeed it is, but only if you have someone who is indefinitely throwing money at the journal. If that's not the case then someone else pays, for example universities who pay their staff who decide to dedicate their work time to the journal. Or it's just unpaid labour so someone pays with their time. It's leading to the same sustainability issues that many Open Source projects run into.
> long-term preservation
How is that done beyond using PDF/A? I'm interested for my own files.
> Typesetting is a big item (for us becoming even more due to production of accessible publications), language editing, (meta-)data curation
I'm sure you've considered this idea; how does it work out in reality? What happens if you push one or more of those items onto the authors - e.g., 'we won't publish your submission without proper typesetting, etc.'? Or is that just not realistic for many/most authors?
The reason this doesn't work in practice is that authors don't always play nicely - not because of bad intentions or because they don't want to cooperate, but because of the realities of life: they don't have the time to study style guidelines in detail, they use their own auxiliary LaTeX macro collection because that's what they're used to, or they simply make oversights. Also, typesetting often includes a whole lot of meticulous things; if you listed them all in a guide sheet, that would be a long list of stuff at a level that's too detailed for authors.
I'm not saying it's impossible for authors to fully follow a publisher's style guide but there's a reason publishers employ full time workers who do nothing else but correct submitted manuscripts. Like many other professions, it's a trained skill.
As a submitter applying to multiple journals with arbitrary formatting requirements, you are often forced to meet arbitrary and irrelevant (visual) style requirements even before you are likely to be published, so of course you keep a base unformatted copy that you modify as needed to satisfy whatever bullshit policies each random journal demands. This wastes everyone's time.
The reason submitters don't "play nicely" is because the publishers' demands ("style guides") are demented here: they should just be asking for unformatted content (besides figures), certainly for submissions, and even for accepted publications: they should actually be doing the work of formatting and typesetting. But instead they force most of this on the submitters, to save costs by extorting the desperation of academics.
Accessibility in PDFs is also very difficult. I'm not sure any publishers are yet meeting PDF/UA-2 requirements for tagged PDFs, which include things like embedding MathML representations of all mathematics so screenreaders can parse the math. LaTeX only supports this experimentally, and few other tools support it at all.
Since this is obviously true, and yet since most journals (with some exceptions) demand you follow tedious formatting requirements or highly restrictive templates, this suggests, in fact, that journals are outsourcing the vast majority of their typesetting and formatting to submitters, and doing only the bare minimum themselves.
I'm calling bullshit. Look at how annoying the template requirements are for authors: https://www.acm.org/publications/authors/submissions, and note the stuff around Word files. Other journals can be much worse.
If any serious typesetting were being done by these journals, simple plaintext, Markdown (or RMarkdown) or minimal basic LaTeX, with, admittedly, figures generated to spec, would be more than enough for typesetters to manage. In fact, if you were doing serious typesetting, you wouldn't want your users doing a bunch of formatting and layout themselves, and would demand more minimal representations of the content only. Instead you have these ridiculous templates. I am not convinced AT ALL.
Do authors submitting to literary agents have to follow such absurd rules? I think not. Can modern blogging tools create beautiful sites with simple Markdown and images? Yes. So why do academic publishers demand so much from authors? IMO because they are barely doing anything at all re: typesetting and formatting and the like.
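To make the claim above concrete: a journal-side pipeline that accepts a bare Markdown file plus a BibTeX file and produces a house-styled PDF is a short script, not a research project. Here is a minimal sketch, assuming pandoc and a LaTeX engine are installed; the file names and the journal.latex template are hypothetical stand-ins for a publisher's own house style:

```python
# Minimal sketch of a publisher-side typesetting step: take an unformatted
# Markdown manuscript plus a BibTeX file and produce a house-styled PDF.
# Assumes pandoc and xelatex are installed; all file names are hypothetical.
import subprocess

def typeset(manuscript_md: str, bib_file: str, house_template: str, out_pdf: str) -> None:
    subprocess.run(
        [
            "pandoc", manuscript_md,
            "--citeproc",                       # resolve citations against the .bib file
            f"--bibliography={bib_file}",
            f"--template={house_template}",     # the journal's LaTeX template (house style)
            "--pdf-engine=xelatex",
            "-o", out_pdf,
        ],
        check=True,                             # fail loudly if conversion breaks
    )

if __name__ == "__main__":
    typeset("paper.md", "refs.bib", "journal.latex", "paper.pdf")
```

The point is not that this exact script is what publishers should run; it's that enforcing house style from unformatted input is automatable, so pushing formatting work onto submitters is a choice, not a necessity.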
The authors write up their research results.
The editors organize the review process together with the reviewers and the publishing process together with the publisher.
The reviewers read the papers and write their reviews.
The publishers publish the papers.
Stylesheets are typically provided by the publishers and passed on to the authors early on. The reason is two-fold: for one, the publisher wants to produce a high-quality product, and uniformity of layouts and styles is an important factor. But the second reason has to do with everything that happens before the publisher even comes into play: common stylesheets also provide some level of fairness, because they make the papers by different authors comparable to some degree, e.g., via the max length of a paper.
On top of that, authors often want to present their research in a specific way, and often have strong opinions about e.g. how their formulas are typeset, what aligns with what else, etc. and typically spend quite a bit of time tweaking their documents to look the way they want it. That is, the authors already have an interest in using something more powerful than Markdown.
But like I wrote in another comment here, in doing so, authors do not always adhere to the style guides provided by the publisher - not necessarily maliciously, but the result is the same. For instance, authors might simply be used to handling whitespace a certain way - because that's how they always do it. But if that clashes with the publisher's guidelines, it's one of the things the publisher has to correct in typesetting.
So, perhaps that's the confusion here also to some degree: the typesetting done by a publisher is in the majority of cases at a very fine-grained level. A lot of it is simply enforcing the rules that were missed by the authors (with the goal of fairness, comparability, and conformity) and small perfectionist's edits that you might not even notice at a casual glance but that typesetters are trained to spot.
As I said, if this is the case, the vast majority of typesetting and formatting has clearly been outsourced to submitters, and this means the amount of actual typesetting/formatting done by journals can only be minimal compared to in other domains.
EDIT:
> On top of that, authors often want to present their research in a specific way, and often have strong opinions about e.g. how their formulas are typeset, what aligns with what else, etc. and typically spend quite a bit of time tweaking their documents to look the way they want it. That is, the authors already have an interest in using something more powerful than Markdown.
Yes, generally, I don't want to present my formulas and figures in the shitty and limited ways the journal demands, but which would be trivial to present on a website (which is the only way 99.9% of people access articles now anyway). So journal requirements here are usually harmful and generally 20+ years outdated.
This doesn't follow logically, and even though I don't know how it is in other domains, I know for a fact that the amount of typesetting done for a typical CS journal is non-trivial.
> So journal requirements here are usually harmful and generally 20+ years outdated.
I see you have very strong opinions already formed - I don't expect to be able to change them.
Much like the journals that have figure requirements for print, even though the number of people who have viewed a figure in print in the last 20 years is an order of magnitude less than a rounding error.
Typesetting costs in 2025 are trivial, if you swallow this claim from academic publishers, you are being had:
https://academia.stackexchange.com/a/52009
https://www.lode.de/blog/the-cost-effective-revolution-autom...
https://svpow.com/2015/06/11/how-much-does-typesetting-cost/
In that case, a consistent input format assists with generation of the output formats, and without that, there'd be even more work.
---
That being said, I don't doubt publisher fees exceed their actual costs for this.
I always wonder why there's no universal academic interchange schema; it seems like something XML could have genuinely solved. I suppose the publishers have no incentive to build that, and reduce what they can charge for.
But general typesetting is very obviously a largely solved problem in 2025, regardless of the submission format, so since academic journals have weirdly specific input format requirements that are not demanded in other similar domains, it is clear they are doing dated / junk / minimal typesetting / formatting.
Also see what the costs are anywhere else, typesetting is a triviality:
https://academia.stackexchange.com/a/52009
https://www.lode.de/blog/the-cost-effective-revolution-autom...
https://svpow.com/2015/06/11/how-much-does-typesetting-cost/
https://old.reddit.com/r/publishing/comments/1cdx1jq/author_...
---
I read your links, and I think the most interesting relevant one with good numbers is the svpow.com link.
The StackExchange one says "34%" of their cost is "editorial and production". That includes more than type-setting, so it's not clear what subfraction is pure type-setting, and whether it's overpriced or not.
The Lode one is selling LaTeX templates, and they even say "Users without LaTeX experience should budget for learning time or technical assistance." It's more of a low-cost self-serve alternative, which probably doesn't include everything a journal does to maintain visual consistency. We can argue that full-service is overpriced, sure, but this is different, like complaining about coffee shops because the vending machine is cheaper.
The Reddit link is about a book author with a pure text novel, possibly the optimal scenario for cheap type-setting.
---
The svpow.com link was interesting, but, it seems like type-setting costs are usually bundled in (possibly to obscure overcharging, sure), so maybe it's better to critique the overall cost of academic publishing instead of trying to break out type-setting.
My $0.02, anyway.
* Access to the ACM Guide to Computing Literature
* AI-generated article summaries
* Podcast-style summaries of conference sessions
* Advanced search
* Rich article metadata, including download metrics, index terms and citations received
* Bulk citation exports and PDF downloads
The AI-generated article summaries have been getting a lot of discussion in my social circles. They have apparently fed many (all?) papers into some LLM to generate summaries... which is absurd when you consider that practically every article has an abstract as part of its text and submission. These abstracts were written by the authors and have been reviewed more than almost any other part of the articles, so they are very unlikely to contain errors. In contrast, multiple of my colleagues have found errors of varying scales in the AI-generated summaries of their own papers — many of which are actually longer than the existing abstracts. In addition, there are apparently AI-generated summaries for articles that were licensed with a non-derivative-works clause, which means the ACM has breached not just the social expectations of using accurate information, but also the legal expectations placed upon them as publishers of these materials.
I think it's interesting that the ACM is positioning these "premium" features as a necessity due to the move to open-access publishing [1], especially when multiple other top-level comments on this post are discussing how open-access can often be more profitable than closed-access publishing.
[0] https://dl.acm.org/premium
[1] The Digital Library homepage (https://dl.acm.org/) features a banner right now that says: "ACM is now Open Access. As part of the Digital Library's transition to Open Access, new features for researchers are available as the Digital Library Premium Edition."
Also AI-generated, presumably.
Monetizing knowledge-work is nearly impossible if you want everyone to be rational about it. You gotta go for irrational customers like university and giant-org contracts, and that will happen here because of institutional inertia.
I'm pleased that the references to other ACM papers do work.
But try to click on this one:
Bainbridge, L. 1983. Ironies of automation. Automatica 19(6): 775-779;
https://pdfs.semanticscholar.org/0713/bb9d9b138e4e0a15406006...
Fail! No way to read the paper without paying or pirating by using scihub (and even if you do get the .pdf via scihub, its references are not hyperlinks). This does not help humanity, it makes us look like morons. FFS, even the music industry was able to figure this out.
Would it be rude to print the link "https://doi.org/10.1016/0005-1098(83)90046-8" but actually link to https://sci-hub.se/10.1016/0005-1098(83)90046-8 when you click it? :-)
The DOI is key, then you can use a browser extension to do it, for example: https://github.com/natir/Redirector_doi_sci-hub
Maybe this is wishful thinking but a proliferation of openly accessible and competing independent publications could correct for a lot of the ills of the Goodhart effect in academic publishing. Market shifts that make this evolutionary pathway feasible and realistic are exciting.
My understanding is that this is at least to some degree in response to the surge of AI generated/assisted papers.
I used to work for a small publisher some years ago, and while this is true to some degree, we spent a lot of effort doing additional formatting or correcting formatting mistakes. For a typical journal publication, this process alone takes weeks if you're aiming at a high-quality publication.
On top of that, there are a lot of small things that you typically don't get if a paper is just put on the author's website, such as e.g. long-term archiving, a DOI, integration with services like dblp, metadata curation, etc.
Now, to what degree these features are an added value to you personally varies from person to person. Some people or even workshops are totally fine with simply publishing the PDFs written by the authors on a website, and there's nothing wrong with that, ymmv.
ACM started this open access effort back in 2020, I don't think that LLM generated papers were on their mind when they started it.
It seems absurd that researchers fret about where to submit their work and are subsequently judged on the impact of said work based in large part on a metric privately controlled by Clarivate Analytics (via Web of Science/Journal Citation Reports).
Clarivate does control it because they tend to have the best citation data, but the formula is simple and could be computed using data freely accessible in Crossref. Crossref tends to underreport forward citations, though, due to publishers not uniformly depositing data.
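To illustrate "the formula is simple": a rough sketch against the public Crossref REST API. The ISSN in the example is just illustrative, and because Crossref's is-referenced-by-count field is an all-time citation total rather than citations received in a specific year, this is only a crude proxy for the real JCR impact factor, on top of the under-reporting caveat above:

```python
# Crude citations-per-article proxy from the public Crossref REST API.
# Caveats: is-referenced-by-count is an all-time total (not citations within the
# JCR window), and coverage depends on publishers depositing their references.
import requests

def rough_citation_ratio(issn: str, year_from: int, year_to: int) -> float:
    url = f"https://api.crossref.org/journals/{issn}/works"
    params = {
        "filter": f"from-pub-date:{year_from}-01-01,until-pub-date:{year_to}-12-31",
        "rows": 1000,  # Crossref caps rows at 1000; real code would paginate with cursors
    }
    items = requests.get(url, params=params, timeout=30).json()["message"]["items"]
    if not items:
        return 0.0
    cites = sum(item.get("is-referenced-by-count", 0) for item in items)
    return cites / len(items)

# Example: average citations per article published in 2022-2023 (ISSN is illustrative).
# print(rough_citation_ratio("0001-0782", 2022, 2023))
```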
Note that older articles have already been open access for a while now:
> April 7, 2022
> ACM has opened the articles published during the first 50 years of its publishing program. These articles, published between 1951 and the end of 2000, are now open and freely available to view and download via the ACM Digital Library.
- https://www.acm.org/articles/bulletins/2022/april/50-years-b...
Elsevier makes over $3 billion dollars with the closed publication model. They force institutions to pay for bundles of journals they do not want. The Institutions often do not supply access to the general public despite the papers being produced with public money (and despite many of the Institutions being funded by public money).
Paying the cost upfront from the grant increases the availability to the public.
At low costs of $2k~$3k per publication[0]. Elsevier closed-access journals will charge you $0 to publish your paper.
>Elsevier makes over $3 billion dollars with the closed publication model.
Elsevier is also[1] moving to APCs for their journals because it's better business.
>The Institutions often do not supply access to the general public despite the papers being produced with public money
Journals (usually) forbid you from sharing the published (supposedly edited) version of a paper. You're allowed to share the pre-print draft (see arXiv). Institutions could (and some indeed do) supply those drafts on their own.
>Paying the cost upfront from the grant increases the availability to the public.
At the expense of making research more expensive and hence more exclusive. It's money rather than quality that matters now. Thus it isn't surprising that Frontiers & MDPI, two publishers very well known as open-access proponents, are also very well known for publishing garbage. It's ironic that it was once said that any journal asking you for money to publish your paper is predatory, yet nowadays this is somehow considered best practice.
[0]: https://plos.org/fees/ [1]: https://www.elsevier.com/open-access
If researchers cannot pay the APC then PLoS often reduces the fee. Also, half of that grant money is used by the institution as administrative overhead, and part of that overhead is paying Elsevier for journal access. If you want to decrease the cost of research, that may be a better place to start.
I agree that volume often tends to result in garbage, but the review is supposed to lessen that. Again, that garbage did get funded somehow.
I am not pushing PLoS - they are simply a publisher I am familiar with that uses this model.
The garbage thing is really interesting. I'm going to propose that another reason for garbage is academia's reliance on publication as the primary means of giving promotions and judging people's work. This leads to all kinds of dysfunction.
Was it Nobel Prize Winner Peter Higgs that said his University wanted to fire him because he didn't publish frequently enough?
I am a self-funded PhD student and no one paid me for the work that went into my open access paper. As it happens in this case the journal waived the publication fee, so no one paid anyone anything except I suppose the nominal pro-rata portion of my university fees that I paid.
They (or someone) needs to message the mods about it, it looks like they've been shadowbanned since their first comment 6 months ago.
While I do not disagree with this statement, this makes a significant difference for citizens who do not happen to work in academia. Before open access, the journals would try to charge me $30-50 per article, which is ridiculous - that's the price of a textbook. Since my taxes fund public research in any case, I would prefer to be able to read the papers.
I would also love to be able to watch the talks at academic conferences, which are, to a very large extent, paid for by the authors, too.
Kidding, I agree $30-50 per article is outrageous.
The entire education system is a racket.
It should be free and open access, no registration, no user tracking, no data collection, no social features, just a simple searchable paper host that serves as official record and access. You'd need a simple payment portal for publishing rights, but fair use and linking to the official public host would allow people to link and discuss elsewhere.
It's not a hard technical problem, it's not expensive. We do things the stupid, difficult, convoluted way, because that's where bad faith actors get to pretend they're providing something of value in return for billions of dollars.
I wonder if we could form a graph that would make a collusion ring intuitively visible (I’m not sure what—between papers, authors, and signings—should be the edges and the nodes, though). Making these relationships explicit should help discover this kind of stuff, right?
Another problem with my idea is that a lot of famous luminaries wouldn’t bother playing the game, or are dead already. But, all we can really do is set up a game for those who’d like to play…
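For what it's worth, a toy sketch of the graph idea above: treat "A reviewed/endorsed B" as a directed edge and flag short cycles, one crude signal of mutual back-scratching. The edge list and the cycle-length cutoff are entirely made up for illustration:

```python
# Toy sketch: model "who endorsed/reviewed whom" as a directed graph and flag
# short cycles (groups repeatedly endorsing each other). The data is invented.
import networkx as nx

endorsements = [
    ("alice", "bob"), ("bob", "carol"), ("carol", "alice"),  # a 3-cycle: looks suspicious
    ("dave", "erin"), ("erin", "frank"),                     # a chain: no cycle here
]

G = nx.DiGraph(endorsements)

# Short directed cycles are candidate collusion rings; a real tool would also
# weight by frequency, venue overlap, co-authorship history, etc.
suspicious = [cycle for cycle in nx.simple_cycles(G) if len(cycle) <= 4]
print("possible rings:", suspicious)  # e.g. [['alice', 'bob', 'carol']] (node order may vary)
```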
Profit motivated exclusivity under private control resulted in the enshittification vortex of adtech doom we're currently all drowning in. If you want prestige - top ten status in Google search results - you need to play the game they invented. Same goes for all of academia.
People stopped optimizing for good websites and utility and craft and started optimizing for keywords and technicalities and glitches in the matrix that bumped their ranking.
People stopped optimizing for beneficial novel research and started optimizing for topical grants, politically useful subjects, p hacking, and outright making shit up as long as it was valuable to the customers (grant agencies and institutions seeking particular outcomes, etc.)
Google is trash, and scientific publication is a flaming dumpster fire of reproducibility failure, fraud, politically motivated weasel wording nonsense, and profit motivated selective studies on medical topics that benefit pharma and chemical companies and the like.
Scientific publishing is free speech. As such, it shouldn't be under the thumb of institutions or platforms that gatekeep for profit or status or political utility or any of a dozen different incentives that will fatally bias and corrupt the resulting publications.
It's incredibly cheap and easy to host for free. It benefits everyone the most and harms the public the least to do it like that, and if a prestigious platform tries to push narrative bending propaganda, it can be directly and easily contradicted using the same open and public mechanisms. And if it happens in the other direction, with solid, but politically or commercially inconvenient research saying something that isn't appreciated by those with wealth or power, that research can be openly reproduced and replicated, all out in the open.
I agree, but..
>Therefore, the open system is the least bad of the available options.
this does not necessarily follow.
>A journal could still achieve prestige by curating and selecting the best available studies and research
See, this is just the kind of thing that I think will just not work when organized top-down like that. "Oh, we'll just make a prestigious journal by only letting the best papers in" - everyone could say that, but what would induce the authors of the best papers to submit them to your specific journal at all in the first place? Currently it's the fact that it's already prestigious, and this reputation has grown over many years through informal social processes that are very hard to codify.
>Scientific publishing is free speech. As such, it shouldn't be under the thumb of institutions or platforms that gatekeep for profit or status or political utility or any of a dozen different incentives that will fatally bias and corrupt the resulting publications.
Of course I agree, just to be clear I am a great proponent of openly accessible science - just think the prestige thing is an interesting corner case.
In addition to what @tokai said, I think it's also important to keep in mind that before Open Access the journal publishers charged subscription fees. The subscription fees were paid by universities and that was also likely largely taxpayer funded (e.g., using money from overheads charged to grants).
This isn't the golden age we might have hoped for, but open access is actually a desirable outcome even if as usual Capitalism tries to deliver the worst possible version for the highest possible price.
Each time I spent hours searching for an appropriate journal for my research. As time goes on, I feel like research is only for very wealthy people.
Publishers have been fighting OA for an incredibly long time. They are not foisting this on people because it’s a new great scheme they’ve come up with, they have been pushed to do it.
______
† As defined in the Berlin Declaration 22 years ago: https://openaccess.mpg.de/Berlin-Declaration
> Making the first 50 years of its publications and related content freely available expresses ACM’s commitment to open access publication and represents another milestone in our transition to full open access within the next five years.
( from https://www.acm.org/articles/bulletins/2022/april/50-years-b... )
I wouldn't have understood that nuance without the context given by your comment, but in my developer mind I analogize "freely available" to a "source available" license that they took on, as a step towards going open access ("free and open source") over time. I'm also happy to see that that transition seems on track as planned.
I haven’t been able to find anything that states otherwise. What changes in January is the policy for new publications.
Or at least they haven’t explicitly announced anything in that vein for post-2000.
In astrophysics we already have a journal like that, and it is gaining traction after several publishers switched to gold open access.
The system where the taxpayer subsidizes the enormous profit margins of Elsevier etc. while relying on free work by referees is crazy.
Just publish your stuff on a website... on a blog, on GitHub...
Open access does not mean a Creative Commons license (CC-BY or CC-BY-NC-ND).
On Jan 1, 2026, all ACM publications will be open access, but not all will be Creative Commons.
Per an email I received on April 11th, 2025 from Scott Delman:
“Thank you for your email. All ACM published papers in the ACM DL will be made freely available. All articles published after January 1, 2026 will be governed by a Creative Commons license (either CC-BY or CC-BY-NC-ND), but ACM will not be retroactively assigning CC licenses to the entire archive of ~800K ACM published papers.”
This is unfortunate, in my opinion, because a lot of the foundational computer science papers fall into that category.
#FreeAlanTuring
It just took them 30 years :)
The ACM always said it wanted to build bridges with practitioners but paywalled journals aren't the way to do it.
I would be 100% for more green cards or a better guestworker program of some kind, but I've seen so many good people on H-1Bs twisted into knots... Like the time the startup I was working for hired a new HR head who, two weeks in, treated an H-1B so badly the HR person quit. I wanted to tell this guy "your skills are in demand and you could get a job across the street" but that wasn't true.
I joined the IEEE Computer Society because it had a policy to not have a policy, which I could accept.
But I'm not sure if it is about your IP or the whole country; I guess it's the former. Who knows what the firewall god at Cloudflare does.
I also don't understand why anyone would ever want to get a PhD, which is just a manner of exchanging almost free labor for a nearly worthless piece of paper. It's like a participation trophy at this point for people that are not homo economici.
Why do research if you don't publish it? It's like running a farm and letting the food rot in the fields every year, nobody eating it. The value of knowledge is sharing it with others.
In a history of technology and science I read, the author pointed out that likely there have been many discoveries that, because they weren't shared outside the village, are lost to time (including because of a lack of widespread literacy). You might add the arts to that - how many great stories were lost?