To give a bit of additional context here, since the link doesn't have any:
The Firefox code has indeed recently moved from having its canonical home on mercurial at hg.mozilla.org to GitHub. This only affects the code; bugzilla is still being used for issue tracking, phabricator for code review and landing, and our taskcluster system for CI.
In the short term the mercurial servers still exist, and are synced from GitHub. That allows automated systems to transfer to the git backend over time rather than all at once. Mercurial is also still being used for the "try" repository (where you push to run CI on WIP patches), although it's increasingly behind an abstraction layer; that will also migrate later.
For people familiar with the old repos, "mozilla-central" is mapped onto the more standard branch name "main", and "autoland" is a branch called "autoland".
It's also true that it's been possible to contribute to Firefox exclusively using git for a long time, although you had to install the "git cinnabar" extension. The choice between learning hg and using git plus an extension was a bit of an impediment for many new contributors, who most often knew git and not mercurial. Now that choice is no longer necessary. Glandium, who wrote git cinnabar, wrote extensively at the time this migration was first announced about the history of VCS at Mozilla, and gave a little more context on the reasons for the migration [1].
So in the short term the differences from the point of view of contributors are minimal: using stock git is now the default and expected workflow, but apart from that not much else has changed. There may or may not eventually be support for GitHub-based workflows (i.e. PRs) but that is explicitly not part of this change.
On the backend, once the migration is complete, Mozilla will spend less time hosting its own VCS infrastructure, which turns out to be a significant challenge at the scale, performance and availability needed for such a large project.
If I may - what were the significant scale challenges for self hosted solution?
The obvious generic challenges are availability and security: Firefox has contributors around the globe and if the VCS server goes down then it's hard to get work done (yes, you can work locally, but you can't land patches or ship fixes to users). Firefox is also a pretty high value target, and an attacker with access to the VCS server would be a problem.
To be clear I'm not claiming that there were specific problems related to these things; just that they represent challenges that Mozilla has to deal with when self hosting.
The other obvious problem at scale is performance. With a large repo both read and write performance are concerns. Cloning the repo is the first step that new contributors need to take, and if that's slow then it can be a dealbreaker for many people, especially on less reliable internet. Our hg backend was using replication to help with this [1], but you can see from the link how much complexity that adds.
Firefox has enough contributors that write contention also becomes a problem; for example pushing to the "try" repo (to run local patches through CI) often ended up taking tens of minutes waiting for a lock. This was (recently) mostly hidden from end users by pushing patches through a custom "lando" system that asynchronously queues the actual VCS push rather than blocking the user locally, but that's more of a mitigation than a real solution (lando is still required with the GitHub backend because it becomes the place where custom VCS rules, which previously lived directly in the hg server but don't map onto GitHub features, are enforced).
[1] https://mozilla-version-control-tools.readthedocs.io/en/late...
It is free and robust, and there is not much bad Microsoft can do to you. Because it is standard git, there is no lockdown. If they make a decision you don't like, migrating is just a git clone. As for the "training copilot" part, it is public, it doesn't change anything that Microsoft hosts the project on their own servers, they can just get the source like anyone else, they probably already do.
Why not Codeberg? I don't know, maybe bandwidth, but if that's standard git, making a mirror on Codeberg should be trivial.
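For what it's worth, the mechanics of such a mirror really are trivial. Here's a toy sketch, with local bare repos standing in for GitHub and Codeberg (all names below are made up):

```shell
set -e
# A bare repo standing in for the GitHub-hosted original.
git init -q --bare origin.git
git clone -q origin.git work
git -C work config user.email you@example.com
git -C work config user.name "You"
git -C work commit -q --allow-empty -m "first"
git -C work push -q origin HEAD

# --mirror copies every ref (all branches and tags), not just the default branch.
git clone -q --mirror origin.git mirror.git

# A bare repo standing in for the Codeberg mirror; one push moves everything.
git init -q --bare newhost.git
git -C mirror.git push -q --mirror "$PWD/newhost.git"
```

With real hosts the only differences are the URLs and credentials; for a repo the size of Firefox, bandwidth and keeping the sync running are the non-trivial parts.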
That's why git is awesome. The central repository is just a convention. Technically, there is no difference between the original and the clone. You don't even need to be online to collaborate, as long as you have a way to exchange files.
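As a concrete illustration of the offline point: git can package history into a single ordinary file that you can pass around on a USB stick or attach to an email, no server involved (names below are made up):

```shell
set -e
# Alice makes a commit...
git init -q alice
git -C alice config user.email alice@example.com
git -C alice config user.name Alice
echo hello > alice/notes.txt
git -C alice add notes.txt
git -C alice commit -q -m "add notes"

# ...and packs her history into a plain file with `git bundle`.
git -C alice bundle create ../changes.bundle --all HEAD

# Bob, with no network and no server, clones straight from the file.
git clone -q changes.bundle bob
```

Bob can later produce his own bundle of new commits for Alice to fetch from; `git bundle` accepts ranges like `main..HEAD` for incremental exchanges.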
I can definitely access the source code. The review tools are not on GitHub. But is it even possible to host my proposed changes elsewhere, not on GitHub? I suppose the answer is negative, but surprises happen.
This is a relatively theoretical question, but it explores the "what bad Microsoft can do to you" avenue: it can close your GitHub account, likely seriously hampering your ability to contribute.
You submit patches to Phabricator, not to GitHub.
https://firefox-source-docs.mozilla.org/contributing/contrib...
> To submit a patch for review, we use a tool called moz-phab.
That does mean you need an account on Phabricator, but not on GitHub https://moz-conduit.readthedocs.io/en/latest/phabricator-use...
Recently I also got "rate limited" after opening about three web pages.
Microsoft can do something to you, and that is to arbitrarily deny you access after you've built a dependence on it, and then make you jump through hoops to get access back.
People who haven't used it while logged out recently may be surprised to find that they have, for some time, made the site effectively unusable without an account. Doing one search and clicking a couple of results gets you temporarily blocked. It's effectively an account-required website now.
You're really going to make me clone a project locally to do a search? I just end up using Google to search GitHub. It's so stupid.
I can't find it now, but sometime in the past week or so I saw something that (IIRC) related to the BBC (?) blocking a ton of badly-behaved obvious scraper traffic that was using Meta (?) user-agents but wasn't coming from the ASNs that Meta uses. The graphs looked like this ended up reducing their sustained traffic load by about 80%.
Items where I'm doubting my recall (since I didn't find anything relevant doing some quick searches) are marked with (?)
The usual way I notice I'm not logged in is by getting blocked after interacting with ~3 different parts of the site within a minute. If I search, click a single repo, and stay in that repo without using search, it seems to go OK, but if I interact with search and then a couple repos, or search again, temp-banned.
This might be a country-dependent thing.
However, it is clearly not correct to say that you were banned from GitHub. It’s like saying “I was banned from Google because I refuse to use computing devices.”
Not really a ban, just self flagellation, which, again, whatever works for you.
GitHub seems to have no legitimate need for a user's phone number. Since there's not even a way to tell them to go pound sand, I'd say opting out of disclosing sensitive information they don't need by not signing in/up, and equating their unreasonable demand with a ban, is respectable.
I mean, obviously you disagree with them being generally reputable, but you must realize that’s not a broad opinion, and they are certainly better at preventing data breaches than the average company that stores phone numbers.
Sincerely though, I hope you get your GDPR request sorted.
Are you talking about Microsoft here? https://en.wikipedia.org/wiki/Microsoft#Controversies
Not mine, and it sucks that this means I'm no longer welcome as a Firefox contributor unless I move countries just to register a monthly contract for a dedicated GitHub-accepted SIM card.
Once you trigger the phone-number verification requirement, your account is globally shadowbanned and support is blocked pending SMS code verification. Aside from the privacy issue, this completely blocks people in several countries (beyond the ones officially banned outright due to sanctions) to which GitHub won't even try to SMS/call.
Remember that registering a second account would be violating GitHub ToS.
Far less than these?
https://news.ycombinator.com/item?id=40592789
https://news.ycombinator.com/item?id=12305598
https://en.wikipedia.org/wiki/Criticism_of_Microsoft#Privacy...
This is unlikely.
I've been gone for a few years now and have no insight into this decision, so take anything I say with a grain of salt. Having said that, I think that, for better or worse, GitHub is probably the best location simply because it provides the lowest barrier to entry for new contributors.
I know that's spicy enough to trigger dozens of keyboard warriors hitting the reply button, but every little thing that deviates from "the norm" (for better or for worse, GitHub is that) causes a drop-off in people willing to contribute. There are still people out there, for example, who refuse to create an account on bugzilla.mozilla.org (not that this move to GitHub changes that).
Given the post above, issues with self-hosting were at least part of the reason for the switch, so a new self-hosted arrangement is unlikely to have been considered at all.
I don't know what the state of play is right now, but non-self-hosted GitLab has had some notable performance issues (and, less often IIRC, availability issues) in the past. This would be a concern for a popular project with many contributors, especially one with a codebase as large as Firefox.
Of course Mozilla is free to make their own choices. But this choice will be read as the latest alarm bell for many already questioning the spirit of Mozilla management.
I used a GitLab + GitLab Runner (docker) pipeline for my Ph.D. project, which did some verification after every push (since the code was scientific), and even that took 10 minutes to complete despite being pretty basic. Some Debian packages need more than three hours in their own CI/CD pipeline.
Something like Mozilla Firefox, which is tested against regressions, performance, etc. (see https://www.arewefastyet.com), needs serious infrastructure and compute time to build in n different configurations (stable / testing / nightly, times all the operating systems it supports) and then test at that scale. This essentially needs a server farm to complete in reasonable time.
An infrastructure of that size needs at least two competent people to keep it connected to all relevant cogs and running at full performance, too.
So yes, it's a significant effort.
Firefox does indeed have a large CI system and ends up running thousands of jobs on each push to main (formerly mozilla-central), covering builds, linting, multiple testsuites, performance testing, etc. all across multiple platforms and configurations. In addition there are "try" pushes for work in progress patches, and various other kinds of non-CI tasks (e.g. fuzzing). That is all run on our taskcluster system and I don't believe there are any plans to change that.
Your guess is wrong as Firefox doesn't use GitHub for any of that, and AFAIK there are no plans to either.
The blog post linked in the top comment goes into this in some detail, but in brief: git log, clone, diff, showing files, blame, etc. are CPU expensive. You can see this locally on a large repo if you try something like "git log path/to/dir".
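A quick local sketch of why that particular command is expensive: git keeps no per-path index of history, so a path-limited log has to walk the commit graph and diff trees as it goes. The toy repo below is trivially fast, but the same walk scales with history size:

```shell
set -e
git init -q demo
git -C demo config user.email you@example.com
git -C demo config user.name "You"
mkdir -p demo/path/to/dir
for i in 1 2 3; do
  echo "change $i" > demo/path/to/dir/file.txt
  git -C demo add path/to/dir/file.txt
  git -C demo commit -q -m "change $i"
done
# To answer this, git diffs every commit against its parent and keeps only
# those that touched the path -- CPU-bound work a server repeats for every
# history, blame, or file-listing page it renders.
git -C demo log --oneline -- path/to/dir
```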
Add to this all the standard requirements of running any server that needs to be 1) fast, and 2) highly available.
And why bother when there's a free service available for you?
Given the frequency I see comments on this site about Mozilla trying to do far too much rather than just focusing their efforts on core stuff like Firefox, I'm honestly a bit surprised that there aren't more people agreeing with this decision. Even with the other issues I have with Mozilla lately (like the whole debacle over the privacy policy changes and the extremely bizarre follow-up about what the definition of "selling user data" is), I don't see it as hypocritical to use GitHub while maintaining a stance that open solutions are better than closed ones, because trying to make an open browser in the current era is a large and complicated enough goal that it's worth setting a high bar for taking on additional fights. Insisting on maintaining their own version control servers feels like an effort they don't need to be taking on right now, and I'd much rather Mozilla pick their battles carefully like this more often than less. Fighting for more open source hosting at this point is a large enough battle that maybe it would make more sense for a separate organization focused on that to be leading the front; providing an alternative to Chrome is a big enough struggle that it's not crazy for them to decide that GitHub's dominance has to be someone else's problem.
I would love to see Mozilla moving to Codeberg.org (though I’d ask if they’re okay with it first) or something like that. Using GitHub is okay-ish? Personally, I frown upon it, but again I agree – it’s not the most important issue right now.
I'm not claiming that my comment was 100% accurate, but they plan to move some of the CI to GitHub, at least.
Really? I've seen no indication of that anywhere, and I'd be amazed if they did.
They're not using github PRs, and github actions really fights against other development workflows... not to mention they already have invested a lot in TaskCluster, and specialized it to their needs.
Where are you getting that from?
Grim.
The best reason to be using GitHub at all is to maximize the portion of your users who are comfortable submitting bug reports, as they already have an account and are familiar with how the platform works (due to network effects). Projects which host code on GitHub but choose not to take bug reports there are effectively gatekeeping bug submission, by asking their users to jump through the hoops of finding the site, signing up for it, and learning to use a new interface. I've done this before, with Bugzilla and Firefox, to submit a report for an accessibility bug on macOS, and it was a pain in the ass that I put off for a long time before becoming annoyed enough to go through the process. (End result: the bug was confirmed but never fixed.)
That said, there are also other teams and projects who do use GitHub for issue tracking. However the closer to Firefox/Gecko you are the harder this gets. For example it's hard to cross-reference GitHub issues with Bugzilla issues, or vice versa. I've seen people try to build two-way sync between GitHub and Bugzilla, but there are quite considerable technical challenges in trying to make that kind of cross-system replication work well.
However, your point that GitHub makes issue submission easier for people who aren't deeply embedded in the project is a good one. I'm directly involved with webcompat.com, which aims to collect reports of broken sites from end users. It uses a GitHub issue tracker as the backend, allowing developers to report directly through GitHub, with a web-form frontend so that people without a GitHub account can still submit reports (as you can imagine, quite some effort is required to ensure that it's not overwhelmed by spam). So finding ways to enable users to report issues is something we care about.
However, even in the webcompat.com case where collecting issues from people outside the project is the most important concern, we've taken to moving confirmed reports into bugzilla, so that they can be cross-referenced with the corresponding platform bugs, more easily used as inputs to prioritization, etc. That single source of truth for all bugs turns out to be very useful for process reasons as well as technical ones.
So — (again) without being any kind of decision maker here — I think it's very unlikely that Firefox will move entirely to GitHub issues in the foreseeable future; it's just too challenging given the history and requirements. Having some kind of one-way sync from GitHub to Bugzilla seems like a more tractable approach from an engineering point of view, but even there it's likely that there are non-trivial costs and tradeoffs involved.
> are effectively gate keeping bug submission
Of course this could be a benefit… Have you seen the quality of bug reports coming from some people, even other devs? :-)
It's really not that hard to sort through user bug reports, find and categorize the ones that are actionable and respond with boilerplate requests for more information to the rest. It's not super enjoyable, it's work, but it's absolutely manageable and devs need to keep some perspective when they complain about it. I think maybe a mandatory part of every CS education should be an internship in messy and difficult manual labor so that devs have some real context about what it means for a job to be unpleasant.
Nope, at least not this dev.
I want to take bug reports from people who can actually report something useful (not “something somewhere aint working” or “is the system OK?”), use their brain just slightly when making the report (if you got an error, it perhaps makes sense to include that message in the report, especially when you get a screen that explicitly states “please include this information in any bug reports”), and can read and pay attention to your responses when you request more information (actually answering the questions, all of them, not just one of them that they think is most relevant, or something different that they think is relevant instead) and who don't get offended when they respond to a further request for the required information with “this is getting urgent now!” and I reply with “then it is getting urgent that you send the information that I've requested twice now”¹.
> Devs want to only take bug reports from other devs
Furthermore, I've had terrible reports from devs and other technical types. Some non-technical end users have in the past sent me far better reports than some devs seem capable of. This is particularly galling because they then complain about how bad end user reports/requests are… I don't mind it from a fresh junior, but anyone else in our line of work should know better.
> It's really not that hard to sort through user bug reports…
It also isn't hard for people to properly describe the issue they are experiencing. It would be nice to be met half way. :)
TBH a lot of my irritation comes from the industry my employer operates in. While I try to stay away from the money and contracts side even more than I try to stay away from being end-user facing, I know that they often request our fees be itemised, and then expect a reduction for the bit marked “first line support” or similar because “our people will triage problems from our users and collate the details”, but their idea of “triage & collate” is just forwarding every email they get to support@ourdomain.tld… This narrow world view might not be relevant to a large public project.
> internship in messy and difficult manual labor so that devs have some real context about what it means for a job to be unpleasant
Younger me worked retail in a theme park, and did warehouse work, and had friends who managed a farm³, I have a fair idea what a hard day of work is.
----
[1] Actually, this no longer happens. My employer is bright enough that there is a buffer between me and client-facing tasks, except occasionally when something properly technical² needs discussing between their tech people and ours.
[2] Though “properly technical” can sometimes mean explaining how key-based auth works for SSH, to someone with a grandiose job title like “Infrastructure Architect”!
[3] Now that is a multi-faceted set of physical and mental complications which make my life, and those of people sending bad bug reports and change requests, look particularly easy.
GitHub issues is terrible compared to Mozilla's Bugzilla instance. It's not even close.
This does not mean that reporting more bugs would result in noticeable improvements; there are likely already too many reported bugs to process them all.
At least, that is my impression based on the fate of my own bug reports.
I think you can dislike the general move to a service like GitHub instead of GitLab (or something else). But I think we all benefit from the fact that Firefox's development continues and that we have a competing engine on the market.
Both patches have been ignored thus far. That's okay, I understand limited resources etc. etc. Will they ever be merged? I don't know. Maybe not.
I'm okay with all of this, it's not a complaint. It's how open source works sometimes. But it also means all that time I spent figuring out the contribution process has been a waste. Time I could have spent on more/other patches.
So yeah, there's that.
It's certainly true that making the bar higher will reduce low-quality contributions, because it will reduce ALL contributions.
(aside: FreeBSD does accept patches over GitHub, but it also somewhat discourages that and the last time I did that it also took a long time for it to get reviewed, although not as long as now)
There's no easy solution. Much like the recent curl security kerfuffle, the signal:noise ratio is important and hard to maintain.
Email is simple. It's just text, there's no weird javascript or html or lag. I don't have to open X11. I can just open mutt and read or write. I can type "git send-email". It's all open source, so I can read the code to understand it, and write scripting around it. It runs on any computer with ease. Even on a slow connection, it's quite speedy.
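The whole flow is two commands. A toy sketch (addresses below are placeholders; `git send-email` itself needs SMTP settings in your git config, so it's shown commented out):

```shell
set -e
git init -q patchdemo
git -C patchdemo config user.email dev@example.com
git -C patchdemo config user.name Dev
echo fix > patchdemo/fix.txt
git -C patchdemo add fix.txt
git -C patchdemo commit -q -m "fix: a small thing"

# Each commit becomes one plain-text email message on disk.
git -C patchdemo format-patch -1 HEAD

# Mailing it is one more command (requires sendemail.* config, not run here):
# git -C patchdemo send-email --to=list@example.org 0001-*.patch
```

The .patch file is ordinary text, so you can read it, edit it, or pipe it through scripts before anything touches the network, which is exactly the scriptability being described.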
I totally agree with you about Phabricator though.
You can do nearly everything the website does entirely in the terminal.
I have some unconventional workflows. And I try not to bother anyone else with it, especially in a volunteer driven open source context. It would be selfish to do otherwise.
To be honest based on what you've written here, keeping you out of my projects sounds like a good thing. What a bunch of piss and vinegar over how other people are choosing to work in a way that works for them.
Every contributor is valuable, it's in the name, the definition of "contribute".
Any bar to entry is bad, and it is certainly never the solution to a different problem (not being able to manage all contributions). If anything, in the longer run, it will only make that problem worse.
Now, to be clear: while I do think GitHub is currently the "solution" that lowers barriers, allows more people to contribute, and as such improves your open source project, the fact that this is so points to a different problem. There isn't any good alternative to GitHub (with a broad definition of "good"). Why is that, and what can we do to fix it, if anything?
In practice, if you get dozens of PRs from people who clearly did it to bolster up their CV, because their professor asked them or something like that, it just takes a toll. It's more effort than writing the same code yourself. Of course I love to mentor people, if I have the capacity. But a good chunk of the GitHub contributions I've worked on were pretty careless, not even tested, that kind of thing. I haven't done the maintainer job in a while, I'm pretty terrified by the idea of what effect the advent of vibe coding had on PR quality.
I feel pretty smug talking about "PR quality", but if the volume of PRs that take a lot of effort to review and merge is high enough, it can be pretty daunting. From a maintainer's perspective, the best thing to have is thoughtful people who genuinely use and like the software and want to make it better with a few contributions. That is unfortunately, in my experience, not the most common case, especially on GitHub.
But I just don't see how GitHub or a PR-style workflow relates. Like I said in my own reply: I think it's just because you'll receive less contributions overall. That's a completely fair and reasonable trade-off to make, as long as you realise that is the trade-off you're making.
Proposed contributions can in fact have negative value, if the contributor implements some feature or bug fix in a way that makes it more difficult to maintain in the long term or introduces bugs in other code.
And even if such a contribution is ultimately rejected, someone knowledgeable has to spend time and effort reviewing the code first - time and effort that could have been spent on another, more useful PR.
For one, it's semantic: It's only a contribution if it adds value to a project.
What you probably mean is that "not everything handed to us is a contribution". And that's valid: there will be a lot of issues, code, discussions, ideas, and more that subtract, or have negative value. One can call this "spam".
So, the problem to solve, is to avoid the "spam" and allow the contributions. Or, if you disagree with the semantics, avoid the "negative value contributions" and "allow the positive value contributions".
A part of that solution is technical: filters, bots, tools, CI/CD, etc. (many of which GitHub doesn't offer, BTW). A big part is social and process: guidelines, expectations, codes of conduct, etc. I've worked in some open source projects where the barriers to entry were really high, with endorsements, red tape, sign-offs, waivers, proof-of-conduct requirements, etc. And a large part is simply the inevitable "resources": it takes resources to manage the incoming stuff, enforce the above, communicate it, forever, etc.
If someone isn't willing to commit these resources, or cannot, then ultimately the right choice is to simply not allow contributions - it can still be open source, just not taking input. Like e.g. SQLite.
Quite obviously, any incidental friction makes this ever so slightly harder or less likely. Good contributions don't necessarily or only come from people who are already determined from the get go. Many might just want to dabble at first, or they are just casually browsing and see something that catches their attention.
Every project needs some form of gatekeeping at some level. But it's unclear to me whether the solution is to avoid platforms with high visibility and tools that are very common and familiar. You probably need a more sophisticated and granular filter than that.
You can easily craft an email for that. No need to create a full PR.
It's that sense of superiority that pisses me off.
Many maintainers condescendingly reply "contributions welcome" in response to user complaints. People like that had better accept whatever they get. They could have easily done it themselves in all their "high quality" ways. They could have said "I don't have time for this" or even "I don't want to work on this". No, they went and challenged people to contribute instead. Then when they get what they wanted they suddenly decide they don't want it anymore? Bullshit.
You're making the assumption that these are "high quality" projects, that someone poured their very soul into every single line of code in the repository. Chances are it's just someone else's own low effort implementation. Maybe someone else's hobby project. Maybe it's some legacy stuff that's too useful to delete but too complex to fully rewrite. When you dive in, you discover that "doing it properly" very well means putting way too much effort into paying off the technical debts of others. So who's signing up to do that for ungrateful maintainers for free? Who wants to risk doing all that work only to end up ignored and rejected? Lol.
Just slap things together until they work. As long as your problem's fixed, it's fine. It's not your baby you're taking care of. They should be grateful you even sent the patches in. If they don't like it, just keep your commits and rebase, maybe make a custom package that overrides the official one from the Linux distribution. No need to worry about it, after all your version's fixed and theirs isn't. Best part is this tends to get these maintainers to wake up and "properly" implement things on their side... Which is exactly what users wanted in the first place! Wow!
FOSS maintainers are not a unified mind. The people who go "contributions welcome" and "#hacktoberfest" are somewhere near one end of the spectrum, and the folks dealing with low-effort contributions are somewhere near the other end of the spectrum.
Good maintainers may be firm but they are always nice and grateful, and they treat people as their equals. They don't beg others for their time and effort. If they do, they don't gratuitously shit on people when they get the results. They work with contributors in order to get their work reviewed, revised and merged. They might even just merge it as-is, it can always be refactored afterwards.
That's hard to do and that's why doing it makes them good maintainers. Telling people their "contributions are welcome" only to not welcome their contributions when they do come is the real "low effort".
Thank you for a clear and concise illustration of why some contributions are really not welcome.
Just about the only thing I will agree with you on is that projects should indeed make it clear what the bar for the proper contribution is. This doesn't mean never saying "contributions are welcome", if they are indeed welcome - it's still the expectation for whoever is contributing to do the bare minimum to locate those requirements (e.g. by actually, you know, reading CONTRIBUTING.md in the root of the repo before opening a PR - which many people do not.)
Dismissing users making feature requests and reporting bugs with a "PRs welcome" cliche is quite disrespectful and very much a sign of a superior attitude.
no, I am not obligated to merge badly written PRs introducing bugs just because I had no time to implement the feature myself
Diversity, here too, is of crucial importance. It's why some open source software has sublime documentation and impeccable translations, while other software is technically perfect but undecipherable. It's why some open source software has cute logos or appeals to professionals, while other software remains a hobby project that no one ever takes seriously despite its technical brilliance.
No. I've definitely seen people who created a multitude of misleading bug reports and a flood of stupid feature requests. I've personally done a bit of both.
There are people who do both repeatedly, who file issue reports without filling in the requested fields, or who open a new issue when their previous report was closed.
I once got a bug report in which someone ranted that the app was corrupting data. It turned out (after I wasted time investigating) that the user had broken the data themselves with different software, through its misuse.
There were PRs adding backdoors. This is not a valuable contribution.
There were PRs done to foment useless harmful political mess.
Some people pretend to be multiple people and argue with themselves in pull requests or issues (using multiple accounts or in more bizarre cases using one). Or try to be listed multiple times as contributor.
Some people try to sneak in some intentionally harmful content one way or another.
Some contributors are NOT valuable. Some should be banned or educated (see https://www.chiark.greenend.org.uk/~sgtatham/bugs.html ).
Fighting spam isn't done by using unfamiliar tech, but by actually fighting the spam.
With good contributor guidelines, workflows, filters, etc.
Contributions that don't adhere to the guidelines, or cannot fit in the workflow can be dismissed or handed back.
Two random examples of things I came across in PRs recently:
"Sorry, this isn't on our roadmap and we only work on issues related to the roadmap as per the CONTRIBUTION-GUIDELINES.md and the ROADMAP.md"
"Before we can consider your work, please ensure all CI/CD passes, and the coding style is according to our guidelines. Once you have fixed this, please re-open this ticket"
That is fine, a solved problem.
Using high-barrier tech won't keep intentionally harmful contributions away. It won't prevent political messes or flamewars. It won't keep ranters away. It won't help with contributors' feelings of rejection, and so on. Good review procedures, with enough resources, help prevent harmful changes. Guidelines, codes of conduct, and the resources and tech to enforce them help against rants, bullying, and flamewars; "hg vs git" does not. Good up-front communication of expectations is the solution to people demanding or making changes that can never be accepted.
For projects that I'd be interested in being a long-term contributor to, this is obviously different, but you don't become a long-term contributor without first dealing with the short-term, and if you make that experience a pain, I'm unlikely to stick around.
A big part of this is the friction in signing up; I hope federated forges become more of a thing, and I can carry my identity around and start using alternate forges without having to store yet another password in my password manager.
"Friction in signing up" being a big part for you is also weird, considering basically all free software GitHub alternatives (Gitea, GitLab, Forgejo) support SSO via GitHub.
I just signed out and started the signup flow. It allows me to use an email on my own domain, and I got as far as verifying my email before I canceled the flow; there hadn't been any requirement for a phone number or Microsoft account yet.
Contributors who can't use GitHub because either 1) they are new and can't activate an account, 2) their old grandfathered account is no longer usable, or 3) their old account is doxxed and they can no longer safely contribute under the old identity.
Once you trigger the phone-number verification requirement, your account is globally shadowbanned and support is blocked pending SMS code verification. Aside from the privacy issue, this completely blocks people in countries where GitHub won't even try to SMS or call.
Remember that registering a second account would be violating GitHub ToS.
Not to mention the AI-generated security "issues" reported against curl, for example, which suggest there can indeed be negative value in reports and contributions.
I don't think this is the place for a debate about the overall utility of open source.
Alternatives to GitHub
We lament Google's browser engine monopoly, but putting the vast majority of open source projects on GitHub is just the expected course to take. I guess we'll repeat history once Microsoft lets the enshittification set in; maybe one day mobile OSes replace Windows and they're strapped for cash, who knows. But it's a centralised closed system owned by a corporation that absolutely "adores" FOSS.
I don't mind any particular project (such as this one) being on GitHub, and I can understand that Mozilla chose the easy path; they've got bigger problems, after all. But it's not as if there are no concerns with everyone and everything moving to GitHub.
GitLab? It was awful. Slow, and paying for that kind of experience felt like a bad joke. It's much better now but it was borderline unusable back in the day.
Or SourceForge, before Git was mainstream? Also terrible.
GitHub succeeded because it quickly established itself as a decent way to host Git - not because it was exceptional, but because the competition had abysmal UX.
Unlike other lock-in-prone services, moving a Git project is trivial. If GitHub loses its advantages due to enshittification, you just move. Case in point: Mozilla hopping on and off GitHub, as this article shows.
not really
just moving issue tracker and discussions is highly annoying
trying to get your users to move is likely hard and you will lose many
still, may be easy in comparison
A lot more contributions on GH, but the majority of them ignored guidelines and/or had low code quality and attention to detail. Just my anecdotal experience of course.
* contributors need to start somewhere, so even broken PRs can lead to having a valuable contributor if you're able to guide them.
no.
Somehow I think you're holding the difficulty scale backwards!
Being a good coder has absolutely no correlation to being good at using Mercurial.
No, but being a good coder is strongly anti-correlated with being unable or unwilling to figure out Mercurial.
I struggled to understand how the two interacted with each other, and I didn't know how to 'update my branch/pr' and I eventually just gave up.
* "the Open Source Project does not, and does not seek to, generate profit from the sale or licensing of the Open Source Software to which the Open Source Project relates, or the sale of any services related to such Open Source Software;"
* "The Open Source Project agrees not to (nor to authorize any third party to): ... (b) modify or create any derivative works of the GitLab Software ... (d) copy ... the GitLab Software"
That last part is especially problematic for everyone: in order to use GitLab.com for a FOSS project you have to renounce your right to modify (or authorize others to modify) or to copy the FOSS version of GitLab. This might have just been lawyers adding boilerplate without thinking it through, but that in itself is evidence of a major problem at GitLab.
So, GitLab is out. Aside from GitLab Mozilla could have chosen maybe Codeberg, but with the entire point being to remove barriers to new contributors it makes sense to go with the option that almost all such possible contributors are already on.
[0] https://handbook.gitlab.com/handbook/legal/opensource-agreem...
Their docs were also a mess back then, and made me recompile everything even when it wasn't needed.
https://github.com/torvalds/linux
// EDIT: Source: https://news.ycombinator.com/item?id=43970574
https://github.com/mozilla-firefox/firefox/blob/main/.github...
I get it from GitHub’s perspective, it’s a nudge to get people to accept the core premise of ”social coding” and encouraging user pressure for mirrored projects to accept GitHub as a contribution entrypoint. I’m impressed by their successes and would attribute some of that to forced socialization practices such as not allowing PRs to be disabled. I’ve grown to dislike it and become disillusioned by GitHub over the course of a long time, but I’m in awe of how well it has worked for them.
Now, both the desktop and the mobile version will be on Github, and the "issues" will stay on Bugzilla.
This will take advantage of both GitHub's good search and source browsing and Git's familiar system.
As a former Firefox and Thunderbird contributor, I have to say that I used local search instead of trying to find something on the mozilla-central website.
Of course, when you're actively developing software you search inside your IDE, but being able to find things easily on the website makes it more welcoming for potential new contributors.
On the contrary, I find Searchfox to be the best code navigation tool I've used. It has nice cross-language navigation features (like jumping from a .webidl interface definition to the C++ implementation), it has always-on blame (with more features too), and despite that it's really fast and feels extremely lightweight compared to the GitHub interface. I really wish I had this for more projects, and I'll be sad if it ever dies.
Then MXR got replaced by DXR, itself replaced in 2020 by Searchfox (introduced in 2016).
The source browsing has deteriorated severely relatively recently IME, to the point where it can't be called "good" anymore.
It now loads asynchronously (requiring js) and lazily, randomly breaks on shaky connections and in-page search is broken.
The recent issues/PRs revamp is also a pretty major step back. Try searching in PRs with all uBlock Origin lists enabled.
EDIT: skimming these comments, I like how none of the top comments are talking about the bigger story here, which is the move away from Mercurial to git; instead everyone is focusing on GitHub itself. This has essentially sealed hg away to obscurity forever. Do people not realise git is a program that runs on your computer and GitHub is just a service that uses git? Maybe this is an old-man gripe at this point, but I'm surprised at the lack of technical discussion around this.
To be frank, I know of no other major project that used hg. In fact, I think Firefox was how I learned about it in the first place, many years ago.
Here's a quick example: when I create a Mercurial repository, Mercurial doesn't say anything, while Git yells at me that it's using "master" as its branch name but that I can change it with a cryptic command. After a first commit for a file, Mercurial once again doesn't say anything, while Git gives me three lines of information, including the permissions for the file I just added. Editing and committing a file in Mercurial with "hg commit" yields (again) nothing, while typing "git commit" in Git lets me know that it knows there's a modification, but it won't go through until I "stage my change for commit".
Now, imagine you're a new user. Mercurial just did what I asked, and it even guessed that "hg commit" should mean "commit everything that's been modified". Git, on the other hand, has yelled at me about default branch names (what's a branch?!), file permissions, and bickered about me not staging my commit (what's a stage?!!). They both did the same thing but, for a new user, Mercurial did it in a friendlier way.
Trying out hg for the first time - "hg init; echo hello>world; hg commit" prints a "nothing changed" and I have no clue how to get it to commit my file! Whereas git says 'use "git add <file>..."', and, as that's already required for starting tracking a file in both hg and git, it's not entirely unreasonable that you'll need to do "add" upon modifications too.
So in hg you have to explicitly think about file tracking and get changes for free, whereas in git you have to explicitly think about changes and get tracking for free. Obviously I'm biased, but I think "I need to tell git what changes I want committed" is a nicer model than "I need to tell hg when it should realize a file has started existing"; the former is pretty uniformly annoying, whereas I imagine the latter quite often results in adding a file, forgetting to "hg add" it, and making a bunch of commits with changes in other files as the new file is integrated, but never actually committing the new file itself, with zero warnings.
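The two failure modes can be seen in a short transcript (a throwaway repo; the name/email config lines are placeholders, and this assumes a reasonably recent git):

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email you@example.com && git config user.name you
echo one > tracked && git add tracked && git commit -qm "add tracked"
echo two >> tracked                 # modify an already-tracked file
git commit -qm "update" || echo "git refused: the change was never staged"
git commit -aqm "update"            # -a stages all tracked changes first
# Mercurial, by contrast, would commit that change with a bare "hg commit",
# but would silently leave a brand-new file out until you "hg add" it.
```

So git's failure mode is a refused commit you notice immediately, while hg's is a missing file you may not notice for several commits.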
Git's staging/index, messy as it is (and with some utterly horrible naming), is extremely powerful, and I wouldn't accept any VCS without a sane simple equivalent. Extremely do not like that "hg commit -i", adding some parts manually, and deciding that I actually need to do something else before committing, loses all the interactive deciding I've done (maybe there's a way around that, but --help and "man hg" have zero useful info on interactive mode, not even what all the different (single-char..) actions are; granted, I don't really understand "git add -i" much either, and just use a GUI when necessary). In my git workflow I basically always have some changes that I won't want to commit in the next commit.
For a more time-appropriate critique, this post [1] from 2012 gives an overview of what working with Git felt like at the time when git was being popularized as an alternative to Subversion (including a frequent comment of "use Mercurial instead!"). It's also worth noting that git's error messages have become more helpful since - while the documentation for git-rebase used to be "Forward-port local commits to the updated upstream head", it now reads "Reapply commits on top of another base tip".
[1] https://stevebennett.me/2012/02/24/10-things-i-hate-about-gi...
Git certainly isn't anywhere close to the prettiest thing for ease-of-learning (and indeed used to be even worse), but Mercurial didn't seem particularly good either. Really for the common uses the difference is just needing to do a "git add ." before every commit, vs a "hg add ." before some.
All of my git usage has been on projects with ≤2 devs (including me; technically excluding a few largely-one-off OSS contributions of course), but I still use a good amount of local temp branches / stashes / rebasing to organize things quite often (but also have some projects where all I've ever done is "git add .; git commit -m whatever").
I use it for visibility and ease, that's all. Otherwise I personally dislike it.
GitHub also has a lot of features and authentication scopes tied to the whole org, which is pretty risky for an org as large as Mozilla.
Unfortunately often the cleaner option is to create a separate org, which is a pain to use (e.g. you log in to each separately, even if they share the same SSO, PATs have to be authorised on each one separately, etc).
In Gitlab, you would have had one instance or org for Mozilla, and a namespace for Firefox, another one for other stuff, etc.
It's like AWS accounts vs GCP projects. Yeah, there are ways around the organisational limitations, but the UX is still leaky.
Now it has "main" and "autoland", what are they? Which one is the equivalent of mozilla-central before?
The "new" git default branch name is 'main' and 'autoland' existed before next to 'mozilla-central' and is the one where commits usually appear first.
Commits land in autoland and get backed out if they cause test failures. That's merged to main ~twice per day when CI is happy
I've mostly encountered these branches/repos when checking commits linked to Bugzilla tickets, and I don't recall seeing "autoland" show up too much in those cases.
But I think hg support is going away. We hg enthusiasts at Mozilla are mostly fleeing to Jujutsu.
Hard to believe it's been 27 years. I remember when it was still in beta, and how exciting it was to have an open source alternative to Internet Explorer.
Good times!
On the other hand, the plethora of different self-hosted platforms with limited feature sets is a huge pain. Just finding the repo is often a frustrating exercise, and then trying to view, or worse, search the code without checking it out is often even more frustrating or straight out impossible.
Surely most open source projects have a link to their source code? Whether it's github, gitlab, sourcehut, or anything else?
But it’s a lot of work to prevent abuse, especially for resource intensive features when supporting unsigned-in use cases.
https://wiki.mozilla.org/ReleaseEngineering/DisposableProjec...
Fun to get a glimpse into someone's thought process while they were working.
Everything surrounding code: issues, CICD, etc, is obviously another story. But it's not a story that is answered by distributed git either. (though I would love a good issue tracking system that is done entirely inside git)
> Everything surrounding code: issues, CICD, etc, is obviously another story. But it's not a story that is answered by distributed git either. (though I would love a good issue tracking system that is done entirely inside git)
There is https://github.com/git-bug/git-bug - I would love it if people started to use it, even in a read-only way: use GitHub issues normally, but also have a bot that saves all comments to git-bug, so that I can read issues without an internet connection. Then, at a later date, make it so that people who file issues on git-bug also get the issue posted on GitHub, making a two-way bridge.
Then, optionally, at a later stage when almost everyone has migrated to git-bug, make the GitHub issues a read-only mirror of the git-bug issues. Probably not worth it: you lose drive-by comments from newcomers (who already have a GitHub account but have probably never heard of git-bug), raising the friction of reporting bugs.
The literal project we are discussing is just code. It's literally just code. It doesn't have issues, PRs are disabled as much as they can be (by a GitHub action that automatically closes all PRs with a note that code should be submitted elsewhere), and all "other stuff" is disabled.
Some big repos or organizations might be able to pull this off, but good luck having a small project and then directing users to go through all of those hoops to submit issues somewhere else, open PRs somewhere else, etc.
https://github.com/git-bug/git-bug/blob/master/doc/usage/thi...
I have not tried it.
You could, but generally people can't. They learn a set of narrow workflows and never explore beyond them. GitHub use translates into GitLab use, but not into general git use without a central repository.
> Everything surrounding code: issues, CICD, etc, is obviously another story. But it's not a story that is answered by distributed git either. (though I would love a good issue tracking system that is done entirely inside git)
Radicle offers one. CLI-based, too.
And tbh, that's how it should be for a version control system. Before git with its byzantine workflows and a thousand ways to do the same thing, version control (e.g. svn) was a thing that's just humming along invisibly in the background, something that you never had to 'learn' or even think about, much like the filesystem.
I don't need to know how a filesystem works internally to be able to use it.
And having a centralized store and history helps a lot to keep a version control system conceptually simple.
In git, working on your own branch is essential to not step on other people's feet and to get a clean history on a single main/dev branch (and tbf, git makes this easy for devs and text files). With a centralized version control system, both problems don't even exist in the first place.
When we did game development with a team of about 100 peeps (about 80 of those non-devs, and about 99% of the data under version control being in binary files) we had a very simple rule:
(1) do an update in the morning when you come to work, and (2) in the evening before you leave do a commit.
Everybody was working on the main branch all the time. The only times this broke was when the SVN server in the corner was running full and we either had to delete chunks of history (also very simple with svn), or get more memory and a bigger hard drive for the server.
Subversion also isn't some thing humming along invisibly in the background, it has its own quirks that you need to learn or you'll get stung.
Tbh, I really wonder where the bad reputation of svn is coming from. Git does some things better, especially for 'programmer-centric teams'. But it also does many things worse, especially in projects where the majority of data is large binary files (like in game development) - and it's not like git is any good either when it comes to merging binary data.
We used TortoiseSVN as UI which worked well both for devs and non-devs.
With this sort of setup, git would break down completely if it weren't for awkward hacks like git-lfs (which comes with its own share of problems).
The point is you CAN. Joe can in theory do it, and Steve can make an alternative piece of software to do it for Joe. In most other centralized places (like social media), you CANNOT. Joe cannot take his data off of Facebook and interact with it outside of the platform or move it to another platform.
If you happen to agree with it, then yeah, it's great. If you like to commit quick and dirty and then tidy it up by squashing into logically complete and self-consistent commits, too bad.
You might like git-bug:
This should be one of the very first links in the readme.
i rewrote the README with the goal of providing a clear overview of git-bug's features and why you might want to use it, and ensuring that for those who are more technically inclined, things like the data model, internal architecture, and more were easy to find under the documentation folder (whether you're browsing through the files directly, or landing on //doc:README.md, which links to the files and folders under //doc).
if you think that there is information missing from the README, or hard to find in the repository (either by browsing through it, or clicking the rather prominent links from the main README), i'd welcome any suggestions in the form of a PR.
The tag-line covers it pretty well I thought?
"git-bug is a standalone, distributed, offline-first issue management tool that embeds issues, comments, and more as objects in a git repository (not files!), enabling you to push and pull them to one or more remotes."
That tells you what the feature is - if you need/want a more technical overview you can still get from the `README` to `entity data model` in two clicks (Documentation > Data model).
Embrace, Extend..
(largely this is unfair, as plain git leaves much to be desired- but you can’t deny that the things surrounding git on github are very sticky).
However, if you were to compare something like Fossil with git-plus-GitHub, then again: no.
It would be a good call if the conversation (the comments are almost interchangeable at times, haha!) were simply about everyone using git for Firefox; something of a wild topic!
That's what Github is though, it's not about the code itself it's about all your project management being on Github, and once you move it, moving out isn't realistic.
The issue tracking can be a branch and then you just need a compatible UI. In fact some git front ends do exactly this.
CI/CD does already exist in git via githooks. And you're already better off using make/just/yarn/whatever for your scripts and relying as little on YAML as possible. It's just a pity that githooks require each user to set them up, so many people simply don't bother.
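For what it's worth, a minimal pre-commit hook looks like this (the "DO NOT COMMIT" marker check is a made-up example; hooks live in .git/hooks, which is exactly why they aren't cloned and each user has to install them by hand):

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email you@example.com && git config user.name you
# Install a hook that refuses commits whose staged diff still
# contains a leftover "DO NOT COMMIT" marker:
cat > .git/hooks/pre-commit <<'HOOK'
#!/bin/sh
if git diff --cached | grep -q 'DO NOT COMMIT'; then
    echo "pre-commit: 'DO NOT COMMIT' marker in staged changes" >&2
    exit 1
fi
HOOK
chmod +x .git/hooks/pre-commit
echo 'DO NOT COMMIT: debug hack' > wip.txt
git add wip.txt
git commit -qm "try it" || echo "commit blocked by the hook"
```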
That's how we started out.
There are several such solutions already. The problem is that none of them is popular enough to become a de facto standard. And, of course, centralized git providers like GitHub have a vested interest in keeping it this way, so they are unlikely to support any such solution even if it does become popular enough.
For the actual event we are commenting on, they have disabled all features other than code hosting and PRs.
It's very silly they have to do this, but at least they can I suppose.
Sad to see that Mozilla is becoming less and less what they promised to be once Google funding are depleting.
If you weren't connected to the internet, you couldn't do a thing. You couldn't check out. You couldn't commit. You couldn't create branches. The only thing on your computer was whatever you had checked out the last time you were connected to the server.
People talk about SVN, but it wasn't that common in 2005. None of the project hosting platforms (like SourceForge) supported SVN, they were all still offering CVS. If you wanted to use SVN, you had to set it up on your own server. (From memory, google code was the first to offer SVN project hosting in mid-2006). Not that SVN was much better than CVS. It was more polished, but shared all the same workflow flaws.
Before Git (and friends), nothing like pull-requests existed. If you wanted to collaborate with someone else, you either gave them an account on your CVS/SVN server (and then they could create a branch and commit their code), or they sent you patch files over email.
The informal email pull requests of git were an improvement... though you still needed to put your git repo somewhere public. Github and its web-based pull requests were absolutely genius. Click a button, fork the project, branch, hack, commit, push, and then create a formal "pull request". It was nothing like centralised project management systems before it. A complete breath of fresh air.
And it was actually part of git. Even back in 2005, git included a script, git request-pull, that generated these pull-request emails. I'm pretty sure people called these emails "pull requests" before GitHub came along.
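The command is still there today. A self-contained sketch (using "." as the repository URL so it runs against the local repo; a real request would point at your public repo's URL):

```shell
dir=$(mktemp -d) && cd "$dir" && git init -q
git config user.email you@example.com && git config user.name you
echo a > f && git add f && git commit -qm "base" && git tag v1.0
echo b >> f && git commit -aqm "feature work"
# Generate the pull-request email body: everything since v1.0,
# fetchable from the given URL, up to HEAD:
git request-pull v1.0 . HEAD
```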
2006 appears to be the year that SVN finally became somewhat mainstream, which is interesting because git was released in 2005. Github launched in 2008 and by 2009, everyone seemed to be abandoning SVN.
It feels like SVN was only really "mainstream" for about 3 years, maybe 5 at most; there was some early-adopter lead-up and then a long tail of repos refusing to switch to git.
Maybe if Git had native support for PRs and issues this wouldn't have happened. (And yes I'm aware of git send-email etc.)
Edit: ripgrep was just a test
More: https://github.blog/engineering/the-technology-behind-github...
It was instantaneous. But where do I go from there? I cannot navigate through the code. It shows me where I can find that string, but that's it. I also cannot look at blame to see when that line got edited.
Though thanks a lot for bringing this onto my radar.
Not only are the results incomplete, but it seems that once they went all in on training LLMs on the code they host, they made sure no one else could do the same easily, so now everything is madly rate-limited.
Every time I just clone and grep.
It's often useful. But sometimes you want to use other tools, like firing up your editor to explore.
And those updates are properly tracked by your version control, not done jankily by editing a commit and rebasing and force pushing.
Note we’re talking about the GitHub UI mostly. Pulling and merging a remote branch is a basic git operation, almost a primitive.
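In plain git terms, that primitive looks like this (a self-contained sketch with throwaway local repos standing in for the upstream; branch names are illustrative):

```shell
dir=$(mktemp -d) && cd "$dir"
git init -q upstream && cd upstream
git config user.email you@example.com && git config user.name you
echo base > f && git add f && git commit -qm "base"
git checkout -qb contributor-branch
echo extra > g && git add g && git commit -qm "contribution"
git checkout -q -
cd .. && git clone -q "$dir/upstream" mine && cd mine
git config user.email you@example.com && git config user.name you
# What the GitHub "merge pull request" button boils down to:
git fetch -q origin contributor-branch
git merge -q --no-ff -m "merge contribution" origin/contributor-branch
# (GitHub also exposes each PR as a plain ref you can fetch with
#  stock git: pull/<number>/head, e.g. pull/1234/head -- made-up number.)
```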
Didn't all this start with Linus getting into a spat with the BitKeeper dev, involving some sort of punitive measure in response to somebody making a reverse-engineered FOSS client? I don't remember the details and I'm sure I have at least half of them wrong, but that's easily one of the most disastrous decisions in the history of the software business, right up there with Valve turning down Minecraft and EA refusing to make sports games for the Sega Dreamcast. (That last one isn't as well known, but it led to Sega launching the 2K Sports brand, which outlasted the Dreamcast and eventually got sold to a different company, but otherwise still exists today and is still kicking EA's ass in basketball games.)
But there were already quite a handful of other distributed version control systems around by the time git showed up.
So if Linus hadn't written git, perhaps we would be using darcs these days. And then we'd be debating whether people are using darcs the way it was intended. Or bazaar or monotone or mercurial etc.
I don't think what the original authors of any one tool intended matters very much, when there were multiple implementations of the idea around.
It's a joke that the BitKeeper dev has two revision control systems named after him: Mercurial and Git.
And while NBA 2k destroyed NBA Live it took until 2009 for that to start happening (long after Sega ownership), mainly down to sliding standards in EA’s NBA Live titles and eventually some disastrous EA launches.
Everything is fully and completely explained, in terms which mean nothing.
(They ain't perfect, of course.)
"In astronomy, declination (abbreviated dec; symbol δ) is one of the two angles that locate a point on the celestial sphere in the equatorial coordinate system, the other being hour angle. The declination angle is measured north (positive) or south (negative) of the celestial equator, along the hour circle passing through the point in question."
Anyone who doesn't know what declination is, know from reading the introductory paragraph of this scientific Wikipedia article?
Anyone? no? :-)
I rest my case, m'lud.
For an object on the celestial sphere (a planet, a star, etc.), the declination angle is 0 at the celestial equator, 90 degrees at the north pole of the sphere, and -90 degrees at the south pole.
You also need another angle known as the "hour angle" to locate a point on the sphere. It doesn't explain what that is, but as can be seen on Wikipedia, you can easily click on that word to go to the entire page that explains what it is.
What don't you understand?
Once again, not so difficult to figure out even if you have no experience in the specific technical field of a Wikipedia article. So I have no idea what /u/casenmgreen's problem is.
Why should this be a metric one would want Wikipedia to meet? It's an encyclopedia, not an astronomy course.
Of course, the brilliance of Wikipedia is that if you think you can write a clearer intro, you can do so! You could even add it to the simple language version of the page - https://simple.wikipedia.org/wiki/Declination
If you push rewritten history to master, you're a git.
Conclusion: learn your tools.
Squashed commits are strictly worse than plain, non-fast-forwarded merges from rebased branches.
The thing is, we could have done better (and have been) since before git even existed.
It's not my favourite process, but...
But GH's PR process is broken anyways. I miss Gerritt.
Also, git stores the files in a smarter way, so the repository size won't explode the way versioned zip files do.
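Concretely, git addresses every object by a hash of its content, so identical content is stored exactly once no matter how many commits or paths reference it:

```shell
# The object ID is a pure function of the content; a file that is
# unchanged across a thousand commits is stored as a single blob.
echo hello | git hash-object --stdin
# → ce013625030ba8dba906f756967f9e9ca394464a  (the same on any machine)
```

Packfiles then delta-compress the remaining objects, which is why a repo's full history is usually far smaller than n copies of the tree.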
Or previous versions. Plural. Yes.
Well, that's one half of git. The other half is tooling to work with the snapshots and their history, eg to perform merges.
In the Linux kernel, the project management is done via email (which is also just a centralized webserver in the end), so what's the problem?
From what I use, Composer and brew rely on GitHub to work.
And: even though the source of truth is centralized on GitHub for many projects, git still benefits from being distributed. It's the basis for "forks" on GitHub and for the way people develop: cloning locally, committing locally, and preparing the change set for review. In the CVS/SVN days one had to commit to the central branch much sooner and more directly.
Then later on for the PR, you can sanitise the whole thing for review.
In the bad old days, you only got the latter. (Unless you manually set up an unrelated repository for the former yourself.)
That's the default. But git would work just as well, if by default it was only cloning master, or even only the last few commits from master instead of the full history.
You can get that behaviour today, with some options. But we can imagine an alternate universe where the defaults were different.
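A runnable sketch of those options against a throwaway local repo (a real invocation would use a remote URL instead of file://):

```shell
dir=$(mktemp -d) && cd "$dir"
git init -q src && cd src
git config user.email you@example.com && git config user.name you
echo v1 > file && git add file && git commit -qm "first"
echo v2 > file && git commit -aqm "second"
cd ..
# Fetch only the newest commit of a single branch:
git clone -q --depth 1 --single-branch "file://$dir/src" shallow
git -C shallow rev-list --count HEAD    # prints 1: history was truncated
# (With server support, --filter=blob:none instead fetches file
#  contents lazily while keeping the full commit graph.)
```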
Most of what you say, eg about not needing lockfiles and being able to make independent offline commits, still applies.
Funny enough, this is more or less exactly the architecture some of those Haskell-weirdos would come up with. It's essentially a copy-on-write filesystem.
(Haskell people are weirdos compared to good old fashioned operating system people who use C as God intended.)
The general issue that git has is making them interact with each other, I would love for git to get distributed issues, and a nice client UI that is actually graphical and usable by non-terminal users.
There were some attempts to make this distributed and discoverable via similar seed architectures like a DHT. For example, radicle comes to mind.
But staying in sync with hundreds of remotes and hundreds of branches is generally not what git is good at. All UIs aren't made for this.
I'm pointing this out because I am still trying to build a UI for this [1] which turned out to be much more painful than expected initially.
I am contributing to a few open source projects on GitHub here and there though.
Git is by far the most widely used VCS. The majority of code hosting services use it.
My clients don't use GitHub.
Most of my clients do use Git. (some use other VCS)
What made you think I thought differently?
I store my code in a completely distributed fashion, often in several places on different local devices (laptop, build server, backup, etc) not to mention on remote systems. I use github and gitlab for backup and distribution purposes, as well as alternative ways people can share code with me (other than sending patch emails), and other people use git to get and collaborate on my work.
distributed version control system doesn't mean distributed storage magically happens. You still need to store your code on storage you trust at some level. The distributed in DVCS means that collaboration and change management is distributed. All version control operations can be performed on your own copy of a tree with no other involvement. Person A can collaborate with person B, then person B can collaborate with person C without person A being in the loop, etc.
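Setting up that kind of redundancy is cheap; a sketch using local bare repos as stand-ins for GitHub, GitLab, or a backup server:

```shell
dir=$(mktemp -d) && cd "$dir"
git init -q --bare mirror1.git && git init -q --bare mirror2.git
git init -q work && cd work
git config user.email you@example.com && git config user.name you
echo hello > f && git add f && git commit -qm "first"
# The same history can be pushed to any number of independent remotes:
git remote add mirror1 "$dir/mirror1.git"
git remote add mirror2 "$dir/mirror2.git"
for r in mirror1 mirror2; do
    git push -q --all "$r"
done
```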
Gitorious was chosen for the meego/maemo team for example.
And I am one of the people saddened by the convergence on a single platform.
But you can't deny, it's always been pretty great.
Hardly surprising, though, social networks are prone to centralization (due to network effect), and GitHub & its competitors (anything that offers git repos + issue tracking + fork-ability, really) are social networks.
Also, GitHub offering private repos for free right after they got acquired by Microsoft helped a lot. A lot of people, myself included, were using gitlab.com for private repos at that time
People who are very insistent on distributed solutions never seem to understand that the economic, social and organizational reasons for division of labor, hierarchy and centralization didn't suddenly go away.
It's not like the hairy C++ code base of Firefox will suddenly become less scary and attract more open source developers simply because it's moving to GitHub.
Sure, there would be local copies everywhere, but for a distributed version control system, it's pretty centralized at GitHub.
Everything else... as the original comment said, is pretty centralized for a decentralized system.
This is what I don't get ... what is the alternative to GitHub?
And as I can see here, no one else did ...
It won't be free software and, likely, it will be Microsoft.
Bad PRs all around, with just a constant stream of drive by "why no merge?!?!?!" comments.
Even before this Mozilla almost certainly used hundreds of closed source tools, including things like Slack, Excel, Anaplan, Workday, etc.
Issues are stored in git-bug and automatically synced. GitHub is the only viable option, but you can keep the others as mirrors for when GitHub chooses to strike you.
They should restructure instead: hire people who actually want to work on software, and not use the corporation and foundation around it as a platform for their... peculiar "endeavours". But I doubt that's going to happen - the flow of Google cash, and of money from all those naive people who think supporting Mozilla directly contributes to Firefox, is too good, it seems. But then it's understandable that they do this - the Google money tap can get twisted shut.
Think you might be onto something: with the incoming end of the Google cash flow, Firefox may be in discussions with Bing, and using Microsoft servers could be part of the agreement.
Perhaps Microsoft offered to pick up the tab that Google has been paying but which is now imperiled, or at least to lend some sort of financial support, and Firefox cares more about paying its bills than about open source.
The bad news is that their build system is extremely hand-rolled, so if it works for you, count yourself lucky, because when it doesn't you're in for four hours of Python hell.
The killer feature is colocation of features on a single forge. Combined with a generous free tier, it's the Windows XP of the ecosystem: everybody has it, everybody knows it, almost nobody knows anything else.
As for PRs: I'm sure Mozilla welcomes contributions, but accepting GitHub PRs is going to be a recipe for thousands of low-value drive-by commits, which will require a lot of triage.
I agree it is rather basic but I don't see how it's hard to navigate.
> accepting GitHub PRs is going to be a recipe for thousands of low-value drive-by commits, which will require a lot of triage.
I don't think that really happens based on what I've seen of other huge projects on GitHub.
Jira and bugzilla are vastly superior to GH Issues.
Jira doesn't even deserve 10% of the hate it gets. Most of what makes Jira awful is the people using it. Bugzilla is getting a bit long in the tooth, but at least it's still free and open source.
I think you're in the tiny minority with that opinion.
> Most of what makes Jira awful is the people using it.
Not even close. Yes, people aren't good at administering it, but there are soooo many reasons that it's shit apart from that. Not least the hilarious slowness. Jira Cloud is so slow that not even Atlassian use it.
Also I don't think you can just say "you're holding it wrong". Part of the reason people screw up Jira configs so much is that it makes it so easy to screw them up. You can't separate the two.
> but at least it's still free and open source.
Just being open source doesn't make something good.
I'm not. The whole "I hate Jira" thing is a meme among a very vocal minority of tech enthusiasts. They don't have tens of millions of users because Jira is awful. The reason so many people cry about it (apart from the meme factor) is that people conflate Jira with their team's failed approach to scrum.
Sure, it has rough edges, and sure, Atlassian as a company sucks. I have a bug report open on their Jira for some 20 years and I don't think it will ever get fixed. And yes, Jira Cloud is very slow, it's ridiculous. And in spite of that, GH Issues is still objectively worse. It's so far behind in terms of features that it isn't even a fair comparison.
Strongly agree with this. The "Jira is bad" meme is way overblown, and is driven primarily by bad choices individual Jira administrators have made. Yes, you can turn Jira into a hellscape. You can also turn any ticket system into a hellscape if it gives you enough ability to customize it. The problem isn't Jira, the problem is companies who have a terrible workflow and torture Jira into a shape that fits their workflow.
It absolutely isn't. My colleagues are not very vocal tech enthusiasts and they hate it too.
> They don't have tens of millions of users because Jira is awful.
They have tens of millions of users because Jira isn't awful for the people paying for it. But those people aren't actually using it to create & read bugs. They're looking at pretty burndown charts and marveling at the number of features it has.
It's classic enterprise software - it doesn't need to be good because it isn't sold to people actually using it.
github.com broke noscript/basic (X)HTML interop for most if not all core functions (which worked before). The issue system was broken not that long ago.
And one of the projects that should care about, even enforce, such interop is moving to Microsoft GitHub...
The internet world is a wild toxic beast.
mozilla-central has a LOT of tests -- each push burns a lot of compute hours.
Some associated projects are using more GitHub stuff.
I was thinking something different: I wonder whether Mozilla considered GitLab or Codeberg, which are the other two I know that are popular with open source projects that don't trust GitHub since it sold out to Microsoft.
(FWIW, Microsoft has been relatively gentle or subtle with GitHub, for whatever reason. Though presumably MS will backstab eventually. And you can debate whether that's already started, such as with pushing "AI" that launders open source software copyrights, and offering to indemnify users for violations. But I'd guess that a project would be pragmatically fine at least near term going with GitHub, though they're not setting a great example.)
(Arguing may come next, but first comes communicating.)
Here is my opinion on Mozilla and their direction: in the previous decade (or a little more), their primary source of income was Google, paying for default search engine placement in Firefox. Annually, that brought in about half a billion US dollars (I don't have the exact amounts, but let's assume that over that decade they earned a few billion).
At the same time, Firefox continuously lost market share and with that, the power to steer the web in the direction of privacy (1) and open standards (2).
(1) Instead, they've acquired Anonym, an ad business which touts itself as interested in users' privacy. Color me skeptical on that one.
(2) It's all Chrome and iOS. Firefox is a laggard.
So, what has Mozilla done with the billions? Have they invested it in Firefox? MDN perhaps? Are they the web champions they have been in 2010s?
You can still argue that these points are shallow. My original comment was motivated by my disappointment in Mozilla's lost opportunity to be a fighter for an open web. Instead they have sold their soul to the highest (and only) bidder.
This is fair but not sufficient to declare "the last thing they want is good data protection laws".
> it's all Chrome and iOS.
> So, what has Mozilla done with the billions?
This is also fair but has nothing to do with the data protection laws.
> Instead they have sold their soul to the highest (and only) bidder.
It seems they can't continue doing this, given the ongoing legal actions against Google. So let's see.
> This is also fair but...
Ok, so we can agree that my assessment is fair, but it remains to be seen how the data protection story pans out.
>> Instead they have sold their soul to the highest (and only) bidder.
> It seems they can't continue doing this, given the ongoing legal actions against Google. So let's see.
Just to be clear: I think that Mozilla should have taken that money (and possibly more), *invested* it in Firefox, and built a rainy day fund (rainy days are coming soon). Instead, they spent it on whatever and did layoffs.
My point is that your assessment is largely irrelevant to your original message about the data protection. It doesn't really support it.
"It depends", as always, but Codeberg lacks features (which your use-case may or may not need), uptime/performance (which may be crucial or inconsequential to your use-case), familiarity (which may deter devs), integration (which may be time-consuming to build yourself or unnecessary for your case), etc.
It's a pet peeve and personal frustration of mine. "Do one thing and do it well" is too often forgotten in this corner of open source. You are building a free alternative to Slack? Spend every hour on building the free alternative to Slack, not on self-hosting your GitLab, operating your CI/CD worker clusters, or debugging your wiki servers.
That was the point of the (obviously ill-received) joke.
No serious engineer will read that line and think, "wow, how malicious of mozilla, they just made the move to close all bug reports at once".
Never underestimate the cynicism of HN commenters.