Your package managers — pip, npm, docker, cargo, helm, go, all of them — talk directly to it using their native protocols. Security scanning with Trivy, Grype, and OpenSCAP is built in, with a policy engine that can quarantine bad artifacts before they hit your builds. And if you need a format it doesn't support yet, there's a WASM plugin system so you can add your own without forking the backend.
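To make "native protocols" concrete, here's roughly what pointing a few clients at it looks like (the hostname and repository paths below are placeholders; the exact URL layout depends on how you name your repos):

pip config set global.index-url https://registry.example.com/pypi/my-repo/simple/
npm config set registry https://registry.example.com/npm/my-repo/
docker login registry.example.com
helm repo add my-repo https://registry.example.com/helm/my-repo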
Why I built it:
Part of what pulled me into computers in the first place was open source. I grew up poor in New Orleans, and the only hardware I had access to in the early 2000s was some Compaq Pentium IIs my dad brought home when his work was tossing them out. I put Linux on them, and it ran circles around Windows 2000 and Millennium on that low-end hardware. That experience taught me that the best software is open for everyone to see and use, and actually runs well on whatever you've got.
Fast forward to today, and I see the same pattern everywhere: GitLab, JFrog, Harbor, and others ship a limited "community" edition and then hide the features teams actually need behind some paywall. I get it — paychecks have to come from somewhere. But I wanted to prove that a fully-featured artifact registry could exist as genuinely open-source software. Every feature. No exceptions.
The specific features came from real pain points. Artifactory's search is painfully slow — that's why I integrated Meilisearch. Security scanning that doesn't require a separate enterprise license was another big one. And I wanted replication that didn't need a central coordinator — so I built a peer mesh where any node can replicate to any other node. I haven't deployed this at work yet — right now I'm running it at home for my personal projects — but I'd love to see it tested at scale, and that's a big part of why I'm sharing it here.
The AI story (I'm going to be honest about this):
I built this in about three weeks using Claude Code. I know a lot of you will say this is probably vibe coding garbage — but if that's the case, it's an impressive pile of vibe coding garbage. Go look at the codebase. The backend is ~80% Rust with 429 unit tests, 33 PostgreSQL migrations, a layered architecture, and a full CI/CD pipeline with E2E tests, stress testing, and failure injection.
AI didn't make the design decisions for me. I still had to design the WASM plugin system, figure out how the scanning engines complement each other, and architect the mesh replication. Years of domain knowledge drove the design — AI just let me build it way faster. I'm floored at what these tools make possible for a tinkerer and security nerd like me.
Tech stack: Rust on Axum, PostgreSQL 16, Meilisearch, Trivy + Grype + OpenSCAP, Wasmtime WASM plugins (hot-reloadable), mesh replication with chunked transfers. Frontend is Next.js 15 plus native Swift (iOS/macOS) and Kotlin (Android) apps. OpenAPI 3.1 spec with auto-generated TypeScript and Rust SDKs.
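For example, having the spec published means you can regenerate clients yourself; this is just an illustration with openapi-generator, and the spec filename and output paths here are placeholders:

openapi-generator-cli generate -i openapi.yaml -g typescript-fetch -o sdk/typescript
openapi-generator-cli generate -i openapi.yaml -g rust -o sdk/rust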
Try it:
git clone https://github.com/artifact-keeper/artifact-keeper.git
cd artifact-keeper
docker compose up -d
Then visit http://localhost:30080
Live demo: https://demo.artifactkeeper.com
Docs: https://artifactkeeper.com/docs/
I'd love any feedback — what you think of the approach, what you'd want to see, what you hate about Artifactory or Nexus that you wish someone would just fix. It doesn't have to be a PR. Open an issue, start a discussion, or just tell me here.
Now that you've implemented it, was there a reason you didn't go for such an approach, so you'd have less to worry about as someone hosting something like this?
On the other hand, it also shows that it took three weeks, so why should I use this instead of building a custom toolchain myself, optimised for what I need and actually use, trimming the 45+ formats down to the 5 or so that matter to my project? It raises the question: is 'enterprise' software doomed in favour of a proliferation of custom-built services where everybody has something unique, or is the real value in the 'support' packages and SLAs? Will devs adopt this and put 'Artifact Keeper' on their CV, or will they put 'built an artifact toolchain with Claude'?
But then again, kudos to you for building something that can (and probably should) eat the lunch of the enterprise-grade tools that are simply unaffordable to small businesses, individual contractors, and underfunded teams. Truth be told, I'm not going to build my own, so this is certainly something I want to put in a sandbox and try out. It's also inspirational, and it may finally convince me to give Claude a fair go if it can be guided to produce high-quality output.
I have been playing with the idea of using a single git repository to host them: Java packages as an Ivy repository, and JavaScript packages as simply the contents of node_modules.
Does anybody do something similar?
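Roughly what I have in mind, as a sketch only (the repo URL and paths are made up, and the Ivy side would still need a resolver defined in ivysettings.xml):

git clone https://github.com/your-org/package-repo.git ~/package-repo
# Java: point an Ivy filesystem resolver at ~/package-repo/ivy
# JavaScript: reuse the committed node_modules directly
ln -s ~/package-repo/node_modules node_modules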
Part of the reason we pay the big license fee is so we have someone to turn to when it inevitably breaks because we’ve used it in a way nobody has before. In Jan last year we were using 30TB of artifact storage in S3. That’s 140TB today.
Where do you get your CVE data? Would built artifacts have their CVEs updated after the fact? Do you have blocking policies on artifacts based on CVEs, licenses, artifact age, etc.?
Edit: the project, if anyone reading this is interested: http://github.com/asfaload/asfaload (looking for feedback!)