I have been developing Hekate, a ZK engine written in Rust that uses a zero-copy streaming model and a hybrid tiled evaluator. To test its limits, I ran a head-to-head benchmark against Binius64 on an Apple M3 Max laptop, with Keccak-256 as the workload.
The results highlight a significant architectural divergence:
At 2^15 rows: Binius64 is faster (147ms vs 202ms), but Hekate is already ~10x more memory-efficient (44MB vs ~400MB).
At 2^20 rows: Binius64 hits 72GB of RAM and enters swap hell on a laptop. Hekate proves the same workload in 4.74s using just 1.4GB.
At 2^24 rows (16.7M steps): Hekate finishes in 88s with a peak RAM of 21.5GB. Binius64 cannot complete the run on this hardware (OOM/swap).
The core difference is "Materialization vs. Streaming". While many engines materialize and copy massive polynomials in RAM during Sumcheck and PCS operations, Hekate streams them through the CPU cache in tiles. This shifts the unit economics of ZK proving from $2.00/hour high-memory cloud instances to $0.10/hour commodity hardware or local edge devices.
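To make the contrast concrete, here is a minimal Rust sketch of the idea (illustrative only, not Hekate's actual API): the materialized version allocates a full column before reducing it, while the tiled version reuses one cache-sized buffer. `gen`, `TILE`, and the function names are assumptions for the sake of the example.

```rust
// Minimal sketch of materialization vs. tiled streaming (not Hekate's code).
// `gen` stands in for whatever produces trace values on demand; TILE is
// sized so one buffer fits comfortably in L1/L2.
const TILE: usize = 4096; // 4096 * 8 bytes = 32 KiB per tile

// Materializing approach: allocate the whole column, then reduce it.
// Peak memory is O(rows).
fn sum_materialized(rows: usize, gen: impl Fn(usize) -> u64) -> u64 {
    let column: Vec<u64> = (0..rows).map(gen).collect();
    column.iter().copied().fold(0, u64::wrapping_add)
}

// Streaming approach: regenerate each tile into one reusable buffer and
// fold it into the accumulator. Peak memory is O(TILE), independent of rows.
fn sum_tiled(rows: usize, gen: impl Fn(usize) -> u64) -> u64 {
    let mut tile = vec![0u64; TILE];
    let mut acc = 0u64;
    for base in (0..rows).step_by(TILE) {
        let len = TILE.min(rows - base);
        for (i, slot) in tile[..len].iter_mut().enumerate() {
            *slot = gen(base + i); // stream the next tile in place
        }
        acc = tile[..len].iter().copied().fold(acc, u64::wrapping_add);
    }
    acc
}
```

Both functions compute the same reduction; the only difference is the peak allocation, which is exactly the difference the benchmark numbers above are measuring.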
I am looking for feedback from the community, especially those working on binary fields, GKR, and memory-constrained SNARK/STARK implementations.
There is a massive, widening gap between academic brilliance and silicon-level implementation. You can write the most elegant paper in the world, but if your prover requires 100GB of RAM to execute a basic trace, you haven't built a protocol; you've built a research project that collapses under its own weight.
I don't have "strategic planning" committees or HR-mandated consensus. If Hekate's core doesn't meet my performance standards, I rewrite it in 48 hours. This agility is a weapon. I want to prove that a single engineer, driven by physics and zero-copy principles, can wreck the unit economics of a multi-million dollar venture-backed startup.
Disrupting inefficient financial models is more than fun; it's necessary. The current "safe" hiring meta (US-only, HR-compliant, resume-padded candidates) is a strategic failure. While industry leaders focus on compliance, state-sponsored actors like Lazarus are eating their lunch.
You don't need "safe" candidates. You need predators. You need the difficult, inconvenient outliers who don't need a visa to outcode your entire department. Hekate is a reminder that in deep-tech, capital is noise, but performance is the only signal that matters.
Look, if your code is better, just say it's better. But this kind of LinkedIn-slop conspiracist virtue signaling isn't a good look. It's fine to believe that, but you should never say it out loud.
A "row" in this context is a single step of the Keccak-f[1600] permutation within the AIR (Algebraic Intermediate Representation) table. Most engines materialize this entire table in RAM before proving. At 2^24 rows, that’s where you hit the "Memory Wall" and your cloud bill goes parabolic.
Hekate is "better" because it uses a Tiled Evaluator to stream these rows through the CPU cache (L1/L2) instead of saturating the memory bus. While Binius64 hits 72GB RAM on 2^20 rows, Hekate stays at 21.5GB for 16x the workload (2^24).
The "committees" comment refers to the gap between academic theory and hardware-aware implementation. One prioritizes papers; the other prioritizes cache-locality. Most well-funded teams choose the easy path (more RAM, more AWS credits) over the hard path (cache-aware engineering).
If you want to talk shop, tell me how you'd handle GPA Keys computation at 2^24 scale without a zero-copy model. I’m genuinely curious.
In Hekate's Keccak AIR, the relationship is ~25 trace rows per 1 Keccak-f[1600] permutation.
2^24 rows = the raw height of the execution trace matrix.
~671k permutations = the actual cryptographic workload (equivalent to hashing ~90MB of data).
The benchmark compares the cost to prove the same cryptographic work, regardless of internal AIR row mapping.
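Those two numbers can be sanity-checked from the constants in this thread plus the Keccak-256 sponge rate (1088 bits = 136 bytes per absorbed block):

```rust
// Sanity check of the figures above, using the ~25 rows/permutation mapping
// stated in this thread and the Keccak-256 sponge rate (136 bytes).
const ROWS: u64 = 1 << 24;      // 16_777_216 trace rows
const ROWS_PER_PERM: u64 = 25;  // per the AIR mapping above
const RATE_BYTES: u64 = 136;    // Keccak-256 rate

fn main() {
    let perms = ROWS / ROWS_PER_PERM; // 671_088 permutations (~671k)
    let bytes = perms * RATE_BYTES;   // 91_267_968 bytes (~90 MB)
    println!("{perms} permutations ≈ {} MB hashed", bytes / 1_000_000);
}
```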
THE MANIFESTO: https://github.com/oumuamua-corp/hekate