Show HN: C discrete event simulation with stackful coroutines runs 45x faster than SimPy
Hi all,

I have built Cimba, a multithreaded discrete event simulation library in C.

Cimba uses POSIX pthread multithreading for parallel execution of multiple simulation trials, while coroutines provide concurrency inside each simulated trial universe. The simulated processes are based on asymmetric stackful coroutines with the context switching hand-coded in assembly.

The stackful coroutines make it natural to express agentic behavior by conceptually placing oneself "inside" that process and describing what it does. A process can run in an infinite loop or just act as a one-shot customer passing through the system, yielding and resuming execution from any level of its call stack, acting both as an active agent and a passive object as needed. This is inspired by my own experience programming in Simula67, many moons ago, where I found the coroutines more important than the deservedly famous object-orientation.
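
To make that concrete, here is a tiny illustration of the key property: yielding from inside a nested call and resuming exactly there. It uses plain POSIX ucontext and made-up names rather than Cimba's API (Cimba does the switching in hand-written assembly instead), so treat it as a sketch of the programming style, not of the library:

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t scheduler_ctx, customer_ctx;

    static void wait_for_service(void)                /* nested helper */
    {
        puts("customer: queueing for service");
        swapcontext(&customer_ctx, &scheduler_ctx);   /* yield from call depth 2 */
        puts("customer: service done");
    }

    static void customer(void)                        /* one-shot process body */
    {
        puts("customer: arrives");
        wait_for_service();
        puts("customer: leaves");
    }

    int main(void)
    {
        static char stack[64 * 1024];

        getcontext(&customer_ctx);
        customer_ctx.uc_stack.ss_sp = stack;
        customer_ctx.uc_stack.ss_size = sizeof stack;
        customer_ctx.uc_link = &scheduler_ctx;        /* where to go when it finishes */
        makecontext(&customer_ctx, customer, 0);

        swapcontext(&scheduler_ctx, &customer_ctx);   /* run until the first yield */
        puts("scheduler: service event fires, resuming customer");
        swapcontext(&scheduler_ctx, &customer_ctx);   /* resumes inside wait_for_service() */
        return 0;
    }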

Cimba turned out to run really fast. In a simple benchmark, 100 trials of an M/M/1 queue run for one million time units each, it ran 45 times faster than an equivalent model built with SimPy plus Python multiprocessing, cutting the running time by 97.8%. Cimba even processed more simulated events per second on a single CPU core than SimPy could on all 64 cores.

The speed is not only due to the efficient coroutines. Other parts are also designed for speed, such as a hash-heap event queue (binary heap plus Fibonacci hash map), fast random number generators and distributions, memory pools for frequently used object types, and so on.
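
As a rough illustration of the hash-heap idea (a simplified sketch, not the actual Cimba data structure; collision handling and resizing are omitted): a binary heap ordered on event time, plus a Fibonacci-hashed map from event id to heap slot, so a pending event can be found without a linear scan:

    #include <stdint.h>
    #include <stddef.h>

    #define TABLE_BITS 10u                        /* 1024 buckets, illustrative */

    /* Fibonacci hashing: multiply by 2^64 / golden ratio, keep the top bits. */
    static inline uint64_t fib_hash(uint64_t id)
    {
        return (id * 11400714819323198485ULL) >> (64 - TABLE_BITS);
    }

    struct event {
        double   time;                            /* heap ordering key */
        uint64_t id;                              /* hash lookup key */
    };

    struct hash_heap {
        struct event *heap;                       /* array-based binary min-heap */
        size_t        count;
        size_t        slot_of[1u << TABLE_BITS];  /* event id bucket -> heap slot */
    };

    /* Restore the heap property after inserting at index i, keeping the
       id -> slot map in sync as events move around. */
    static void sift_up(struct hash_heap *hh, size_t i)
    {
        struct event e = hh->heap[i];
        while (i > 0) {
            size_t parent = (i - 1) / 2;
            if (hh->heap[parent].time <= e.time)
                break;
            hh->heap[i] = hh->heap[parent];
            hh->slot_of[fib_hash(hh->heap[i].id)] = i;
            i = parent;
        }
        hh->heap[i] = e;
        hh->slot_of[fib_hash(e.id)] = i;
    }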

The initial implementation supports the AMD64/x86-64 architecture on Linux and Windows. I plan to target Apple Silicon next, then probably other ARM platforms.

I believe this may interest the HN community. I would appreciate your views on both the API and the code. Any thoughts on future target architectures to consider?

Docs: https://cimba.readthedocs.io/en/latest/

Repo: https://github.com/ambonvik/cimba

jerf · 1 hour ago
While that speed increase is real, of course, you're really just looking at the general speed delta between Python and C there. To be honest I'm a bit surprised you didn't get another factor of 2 or 3.

"Cimba even processed more simulated events per second on a single CPU core than SimPy could do on all 64 cores"

One of the reasons I don't care in the slightest about Python "fixing" the GIL. When your language is already running at a speed where a compiled language on a single core can quite reasonably be expected to outdo your performance on 32 or 64 cores, who really cares if removing the GIL lets me get twice the speed of an unthreaded Python program by running on 8 cores? If speed was important, you shouldn't have been using pure Python.

(And let me underline that pure in "pure Python". There are many ways to be in the Python ecosystem but not be running Python. Those all have their own complicated cost/benefit tradeoffs on speed ranging all over the board. I'm talking about pure Python here.)

Good point. The profiler tells me that the context switch between coroutines is the most time-consuming part, even though I tried to keep it as light as possible, so I guess the explanation for "only" getting a 45x speed improvement rather than 100x is that a significant part of the time is spent moving register contents to and from memory.

Any ideas for how to speed up the context switches would be welcome, of course.

Didn't read the code yet, but stuff like this tends to be brittle. Do you do something clever around stack overflow or function return overwrite, or would that just mess up all coroutines using the same stack?
Each coroutine runs on its own stack. They are fixed-size stacks, at least for now, so that could be a weak point, but I place some sentinel values at the end to try to catch an overflow in an assert() instead of just letting it crash. I did not think it worth the effort and speed penalty to implement growing stacks yet. However, I do catch any coroutine function returns safely instead of letting them fall off the end of their stack.
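
Roughly the idea (a simplified sketch, not the actual Cimba code; names and sizes made up):

    #include <assert.h>
    #include <stdint.h>

    #define STACK_GUARD_WORDS 4
    #define STACK_SENTINEL    0xDEADC0DEDEADC0DEULL

    /* stack_base points at the lowest address of the allocation, i.e. the
       end the stack grows towards on x86-64. */
    static void stack_guard_init(uint64_t *stack_base)
    {
        for (int i = 0; i < STACK_GUARD_WORDS; i++)
            stack_base[i] = STACK_SENTINEL;
    }

    /* Checked on every switch: an overflow trips the assert instead of
       silently corrupting whatever sits below the stack. */
    static void stack_guard_check(const uint64_t *stack_base)
    {
        for (int i = 0; i < STACK_GUARD_WORDS; i++)
            assert(stack_base[i] == STACK_SENTINEL && "coroutine stack overflow");
    }
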
Looks really cool and I'm going to take a closer look tonight!

How do you do the context switching between coroutines? getcontext/setcontext, or something more architecture specific? I'm currently working on some stackful coroutine stuff and the swapcontext calls actually take a fair amount of time, so I'm planning on writing a custom one that doesn't preserve unused bits (signal mask and FPU state). So I'm curious about your findings there

Hi, it is hand-coded assembly. Pushing all necessary registers to the stack (including GS on Windows), swapping the stack pointer to/from memory, popping the registers, and off we go on the other stack. I save FPU flags, but not more FPU state than necessary (which again is a whole lot more on Windows than on Linux).
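
For the Linux/System V side, a simplified sketch of the kind of switch I mean (not the exact Cimba code; the FPU control flags mentioned above and all the Windows extras are left out):

    /* void ctx_swap(void **save_rsp, void *load_rsp);
       Saves the callee-saved GPRs on the current stack, stores the stack
       pointer through save_rsp, loads load_rsp as the new stack pointer and
       restores from there. Everything else is caller-saved, so the compiler
       takes care of it around the call. */
    __asm__(
        ".globl ctx_swap\n"
        ".type  ctx_swap, @function\n"
        "ctx_swap:\n"
        "    pushq %rbp\n"
        "    pushq %rbx\n"
        "    pushq %r12\n"
        "    pushq %r13\n"
        "    pushq %r14\n"
        "    pushq %r15\n"
        "    movq  %rsp, (%rdi)\n"   /* save old stack pointer */
        "    movq  %rsi, %rsp\n"     /* switch to the other stack */
        "    popq  %r15\n"
        "    popq  %r14\n"
        "    popq  %r13\n"
        "    popq  %r12\n"
        "    popq  %rbx\n"
        "    popq  %rbp\n"
        "    ret\n");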

Others have done this elsewhere, of course. There are links/references to several other examples in the code. I mention two in particular in the NOTICE file, not because I copied their code, but because I read it very closely and followed the outline of their examples. It would probably have taken me forever to figure out the Windows TIB on my own.

What I think is pretty cool (biased as I am) in my implementation is the "trampoline" that launches the coroutine function and waits silently in case it returns. If it does, it is intercepted and the proper coroutine exit() function gets called.

Interesting. How does the trampoline work?

I'm wondering whether we could further decrease the overhead of the switch on GCC/clang by marking the push function with `__attribute__((preserve_none))`. Then among GPRs we only need to save the base and stack pointers, and the callers will only save what they need to

It is an assembly function that does not get called from anywhere. I pre-load the stack image with its intended register content from C, including the trampoline function address as the "return address". On the first transfer to the newly created coroutine, that address gets loaded, so execution lands in the trampoline, which in turn calls the coroutine function that suddenly sits in one of its registers along with its arguments. If the coroutine function ever returns, that just continues the trampoline, which proceeds to call the coroutine_exit() function, whose address also just happens to be stored in another handy register.

https://github.com/ambonvik/cimba/blob/main/src/port/x86-64/...

https://github.com/ambonvik/cimba/blob/main/src/port/x86-64/...
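
In rough outline, leaving out alignment details and error handling, the pre-loaded stack image and the trampoline look something like this (a simplified sketch, not the code behind those links; it assumes a switch routine that pops r15, r14, r13, r12, rbx, rbp and then executes ret):

    #include <stdint.h>

    extern void trampoline(void);        /* assembly stub below */
    extern void coroutine_exit(void);    /* called if the coroutine body returns */

    /* Build the initial stack image from C. stack_top is the highest address
       of the new stack and is assumed to be 16-byte aligned. The returned
       pointer is what gets loaded as the stack pointer on the first switch. */
    static void *prepare_stack(void *stack_top, void (*fn)(void *), void *arg)
    {
        uint64_t *sp = (uint64_t *)stack_top;

        *--sp = (uint64_t)trampoline;      /* "return address" for the switch's ret */
        *--sp = 0;                         /* rbp */
        *--sp = (uint64_t)fn;              /* rbx: the coroutine function */
        *--sp = (uint64_t)arg;             /* r12: its argument */
        *--sp = (uint64_t)coroutine_exit;  /* r13: where a plain return ends up */
        *--sp = 0;                         /* r14 */
        *--sp = 0;                         /* r15 */
        return sp;
    }

    /* The trampoline itself: never called, only "returned into". The coroutine
       function and the exit routine already sit in callee-saved registers, so
       they survive the call. */
    __asm__(
        ".globl trampoline\n"
        "trampoline:\n"
        "    movq  %r12, %rdi\n"   /* pass the argument */
        "    callq *%rbx\n"        /* run the coroutine body */
        "    jmp   *%r13\n"        /* if it returns, jump to coroutine_exit */
    );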

Ahhh ok. Cool!

Do sanitizers (ASan/UBSan/valgrind) still work in this setting? Also I'm wondering if you'll need some special handling if Intel CET is enabled

I don't know enough about event simulation to talk API design in depth but I find the stackful coroutine approach super interesting so I'll be taking a look at the code later!

Do you plan on accepting contributions or do you see the repo as being a read-only source?

I would be happy to accept contributions, especially for porting to additional architectures. I think the architecture-specific code is relatively well encapsulated (see src/port), but code for additional architectures needs to be well tested on the actual platform, and there are limits to how much hardware fits on my desk.