My (very amateur) belief is that we're in a quiet golden age of cool space viz/tools, for example:
- https://dmytry.github.io/space/
- https://www.tng-project.org/explore/3d
- https://github.com/da-luce/astroterm
- https://ssd.jpl.nasa.gov/tools
Feels like the halcyon techno-optimism from TOS + TNG!
However, note that the plot under "Native SIMD Throughput Comparison" is extremely misleading: for bar charts to support proportional comparison, the y-axis needs to start at zero. As presented, the data look like a 10-100x gain rather than the actual 2x improvement.
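To make that concrete with toy numbers (a genuine 2x difference, nothing from the post's actual data), here's a quick matplotlib sketch of the same bars on a truncated axis vs a zero-based one:

```python
import matplotlib.pyplot as plt

# Toy numbers, not the post's data: a genuine 2x speedup.
labels, throughput = ["scalar", "SIMD"], [50, 100]

fig, (ax_bad, ax_good) = plt.subplots(1, 2, figsize=(8, 3))
for ax in (ax_bad, ax_good):
    ax.bar(labels, throughput)
ax_bad.set_ylim(49, 102)   # truncated: the SIMD bar renders ~50x taller
ax_good.set_ylim(0, 110)   # zero-based: bar heights reflect the true 2x ratio
ax_bad.set_title("truncated y-axis")
ax_good.set_title("y-axis from zero")
plt.show()
```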
> But "fine" starts to feel slow when you need dense time resolution. Generating a month of ephemeris data at one-second intervals is 2.6 million propagations per satellite.
Ok, except SGP4 loses accuracy over WAY shorter time frames than a month (think hours to days).
> Pass prediction over a ground station network might need sub-second precision across weeks.
a) sub-second ephemeris for antenna pointing is crazy overkill, and b) same comment about accuracy as above.
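For scale on (a), my own back-of-envelope rather than anything from the post: a LEO satellite at ~400 km altitude moving ~7.7 km/s peaks at an angular rate of roughly v/h = 7.7/400 ≈ 0.02 rad/s ≈ 1.1°/s at zenith, so even a 1°-beamwidth dish only needs a pointing update every few hundred milliseconds, and those would be interpolated from a much coarser ephemeris anyway.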
If you calculate both anyway, wouldn't it be even faster to just always do the complex calculation and skip the simple one? (presumably the complex one is more precise?)
The naming does imply that, but maybe the two paths aren't just different precisions of the same quantity: "simple" vs "complex" could be calculating different things entirely? Seems like a stretch though.
Also the paragraph covering that part doesn't make much sense to me:
> This felt wasteful at first. Why compute both paths? But modern CPUs are so fast at arithmetic that computing both and selecting is often faster than branch misprediction. Plus, for SGP4, most satellites take the same path anyway, so we're rarely doing truly "wasted" work.
I'm always skeptical of claims about branch misprediction penalties made without actual benchmarking (branch predictors are often very good!), and the claim seems undermined by the very next sentence: if "most satellites take the same path anyway," the branch would be easily predicted.
I also don't understand the argument that because most satellites take the same path, the SIMD code is rarely doing "wasted" work: mask-select wastes part of the work by construction, since both paths are computed for every lane and the mask merely discards one result. (You could maybe handwave that pipelining or speculative execution makes the extra work free, but the post doesn't make those arguments.)
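Here's a minimal sketch of the compute-both-then-select pattern as I understand it (illustrative names and made-up formulas, not the library's actual code), which is why the "rarely wasted" claim confuses me:

```python
import numpy as np

rng = np.random.default_rng(0)
ecc = rng.uniform(0.0, 0.1, size=1_000_000)

# Hypothetical path condition -- a stand-in for whatever SGP4 branches on.
take_simple = ecc < 0.05

# Both paths run over the FULL array; the formulas here are made up.
simple_result = 2.0 * ecc
complex_result = np.sin(ecc) / (1.0 + ecc)

# The select: every element paid for both paths, and the mask discards one
# result per element. "Most satellites take the same path" doesn't reduce
# the work done; it only means the discarded result is usually the same one.
result = np.where(take_simple, simple_result, complex_result)
```

Branch prediction is irrelevant here precisely because there is no branch; the real question is whether doubling the arithmetic beats a (possibly well-predicted) branchy version, and only a benchmark can settle that.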
The library seems good, and it's always extremely nice when people produce write-ups like this, but they might just be out over their skis when it comes to what was actually important about their optimizations.