[1] https://github.com/joerick/pyinstrument
[2] https://github.com/benfred/py-spy
[3] https://github.com/P403n1x87/austin
For PHP, and in browsers' developer tools, that kind of profiler does exist. But judging by the screenshots, this profiler cannot produce such a graph, so its usefulness for tracking down memory leaks or high memory consumption is limited.
To summarize, what I usually need is not a line number but a path to the objects using or holding the most memory.
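For what it's worth, you can approximate that with nothing but the stdlib: gc.get_referrers lets you walk back up from a suspect object toward whatever is holding it. A rough sketch (naive, it follows only the first referrer at each step; dedicated tools such as objgraph do this properly and can render the chain as a graph):

    import gc
    import types

    def referrer_chain(obj, max_depth=5):
        # Walk back-references from a suspect object toward whatever keeps it alive.
        # Very naive: follows only the first non-frame referrer at each step, and
        # gc.get_referrers only sees containers tracked by the garbage collector.
        chain = [obj]
        seen = {id(obj)}
        current = obj
        for _ in range(max_depth):
            next_holder = None
            for r in gc.get_referrers(current):
                if r is chain or id(r) in seen or isinstance(r, types.FrameType):
                    continue
                next_holder = r
                break
            if next_holder is None:
                break
            seen.add(id(next_holder))
            chain.append(next_holder)
            current = next_holder
        return chain

    # leaked = <the object you suspect is being kept alive>  (hypothetical)
    # for link in referrer_chain(leaked):
    #     print(type(link).__name__, repr(link)[:80])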
1. Allocation profiler
2. Heap analyzer
Allocation profilers capture data about what is allocating memory over time. This can be done in real time without interrupting the process and is usually relatively low-overhead.
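As an in-process illustration of the idea in CPython, the stdlib tracemalloc module records the allocating call stack for each allocation (it adds more overhead than the sampling tools linked above, and the workload below is just a placeholder):

    import tracemalloc

    tracemalloc.start(25)          # record up to 25 frames per allocation

    # ... placeholder workload; run the code you actually want to profile ...
    data = [str(i) * 100 for i in range(100_000)]

    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics("traceback")[:5]:
        print(stat)                # total size, count, and the allocating stack
        for line in stat.traceback.format():
            print("   ", line)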
Heap analyzers generally take a heap dump, construct an object graph, run various analyses, and generate an interactive report. This requires pausing the program long enough to create the heap dump, which is often multiple GB or more in size, writing it to disk, and then doing the analysis and report generation.
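A very crude in-process approximation of what a heap analyzer reports (shallow sizes only, no retained-size or dominator analysis, and no dump written to disk) can be had from the stdlib as well:

    import gc
    import sys
    from collections import Counter

    # Group every object the GC tracks by type and sum their shallow sizes.
    counts, sizes = Counter(), Counter()
    for obj in gc.get_objects():
        name = type(obj).__name__
        counts[name] += 1
        sizes[name] += sys.getsizeof(obj)

    for name, size in sizes.most_common(10):
        print(f"{name:<25} {counts[name]:>10} objects {size / 1e6:>10.1f} MB (shallow)")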
I agree that 2) is generally more useful but I assume both types of profilers have their place and purpose.
That's a problem with many of the profiling tools around Python. They often support Windows badly or not at all.
Underneath it's still substantially similar to good old Windows NT.
There's a Linux "subsystem". Well, two of them. WSL1 is an API translation layer that ends up being cripplingly slow; don't use it. WSL2 is more of a VM that just runs a Linux distro. And that's before you get into third-party compatibility layers like Cygwin and MinGW.
Sigh, why infest everything with "AI".
https://lobste.rs/s/ytjc8x/why_i_m_skeptical_rewriting_javas...
The rewrite discussion is here: https://news.ycombinator.com/item?id=41898603
I wanted to submit a ticket for my use case, but I can't find a minimal program and setup that reproduces the issue. I just know in my gut that it has to do with the mix of multiprocessing (fork) + async + threading.
Always ask whether they are assuming it's network-bound or whether they actually have measurements. Measurements are sometimes wrong, but in performance engineering, assumptions are wrong more often than they are right.
As a (former) NetEng, it bothers me no end that so many people claim "it's the network" when their application is slow or broken, without understanding the actual problem.
On top of Python being slow and ill-suited to heavy abstraction, people write horrible code with layers and layers of abstractions in it. These tools can sometimes help with that.
People who do write streamlined Python code, which necessarily means leaning on C extensions, will probably reach for cachegrind/helgrind/gprof and the like.
Or switch to another language, which avoids many other categories of issues as well.