Do we?
So it's pretty safe to say some (many?) people lend inappropriate credence to LLM outputs. It's eating our minds.
What I've found surprising is that the __proto__ string comes from a fixed set of values in the string sampling set, whereas I'd have expected the function to return random strings from the given range.
But maybe that's my own bias from having been introduced to property-based testing via random values. It also feels like a stretch to call this a property-based test, because what is the property, "setters and getters that work"? Because I expect that from all my classes.
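A minimal sketch of the kind of test I have in mind (fast-check-style API; the Config class is my own illustration, and the claim that the string generator mixes in fixed tricky values is what's being described upthread, not something I've verified for every library version):

```js
// Sketch only: fast-check-style property test over plain getters/setters.
// The observation upthread is that the string generator draws "__proto__"
// from a fixed set of tricky samples rather than purely at random.
const fc = require('fast-check');

// Hypothetical class with setters/getters backed by a plain object.
class Config {
  constructor() { this.store = {}; }
  set(key, value) { this.store[key] = value; }
  get(key) { return this.store[key]; }
}

// The "property" is just set-then-get round-tripping.
fc.assert(
  fc.property(fc.string(), fc.string(), (key, value) => {
    const c = new Config();
    c.set(key, value);
    return c.get(key) === value;
  })
);
// This fails when key is "__proto__": assigning a string through the
// __proto__ setter is a silent no-op, so get() returns Object.prototype
// instead of the value that was set.
```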
So what? This line of what-if reasoning is so annoying, especially when it's analysis of a language like JavaScript. There's no vulnerability found here, and most web developers are well aware of the risky parts of the language. This is almost as bad as all the insane false positives SAST scans dump on you.
Oh I'm just waiting to get dogpiled by people who want to tell me web devs are dumber than them and couldn't possibly be competent at anything.
In my experience this really isn’t true. Most web developers I know are not familiar (enough) with prototype pollution.
By the way, this isn’t because they are “dumb”. In this case it’s the tool’s fault, not the craftsman’s. Prototype pollution is complicated and surprising.
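For anyone who hasn't run into it, here's a minimal illustration (the naive merge helper is hypothetical, but the pattern is what real pollution bugs look like):

```js
// Hypothetical naive deep-merge: recursing into attacker-controlled keys
// lets "__proto__" reach Object.prototype.
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === 'object' && source[key] !== null) {
      target[key] = merge(target[key] || {}, source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

const payload = JSON.parse('{"__proto__": {"isAdmin": true}}');
merge({}, payload);

// Every object now appears to have isAdmin, including unrelated ones
// created afterwards.
console.log({}.isAdmin); // true
```

Nothing in that code mentions Object.prototype, yet a completely unrelated object is affected, which is exactly why it trips up people who otherwise know the language well.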
I don't think this is true, and I think that's supported by the success of JavaScript: The Good Parts.
It would be unfair to characterise a lack of comprehensive knowledge of JavaScript foot-guns as general incompetence.