PR Quiz uses AI to generate a quiz from a pull request and blocks you from merging until the quiz is passed. You can configure various options, such as the LLM model to use, the maximum number of attempts to pass the quiz, or the minimum diff size to generate a quiz for. In my limited testing, I found that reasoning models, while more expensive, generated better questions.
Privacy: This GitHub Action runs a local webserver and uses ngrok to serve the quiz through a temporary URL. Your code is only sent to the model provider (OpenAI).
This is a good question, but also: how do we make sure that humans understand the code that _other humans_ have (supposedly) written? Effective code review is hard because it implies that the reviewer already has their own mental model of how a task could/would/should have been done, or is at the very least building their own mental model at reading time and internally asking, 'Does this make sense?'.

Without that basis, code review is more like fuzzy standards compliance, which can still be useful, but it's not the same as a review process that works by comparing alternate or cooperatively competing models, so I wonder how much of that is gained through a quiz-style interaction.
If someone were to submit this code for review:
```typescript
getUser(id: number): UserDTO {
  return this.mapToDTO(this.userModel.getById(id));
}
```
and I knew that `userModel` throws an exception when it doesn't find a user (and this is TypeScript, not Java, where exceptions are not declared in the method signature), then I would tell them to wrap it in a try-catch. I would also probably tell them to change the return type to `UserDTO | null` or `Result<UserDTO>`, depending on the pattern we chose for the API. I don't need to know anything about the original ticket to point these things out, and linters most likely won't catch them. Another use for code review is catching potential security issues like SQL injection that the linter or framework can't figure out (e.g., using raw SQL queries in your ORM without prepared statements).
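A minimal sketch of the change being suggested, reusing the hypothetical names from the snippet above (`UserDTO`, `mapToDTO`, and `userModel` come from the example, not a real API):

```typescript
// Sketch: surface the "not found" case in the return type instead of
// letting the exception escape. Assumes getById throws when no user
// exists for the given id.
getUser(id: number): UserDTO | null {
  try {
    return this.mapToDTO(this.userModel.getById(id));
  } catch {
    // userModel.getById threw, so there is no user for this id
    return null;
  }
}
```

The `Result<UserDTO>` variant would work the same way, just wrapping the success and failure cases in an explicit value rather than using `null`.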
The initial idea was applied to classroom settings:

An Inquisitive Code Editor for Addressing Novice Programmers’ Misconceptions of Program Behavior https://austinhenley.com/pubs/Henley2021ICSE_Inquisitive.pdf
> Your code is only sent to the model provider (OpenAI)
When did this become an acceptable "privacy" statement?
I feel like we are reliving the era of free mobile apps harvesting whatever user data they could for ad profiling, before GDPR kicked in…
So yes, this is the second part of the privacy statement.
We've got a huge LGTM problem where people approve PRs they clearly don't understand.
Recently we had a bug in some code written by an employee who got laid off. The people who reviewed it are both still with the company, but neither of them could explain what the code did.
That triggered this angry tweet:
> Unless someone is getting fired for bad code, the “lgtm” culture will never die.
They don't; that's why we need the PR in the first place.