One must note that this is impossible unless you have chosen to handicap the C implementations while benchmarking. Borderline unethical, IMO, to put forth such a claim.
I must have misunderstood what you were objecting to then, my bad. What claim are they making that is so impossible that it borders on being unethical?
I mentioned JIT because it seems to be based on a similar principle, at least: optimizing on the programmer's behalf by looking at how the program is actually used, not just at how to speed up the code in general.
My claim is that there is no additional information that Python provides, as opposed to C, that would make it faster. Hence the only conclusion I can draw is that either they have supercharged their compiler for that particular benchmark, or they have handicapped C, since one can write C that emits the same assembly they lowered to. Hence my point about handicapping the C benchmark.
> My claim is that there is no additional information that Python provides as opposed to C that would make it faster
Ok, but that's not what they are claiming - their claim (at least based on what the article is saying) is more about one toolchain vs another, i.e. "if you use our compiler (that takes Python code as input) then the resulting executable will run as fast as (or possibly faster than) programs created by all the popular compilers (that take C/C++ code as input)." The sales pitch is that they've got magic sauce in their compiler, and you get to use Python as well.
Beating gcc/clang/icc is not a trivial task. One can engineer a `{benchmark, compiler pass}` pair in such a manner that it shows a speedup, but over a benchmark suite like SPEC, the general community consensus is that this is very difficult, and their paper doesn't reveal that they've found a secret sauce (sauce := compiler pass).
A common occurrence is something like a pool of 10 actions where a bunch of tests each do 3 to 7 of them. This is very hard to abstract with a function call.
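To illustrate the pattern (with made-up action and test names, since the original tests aren't shown): each test picks its own subset of the action pool, in its own order, so any shared helper would need a flag or parameter for nearly every step and ends up harder to read than the inline version.

```python
# Hypothetical sketch: a pool of small test "actions", where each test
# exercises its own subset of them in its own order. All names here are
# invented for illustration.

def create_user(state): state["users"] = state.get("users", 0) + 1
def login(state):       state["logged_in"] = True
def add_item(state):    state["items"] = state.get("items", 0) + 1
def checkout(state):    state["orders"] = state.get("orders", 0) + 1
def logout(state):      state["logged_in"] = False

def test_purchase_flow():
    # Uses 4 of the actions, in an order specific to this scenario.
    state = {}
    create_user(state)
    login(state)
    add_item(state)
    checkout(state)
    assert state["orders"] == 1

def test_browse_only():
    # Uses a different, overlapping subset of 3 actions.
    state = {}
    create_user(state)
    login(state)
    logout(state)
    assert not state["logged_in"]
```

A single `run_scenario(create=True, login=True, items=1, checkout=False, ...)` helper covering both tests would need a parameter per action plus some way to control ordering, which is exactly the abstraction that tends to read worse than the straight-line tests.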
In the case I was referring to, it was mainly to keep the code consistent with what was already there and the PRs small, as the code base is primarily owned by a different team. They're also pretty innocuous, short tests that read well as they are.
For postdocs, the situation is even worse. Postdocs typically get paid ~1.5x what a PhD student makes, and in some fields closer to the basic sciences, one (virtually) cannot land a permanent research position without spending 2+ years as a postdoc research associate. The current generation of professors are the worst folks when it comes to reasoning about the sustainability of research in their field.