Compiling high-level languages to assembly is a deterministic procedure. You write a program in a small, well-defined language (relative to natural language, every programming language is tiny and extremely well defined). Feeding the same input to the same compiler gets you the same output every time. LLMs are nothing like a compiler.
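The determinism claim is easy to check in miniature. A sketch using CPython's built-in `compile` as a stand-in for any compiler (the source snippet is made up for illustration):

```python
# Compile the same source twice with the same compiler: identical bytecode.
src = "def add(a, b):\n    return a + b\n"

code_a = compile(src, "<demo>", "exec")
code_b = compile(src, "<demo>", "exec")

# The emitted bytecode is byte-for-byte the same, run after run.
print(code_a.co_code == code_b.co_code)  # True
```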
Is there any compiler that "rolls the dice" when it comes to optimizations? Like, if you compile the exact same code with the exact same compiler multiple times you'll get different assembly?
And the Alan Kay quote is great, but it does not apply here at all. I'm pointing out how silly it is to compare LLMs to compilers. That's all.
Rolling the dice is accomplished by mixing optimization flags, PGO data, and which parts of the CPU get used.
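A small analogue of the flags point, again using CPython's `compile` (its `optimize` parameter plays the role of `-O` flags here; the snippet is illustrative, not a claim about any particular native compiler):

```python
# Same source, different optimization level -> different output.
src = "x = 1\nassert x > 0\n"

plain = compile(src, "<demo>", "exec", optimize=0)
opt = compile(src, "<demo>", "exec", optimize=2)  # strips asserts, like -O

# Different flags change the emitted bytecode...
print(plain.co_code == opt.co_code)  # False

# ...but each flag combination is still deterministic on its own.
print(opt.co_code == compile(src, "<demo>", "exec", optimize=2).co_code)  # True
```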
Or by using a managed language with a dynamic compiler (aka JIT) and GC. Those are not deterministic at execution time either; which outcome gets produced is driven by heuristics and measured probabilities.
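CPython has no JIT worth demonstrating here, but run-to-run nondeterminism in a managed runtime is easy to observe in another form: hash randomization, which changes string hashes (and with them set iteration order) on every interpreter start. This is a loose analogy to the JIT/GC point, not the same mechanism. A sketch that spawns two fresh interpreters:

```python
import os
import subprocess
import sys

# Run the same one-liner in two fresh interpreters, with hash
# randomization explicitly enabled for both.
env = dict(os.environ, PYTHONHASHSEED="random")
cmd = [sys.executable, "-c", "print(hash('roll the dice'))"]

run_a = subprocess.run(cmd, env=env, capture_output=True, text=True).stdout
run_b = subprocess.run(cmd, env=env, capture_output=True, text=True).stdout

# Same program, same interpreter: a different hash on each run.
print(run_a != run_b)
```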
Yes, the quote does apply, because many people cannot grasp how technology will look beyond today.
But the compiler doesn't "roll the dice" when making those guesses! Compile the same code with the same compiler and you get the same result repeatedly.
Anyone who thinks otherwise, even if LLMs are still a bit dumb today, is fooling themselves.