"High-level CPUs" are a tarpit. Beautiful idea in theory, but history shows they're a Bad Idea.
Xerox, LMI, Symbolics, the Intel iAPX 432, the UCSD p-System, Jazelle, PicoJava: dozens of fancy designs for Lisp, Smalltalk, Ada, Pascal, and Java, yet none of them lasted more than a product iteration or three. They were routinely outstripped by the Motorola 68K, x86, and RISC CPUs, and whether your metric of choice is shipment volume, price/performance, or raw performance, they consistently underwhelmed. A trail of tears stretching back more than 40 years.
I'm just a software guy, so my thoughts are probably painfully naive here.
My understanding is that we're mostly talking about economics, e.g. that there's no way a Java or Lisp CPU could ever compete with a general-purpose CPU. That's what I thought explained the decline of the Symbolics CPUs against general-purpose chips.
It seems like some hardware co-processing could go a long way for some purposes, though? GC in particular seems like it would be amenable to some dedicated hardware.
These days we're seeing dedicated "AI" silicon in newer processors. That's not a high-level CPU as described in the article, but it does seem like we're moving away from purely general-purpose CPUs toward a more heterogeneous world where more silicon is devoted to other important purposes.
Those "AI" chips generally just optimize around a vector-multiply, allowing more parallelism with less precision. This is almost a lower level processor then a general purpose CPU, and doesn't really map to the "string compare instruction" type of functionality that Yossi Kreinin describes in the essay.
If you want to build something that is used you can't ignore economics. But in this case I think high-level CPUs are worse even if you could put economics aside.
> GC in particular seems like it would be amenable to some dedicated hardware.
Those specialized network cards have ASICs that parse network packets and don't need additional memory to do their job. You can easily build a fixed-size hardware register to hold an IPv4 packet header, for example.
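As an illustration (a generic sketch of the header layout, not how any particular NIC wires its registers), the IPv4 header is a small fixed-size structure, so a parsing ASIC can pull every field from a known offset without touching main memory:

```c
#include <stdint.h>

/* Minimal IPv4 header layout (20 bytes without options). Because every
 * field sits at a fixed offset, a NIC's parsing ASIC can latch the whole
 * header into a register and extract fields with plain wiring: no
 * pointers, no allocation, no extra memory traffic. */
struct ipv4_hdr {
    uint8_t  version_ihl;    /* 4-bit version, 4-bit header length     */
    uint8_t  tos;            /* type of service (DSCP + ECN)           */
    uint16_t total_length;   /* packet length in bytes (network order) */
    uint16_t identification;
    uint16_t flags_fragment; /* 3-bit flags, 13-bit fragment offset    */
    uint8_t  ttl;
    uint8_t  protocol;       /* e.g. 6 = TCP, 17 = UDP                 */
    uint16_t checksum;
    uint32_t src_addr;
    uint32_t dst_addr;
};
```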
But a "high level CPU" is all about operating on high-level objects in memory, so your question becomes, how do you store a linked list in memory more efficiently than a RISC CPU? How do you pointer chase to the next node faster than a RISC CPU? Nobody has figured it out yet, and I agree with the article, I don't see how it's possible. CPUs are already general-purpose and very efficient.
Probably! Everything runs on economics. Network and graphics accelerators did just fine—but then they were (or became over time) high-volume opportunities. Volume drives investment; investment drives progress.