Hacker News | past | comments | ask | show | jobs | submit | raphlinus's comments

I don't agree that the clothoid is a math nightmare. One of the central problems you have to solve for roads is the offset curve. And a clothoid is extremely unusual in that its offset curve has a clean analytic solution. This won't be the case for the cubic parabola (which is really just a special case of the cubic Bézier).

Sure, you have to have some facility with math to use clothoids, but I think the only other curve that will actually be simpler is circular arcs.


I mean, they are not a math nightmare per se if you're comfortable with the theory. What I meant is that they become comparatively complex to integrate into a system like this: computing arc length, intersections, reparametrization, etc., with clothoids usually means fairly involved numerical algorithms.

Using circular arcs or even simple third-degree polynomials (like cubic parabolas) reduces many of those operations to trivial O(1) function calls, which makes them much cheaper to evaluate and manipulate procedurally, especially when you're computing them 60 times per frame.


You might be familiar with these, but GP wrote a couple of excellent pieces on Euler spirals:

https://raphlinus.github.io/curves/2021/02/19/parallel-curve...

https://levien.com/phd/euler_hist.pdf


Not true at all. I interacted with Meena[1] while I was there, and the publication came almost three years before the release of ChatGPT. It was an unsettling experience; it felt very science fiction.

[1]: https://research.google/blog/towards-a-conversational-agent-...


The surprise was not that they existed: there were chatbots at Google well before ChatGPT. What surprised them was the demand, despite all the problems the chatbots have. The big problem with LLMs was not that they could do nothing, but how to turn them into products that made good money. Even people at OpenAI were surprised by what happened.

In many ways, turning tech into products that are useful, good, and don't make life hell is a more interesting issue of our times than the core research itself. We probably want to avoid the value-capturing platform problem, as otherwise we'll end up seeing governments using ham-fisted tools to punish winners in ways that aren't helpful either.


The uptake forced the bigger companies to act. The same happened with image diffusion models: no corporate lawyer would let a big company release a product that allowed the customer to create any image... but when Stable Diffusion et al. started to grow like they did, there was a specific price for not acting, and it was high enough to change boardroom decisions.

ChatGPT's real innovation was making the chat not say racist things that the press could report on. Earlier efforts failed for exactly that reason.

Right. The problem was that people underappreciated 'alignment' even before the models were big. And as they get bigger and smarter, it becomes more of an issue.

Well, I must say ChatGPT felt much more stable than Meena when I first tried it. But, as you said, Meena came a few years before ChatGPT was publicly announced :)

On the official Kuycon site, it says "Since 2023, Kuycon has partnered exclusively with ClickClack.io to bring its innovative line of monitors to customers outside of China[...]". I'm seriously considering getting one of these.


I had one of these as a kid, actually on loan from another microcomputer enthusiast. My dad and I had soldered together an SDK-85 kit (which I still have), and we swapped that for the KIM-1 with another microcomputer enthusiast. It's the machine where I first started to learn programming, in machine code, entered in hex.

There's something really appealing about machines this simple that has been lost in the modern era. But this particular board was very limited; there wasn't a lot you could actually do with it.


My reading is that there aren't really a lot of addressing modes on the 286, as there are on the 68000 and friends; rather, every address is generated by summing an optional 8- or 16-bit immediate displacement and zero to two registers. There are no modes where you do one memory fetch, then use the result as the base address for a second fetch, which is arguably a vaguely RISC-flavored choice. There is a one-cycle penalty for summing all three elements ("based indexed mode").


What you say about memory-indirect addressing is true only of the MC68020 (1984) and later CPUs.

The MC68000 and MC68010 had essentially the same addressing modes as the 80286, i.e. indexed addressing with up to 3 components (base register + index register + displacement).

The difference is that the addressing modes of the MC68000 could be used in a very regular way: all 8 address registers were equivalent, and all 8 data registers were equivalent.

In order to reduce opcode size, the 8086 and 80286 permitted only certain combinations of registers in the addressing modes, and they did not allow auto-increment and auto-decrement modes except in special instructions with dedicated registers (PUSH, POP, MOVS, CMPS, STOS, LODS), resulting in an instruction set where no two registers are alike, increasing the cognitive burden on the programmer.

The 80386 not only added extra addressing modes taken from the DEC VAX (i.e. scaled indexed addressing), it also made the addressing modes much more regular than those of the 8086/80286, even while preserving the restriction of auto-increment and auto-decrement modes to a small set of special instructions.


There's a straightforward answer to the "why not" question: because it will result in codebases with the same kind of memory unsafety and vulnerability as existing C code.

If an LLM is in fact capable of generating code free of memory-safety errors, then it's certainly also capable of writing the Rust types that guarantee this and are machine-checkable. We could go even further and have automated generation of proofs, either in C using tools similar to CompCert, or perhaps in something like ATS2. The reason we don't do these at scale is that they're tedious and verbose, and that's presumably something AI can solve.

Similar points were also made in Martin Kleppmann's recent blog post [1].

[1]: https://martin.kleppmann.com/2025/12/08/ai-formal-verificati...


It is equally odd to me that people cling so hard to C when something like Rust (and other modern languages, for that matter) has a much nicer ecosystem, memory safety aside. C doesn't even have a built-in hashtable or vector, let alone pattern matching, traits, and sum types. I get that this is about AI and vibe coding, but we aren't yet at a point where zero human interaction is reasonable, so every codebase should assume some level of hybrid human/AI involvement. Why people so badly want to start a new codebase in C is beyond me (and yes, I've written a lot of C in my time and I don't hate it, but it hasn't aged well in expressiveness).


> It is equally odd to me that people cling so hard to C when something like Rust (and other modern languages, for that matter) has a much nicer ecosystem, memory safety aside.

Simplicity? I learned Rust years ago (when it was still pre-release), and when I now look at a lot of codebases, I can barely get a sense of what is going on, with all the new stuff that got introduced. It's like looking at something familiar and different at the same time.

I do not feel the same when I see Go code, as so little has changed or been added to it. The biggest thing is probably generics, and that is rarely used.

For me, this is what I think appeals to C programmers: the fact that the language does not evolve and has stayed static.

If we compare this to C++, which has become a mess over time (and I know I'll get downvoted for this), Rust feels like it's going too far down the Rust++ route.

Everybody and their dog wants something added to make Rust do more things, but at the same time it feels like it's repeating C++'s history. I have seen the same issue with other languages that started simple and then became monsters of feature sets. D comes to mind.

So when you look at codebases from different developers, the different styles that come from using different feature sets create a disconnect and make it harder for people to read each other's code. With C, because of the language's limits, you're more often led down the same, easier-to-read way of writing code. If that makes sense?


Proofs of what? "This new feature should make the 18 to 21 year old demographic happy by aligning with popular cultural norms". This would be difficult to formalize as a proof.


Memory safety in particular, and actually UB in general (you've got to watch out for integer overflows, among other things). But one could prove arbitrary properties, including the absence of panics (which would have been helpful in a recent Cloudflare outage), etc.

In order to prove the absence of UB, you have to be able to reason about other things. For example, to safely call qsort, you have to prove that the comparison function is a total order. That's not easy, especially if you're comparing larger and more complicated structures with pointers.

And of course, proving the lack of pointer aliasing in C is extremely difficult, even more so if pointer arithmetic is employed.


In this context it's proofs of properties of the program you're writing. A classic one is that any lossless compression algorithm should satisfy decompress(compress(x)) == x for all x.


That's because the one-instruction variant may read past the end of an array. Say s is a single null byte at 0x2000fff, for example (and memory is only mapped through 0x2001000); the function as written is fine, but the optimized version may page fault.


Ah, yes, good point. I think this is a nice example of "I didn't notice I needed to tell the compiler a thing I know so it can optimize".



Here's the other side, for what it's worth: https://news.tuxmachines.org/n/2025/11/20/Today_s_Judgement....


A grim portent for their mental health, given the attempt to reframe, as somehow supportive, a judgement that demolished them and called them "character assassins".

Really, though, this is the first time I've ever looked at TechRights for real, and the whole place is very... Always Sunny meme.



Unfortunately graphics APIs suck pretty hard when it comes to actually sharing memory between CPU and GPU. A copy is definitely required when using WebGPU, and also on discrete cards (which is what these APIs were originally designed for). It's possible that using native APIs directly would let us avoid copies, but we haven't done that.


Thanks for the pointer, we were not actually aware of this, and the claimed benchmark numbers look really impressive.


There were at least two renderers written for the CM2 that used strips; at least one of them used scans and general communication, most likely both did.

1) For the given processor set, where each processor holds an object, 'spawn' a processor in a new set, one processor for each span.

(a) The spawn operation consists of the source processor setting the number of nodes in the new domain, then performing an add-scan, then sending the total allocation back to the front end. The front end then allocates a new power-of-2 shape that can hold those. The object-set then uses general communication to send scan information to the first of these in the strip-set (the address is left over from the scan).

(b) In the strip-set, use a mask-copy-scan to get all the parameters to all the elements of the scan set.

(c) Each of these elements of the strip set determines the pixel location of the leftmost element.

(d) Use a general send to seed the strip with the parameters of the strip.

(e) Scan those using a mask-copy-scan in the pixel-set.

(f) Apply the shader or the interpolation in the pixel-set.

Note that steps (d) and (e) also depend on encoding the depth information in the high bits and using a max combiner to perform z-buffering.

Edit: there must have been an additional span/scan in pixel space that is then sent to image space with z-buffering; otherwise strip seeds could collide and be sorted by z, which could miss pixels from the losing strip.


What's a CM2? I tried searching, combined with some graphics-related keywords, but I just got weird stuff.


Given the focus on parallelism and communication, maybe the Connection Machine 2?

