I remember building really complex layouts with nested tables, and learning the hard way that going beyond 6 levels of nesting caused serious rendering performance problems in Netscape.
I remember seeing a co-worker stuck trying to debug Netscape showing a blank page. When I looked at it, it wasn't showing a blank page per se; it was just taking over a minute to render tables nested twelve deep. I deleted exactly half of them with no change to the layout or functionality, and it immediately started rendering in under a second.
Hacker News uses nested tables for comments. This comment that you're reading right now is rendered within a table that has three ancestor tables.
As late as 2016 (possibly even later), they did so in a way that resulted in really tiny text when reading comments on mobile devices in threads more than five or so layers deep. That isn't the case anymore - it might be because HN updated the way it generates the HTML, though it could also be that browser vendors updated their logic for rendering nested tables. I know it was a known problem among browser developers, because most uses of nested tables were very different from what HN was (is?) using them for, so making text inside deeply nested tables smaller was generally a desirable feature... just not in the context of Hacker News.
Upromise.com -- a service for helping families save $ for college. Those layouts, which I painstakingly hand-crafted in HTML, caused the CTO to say "I didn't know you could do that with HTML", and were served to the company's first 10M customers.
That's a fun trick, but please consider adding ARIA roles (e.g. role="presentation" to <table>, role="heading" aria-level="[number]" to the <font> elements used for headings) to make your site understandable by screen readers.
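For instance, a minimal sketch (the role and aria-level values are the ones suggested above; the surrounding markup is made up):

    <!-- Minimal sketch: a layout table marked as presentational, and a
         font-based heading exposed to assistive tech. The role and
         aria-level values are as suggested above; the rest is made up. -->
    <table role="presentation">
      <tr>
        <td>
          <font size="6" role="heading" aria-level="1">My Homepage</font>
        </td>
      </tr>
    </table>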
I'm on Firefox, and when I right-click and open the image in a new tab I see an SVG file with pale blue text colour and cut-off lettering. The source of the SVG suggests that the letters are drawn paths rather than a font.
Saving the SVG file and loading it into Inkscape shows a grouped object with a frame and then letter forms. The letter forms are not fonts but complete drawn paths. So I think the chopping off of the descenders is a deliberate choice (which is fine if that's what's wanted).
The whole page looks narrow and long on my landfill Android phone, so the content sits in the middle third of the browser, but I can pinch-zoom OK onto each 'cell', section of text, or graph.
Thanks to tirreno and reconnecting for posting this interesting page markup.
Responsive layout would be the biggest reason (mobile for one, but also a wider range of PC monitor aspect ratios these days than the 4:3 that was standard back then), probably followed by the conflation of exact layout details with the content, and separation of concerns / the ease of being able to move things around.
I mean, it's a perfectly viable approach if these are not requirements or preferences that you and your system have. But it's pretty rare these days that an app or site can say "yeah, none of those matter to me in the least".
It was relatively OK to deal with when the pages were created by coders themselves.
But then DreamWeaver came out, where you basically drew the entire page in 2D and it spat out HTML tables that stitched it all back together. The freedom it gave our artists to draw in 2D without worrying about the output meant they went completely overboard, and you'd get lots of tiny little slices everywhere.
Definitely glad those days are well behind us now!
Wasn't it Fireworks that sliced the image originally? You'd then be able to open that export in Dreamweaver for additional work. I didn't do that kind of design for very long. Did Dreamweaver get updated to allow slicing directly, bypassing Fireworks?
You jest, but it took forever for CSS to gain a somewhat intuitive layout mechanism that allowed you to do what could be done easily with HTML tables. Vertically centering a div inside another was really hard, and very few people actually understood the techniques involved rather than just blindly copying them.
It was beyond irony that the recommended solution was to tell the browser to render your divs as a table.
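For anyone who never had to do it, the trick looked roughly like this (a sketch; the class names are made up):

    /* Sketch of the classic pre-flexbox trick: divs told to lay out as a
       table so vertical-align finally works. Class names are hypothetical,
       used as <div class="outer"><div class="inner">...</div></div>. */
    .outer {
      display: table;
      width: 100%;
      height: 300px;
    }
    .inner {
      display: table-cell;
      vertical-align: middle; /* only meaningful in a table-cell context */
    }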
The author said he had the assets and gave them to Claude. It would be obvious if he had one large image for all the planets instead of individual ones.
> Traditional tool calling creates two fundamental problems as workflows become more complex:
> Context pollution from intermediate results: When Claude analyzes a 10MB log file for error patterns, the entire file enters its context window, even though Claude only needs a summary of error frequencies. When fetching customer data across multiple tables, every record accumulates in context regardless of relevance. These intermediate results consume massive token budgets and can push important information out of the context window entirely.
> Inference overhead and manual synthesis: Each tool call requires a full model inference pass. After receiving results, Claude must "eyeball" the data to extract relevant information, reason about how pieces fit together, and decide what to do next—all through natural language processing. A five tool workflow means five inference passes plus Claude parsing each result, comparing values, and synthesizing conclusions. This is both slow and error-prone.
Basically, instead of Claude trying to, e.g., process data via inference within its own context, it offloads the work to a program it writes specifically for the task. Up until today we've seen Claude running user-written programs. This new paradigm gives it the freedom to create whatever program it finds suitable for the task, run it (within the confines of a sandbox), and retrieve the result it needs.
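As a rough illustration of the log-file example quoted above (hypothetical code, not Anthropic's actual implementation; the path and log format are assumptions):

    # Hypothetical sketch of the kind of script Claude might write and run in
    # the sandbox: summarize error frequencies from a large log so that only
    # the summary, not the 10MB file, ever enters the model's context.
    from collections import Counter

    LOG_PATH = "app.log"  # assumed path and format, for illustration only

    def summarize_errors(path: str, top_n: int = 10) -> str:
        counts = Counter()
        with open(path) as f:
            for line in f:  # stream the file; never hold it all in memory
                if "ERROR" in line:
                    # crude grouping key: text after the marker, truncated
                    key = line.split("ERROR", 1)[1].strip()[:80]
                    counts[key] += 1
        top = [f"{n:6d}  {msg}" for msg, n in counts.most_common(top_n)]
        return "\n".join(top) or "no errors found"

    print(summarize_errors(LOG_PATH))  # only this summary returns to context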
I've had it happen almost every time I try to give them another shot. It presents a snippet of my code, claims there's a bug due to an unhandled edge case, and completely misses the (literally) very next line that specifically handles the edge case it mentioned.
> I speculate that within a few months, the communities will have settled on a single dominant one.
The solutions on the roadmap are not centralized the way GitHub is. There is a real initiative to promote federation, so we would not need to rely on a single entity.
I love this, and hope it works out this way. Maybe another way to frame it: In 2 years, what will the "Learn Python for Beginners" tutorials direct the user towards? Maybe there will not be a consensus, but my pattern-matching brain finds one!
Thanks! I used perf to look at cache miss rates and memory bandwidth during runs. The measurements showed the pattern I expected, but I didn't do a rigorous profiling study (different cache sizes, controlled benchmarks across architectures, or proper statistical analysis).
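For reference, the kind of invocation I mean looked roughly like this (generic perf event aliases; the exact counters available depend on the CPU, and ./bench stands in for the actual binary):

    # repeat 5 runs and report mean +/- stddev for the generic cache events
    perf stat -r 5 -e cache-references,cache-misses,instructions,cycles ./bench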
This was for a university exam, and I ran out of time to do it properly. The cache argument makes intuitive sense (three vectors cycling vs. scanning a growing n×k matrix), and the timing data supports it, but I'd want to instrument it more carefully in the future :)
I agree with you, but I also want to ask if I understand this correctly: there was a paradigm in which we aimed for Small Language Models to perform specific types of tasks, orchestrated by the LLM. That is what I perceived the MCP architecture as coming to standardize.
But here, it seems more like a diamond shape of information flow: the LLM processes the big task, then prompts are customized (not via LLM) with reference to the Skills, and then the customized prompt is fed yet again to the LLM.
What is there not to _get_, honestly? And why is jj so much easier to get?
The author seems to focus on how great it is to make changes to your commit history locally, and that you shouldn't worry because it's not pushed yet.
The thing is, I don't want automatic. Automatic sucks. The point of version control is that I am able to curate my changes. The guardrails of git are what make me feel safe.
I am still failing to see why jj is superior to git, or whatever.
There are some conventions people follow when working with git to make it safe to use. But those aren't git's features -- they are ways to avoid confusion.
If you don't want automatic, you shouldn't use git. It does too many things automatically -- updating your branch's head whenever you commit, for example.
And if I don't want that, I can detach HEAD. This isn't too much different. The only things that change by using branches are that you have a nice name, the commits are protected from being GCed, and you get a default name on push.
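E.g., a quick sketch of that workflow:

    git checkout --detach        # leave the branch; HEAD now points at a commit
    # ...edit a tracked file...
    git commit -am "experiment"  # commits fine, but no branch head moves
    git branch keep-this         # opt back in to a name (and GC protection)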