Hacker News | manbash's comments

Aw shucks, I was almost pulled in for another round of Star Fox 64. Unfortunately it's not there.


Ah, those days, when you would slice your designs and export them to tables.


I remember building really complex layouts with nested tables, and learning the hard way that going beyond 6 levels of nesting caused serious rendering performance problems in Netscape.


I remember seeing a co-worker stuck on trying to debug Netscape showing a blank page. When I looked at it, it wasn’t showing a blank page per se, it was just taking over a minute to render tables nested twelve deep. I deleted exactly half of them with no change to the layout or functionality, and it immediately started rendering in under a second.


Six nesting levels for tables? Cool, what were you making?


> Six nesting levels for tables?

Hacker News uses nesting tables for comments. This comment that you're reading right now is rendered within a table that has three ancestor tables.

As late as 2016 (possibly even later), they did so in a way that resulted in really tiny text when reading comments on mobile devices in threads that were more than five or so layers deep. That isn't the case anymore - it might be because HN updated the way it generates the HTML, though it could also be that browser vendors updated their logic for rendering nested tables as well. I know that it was a known problem amongst browser developers, because most uses for nested tables were very different than what HN was (is?) using them for, so making text inside deeply nested tables smaller was generally a desirable feature... just not in the context of Hacker News.


Upromise.com -- a service for helping families save $ for college. Those layouts, which I painstakingly hand-crafted in HTML, caused the CTO to say "I didn't know you could do that with HTML", and were served to the company's first 10M customers.


Why not! We did this in 2024 for our website (1) to have zero CSS.

Still works; only Claude cannot understand what those tables mean.

1. https://www.tirreno.com


That's a fun trick, but please consider adding ARIA roles (e.g. role="presentation" to <table>, role="heading" aria-level="[number]" to the <font> elements used for headings) to make your site understandable by screen readers.
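Roughly what that could look like (illustrative markup only, not taken from the actual site):

    <!-- role="presentation" strips the table semantics for screen readers -->
    <table role="presentation">
      <tr>
        <td>
          <!-- expose the font-styled text as a level-1 heading -->
          <font size="6" role="heading" aria-level="1">Welcome</font>
        </td>
      </tr>
    </table>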


Your logo gets cut off in Firefox https://i.ibb.co/kbj5vw7/image.png


I'm on Firefox, and when I right-click and open the image in a new tab I see an svg file with pale blue text colour and cut-off lettering. The source of the svg suggests that the letters are drawn paths rather than a font.

Saving the svg file and loading it into Inkscape shows a grouped object with a frame and then letter forms. The letter forms are not a font but completely drawn paths. So I think the chopping off of the descenders is a deliberate choice (which is fine if that is what's wanted).

The whole page looks narrow and long on my landfill android phone, so the content sits in the middle third of the browser, but I can pinch-zoom OK onto each 'cell' or section of text or the graphs.

Thanks to tirreno and reconnecting for posting this interesting page markup.


> Why not!

Responsive layout would be the biggest reason (mobile for one, but also a wider range of PC monitor aspect ratios these days than the 4:3 that was standard back then), probably followed by the way table layouts conflate the exact layout details with the content, versus a separation of concerns and the ease of being able to move things around.

I mean, it's a perfectly viable thing if these are not requirements and preferences that you and your system have. But it's pretty rare these days that an app or site can say "yeah, none of those matter to me the least bit".


I learned recently that this is still how a lot of email HTML gets generated.


Apparently Outlook (the actual one, not the recent pretender) still uses some ancient version of Word's HTML renderer, so there isn’t much choice.


Fun fact: until Office 2007, Outlook used IE’s engine for rendering HTML.


Oh yeah, recently I had to update a newsletter design like that, and older versions of Outlook still didn’t render it properly.


It was relatively OK to deal with when the pages were created by coders themselves.

But then Dreamweaver came out, where you basically drew the entire page in 2D and it spat out some HTML tables that stitched it all back together again. The freedom it gave our artists to draw in 2D without worrying about the output meant they went completely overboard with it, and you'd get lots of tiny little slices everywhere.

Definitely glad those days are well behind us now!


Wasn't it Fireworks that sliced the image originally? You'd then be able to open that export in Dreamweaver for additional work. I didn't do that kind of design very long. Did Dreamweaver get updated to allow slicing directly, bypassing Fireworks?


I yearn for those days. CSS was a mistake. Tables and DHTML is all one needs.


You jest, but it took forever to add a somewhat intuitive layout mechanism to CSS that allowed you to do what could be done easily with HTML tables. Vertically centering a div inside another was really hard, and very few people actually understood the techniques they were using rather than blindly copying them.

It was beyond irony that the recommended solution was to tell the browser to render your divs as a table.
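For anyone who missed that era, the trick looked roughly like this (a sketch; the class names are made up):

    <div class="outer">
      <div class="inner">Vertically centered, the pre-flexbox way</div>
    </div>

    /* the "tell the browser it's a table" part */
    .outer { display: table; width: 100%; height: 300px; }
    .inner { display: table-cell; vertical-align: middle; }

Flexbox and grid eventually made this a one-liner, but that took years.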


CSS was a mistake? JavaScript was a mistake, specifically JavaScript frameworks.


JavaScript? HTML and HTTP were the real mistakes.


HTML and HTTP? TCP was the real mistake.


"In the beginning the universe was created. This made a lot of people angry and has widely been considered as a bad move."


Gosh, there was a website where you'd submit a PSD plus payment, and they'd spit out a sliced design. Initially tables, later CSS. Lifesaver.


Y Combinator funded one such company, MarkupWand.[1] A friend is one of the co-founders.

1. https://www.ycombinator.com/companies/markupwand


And use a single-pixel invisible gif to move things around.
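Roughly like this, for anyone who never had the pleasure (spacer.gif being a 1x1 transparent image; the filename is made up):

    <!-- stretch the invisible 1x1 gif to push the next bit of content 20px down -->
    <td><img src="spacer.gif" width="1" height="20" alt=""></td>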

But was Space Jam using multiple images or just one large image with an image map for links?


The author said he had the assets and gave them to Claude. It would be obvious if he had one large image for all the planets instead of individual ones.


Oh man, Photoshop still has the slice feature and it makes the most horrendous table-based layout possible. It's beautiful.


How else can you take responsibility if you don't make it public? You can't have integrity if you hide away your faults.


Under "Programmatic Tool Calling"

> The challenge

> Traditional tool calling creates two fundamental problems as workflows become more complex:

> Context pollution from intermediate results: When Claude analyzes a 10MB log file for error patterns, the entire file enters its context window, even though Claude only needs a summary of error frequencies. When fetching customer data across multiple tables, every record accumulates in context regardless of relevance. These intermediate results consume massive token budgets and can push important information out of the context window entirely.

> Inference overhead and manual synthesis: Each tool call requires a full model inference pass. After receiving results, Claude must "eyeball" the data to extract relevant information, reason about how pieces fit together, and decide what to do next—all through natural language processing. A five tool workflow means five inference passes plus Claude parsing each result, comparing values, and synthesizing conclusions. This is both slow and error-prone.

Basically, instead of Claude trying to, e.g., process data using inference from its own context, it offloads the work to a program it specifically writes. Up until today we've seen Claude running user-written programs. This new paradigm gives it the freedom to create a program it finds suitable for the task, run it (within the confines of a sandbox), and retrieve only the result it needs.
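As a purely hypothetical illustration of the log example from the quote (this is not Anthropic's actual API or generated output, just the general idea of the throwaway program the model might write):

    # Sketch of a script an agent could generate and run in its sandbox to
    # summarize error frequencies without pulling the raw log into context.
    from collections import Counter

    counts = Counter()
    with open("app.log") as f:  # "app.log" is a made-up path
        for line in f:
            if "ERROR" in line:
                # crude error-type extraction: first token after "ERROR"
                rest = line.split("ERROR", 1)[1].split()
                counts[rest[0] if rest else "unknown"] += 1

    # Only this short summary goes back to the model, not the 10MB file.
    for error_type, n in counts.most_common(10):
        print(f"{error_type}: {n}")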


Claude Code moved to partial file reads over the summer.

Super premature optimization. It'll hallucinate which lines it needs to read, and it'll continuously miss critical context in favor of trimming tokens.

Luckily we can now hook and force the agent to read full files at least once.


I've had it happen almost every time I try to give them another shot. It presents a snippet of my code, claims there's a bug due to an unhandled edge case, and completely misses the (literally) very next line that specifically handles the edge case it mentioned.


Thanks for the reply.


> I speculate that within a few months, the communities will have settled on a single dominant one.

The solutions on the roadmap are not centralized the way GitHub is. There is a real initiative to promote federation so we would not need to rely on one entity.


I love this, and hope it works out this way. Maybe another way to frame it: In 2 years, what will the "Learn Python for Beginners" tutorials direct the user towards? Maybe there will not be a consensus, but my pattern-matching brain finds one!


Nice work. I have gone through the fairly straightforward paper.

May I ask what you've used to confirm the cache hit/miss rate? Thanks!


Thanks! I used perf to look at cache miss rates and memory bandwidth during runs. The measurements showed the pattern I expected, but I didn't do a rigorous profiling study (different cache sizes, controlled benchmarks across architectures, or proper statistical analysis).

This was for a university exam, and I ran out of time to do it properly. The cache argument makes intuitive sense (three vectors cycling vs. scanning a growing n×k matrix), and the timing data supports it, but I'd want to instrument it more carefully in the future :)
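For anyone curious, a perf invocation along these lines gives those cache numbers (the binary name here is just a placeholder):

    # report cache references vs. misses for one run
    perf stat -e cache-references,cache-misses ./kmeans_bench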


I agree with you, but I also want to ask if I understand this correctly: there was a paradigm in which we were aiming for Small Language Models to perform specific types of tasks, orchestrated by the LLM. That is what I perceived the MCP architecture came to standardize.

But here, it seems more like a diamond shape of information flow: the LLM processes the big task, then prompts are customized (not via LLM) with reference to the Skills, and then the customized prompt is fed yet again to the LLM.

Is that the case?


Right in the first paragraph.

> Needless to say, I just don’t get git.

What is there not to _get_, honestly? And why is jj so much easier to get?

The author seems to focus on how great it is to make changes to your commit history locally, and that you shouldn't worry because it's not pushed yet.

The thing is, I don't want automatic. Automatic sucks. The point of version control is that I am able to curate my changes. The guards and rails of git are what make me feel safe.

I am still failing to see why JJ is superior to git, or whatever.


Hmm, what guards and rails?

There are some conventions people follow when working with git to make it safe to use. But those aren't git's features -- they are ways to avoid confusion.


It's not that it is superior, it is completely inferior to git :) That is why you are failing to see :)


JJ rebase coupled with committable conflicts is very much superior to git.
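A rough sketch of that workflow, assuming a destination bookmark named main:

    # rebase the current change (and its descendants) onto main; any
    # conflicts are recorded in the commits instead of stopping the rebase
    jj rebase -d main
    # resolve the recorded conflicts later, whenever it's convenient
    jj resolve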


If you don't want automatic, you shouldn't use git. It does too many things automatically, like updating your branch's head whenever you commit, for example.


And if I don't want that, I can detach the HEAD. This isn't too much different. The only things that change by using branches are that you have a nice name, the commits are prevented from being GCed, and you get a default name on push.
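For example, with plain git (the branch and message names are made up):

    # work without any branch pointer moving on commit
    git switch --detach HEAD
    git commit -m "experiment"   # HEAD advances, no branch is updated
    # give the result a name (plus GC protection and a push target) later
    git branch experiment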


Don't you have a 2FA Recovery Code?


Far too many of the critical services (banks) still only offer SMS 2FA.


For most of them. It's a tool, but not a silver bullet


> They don’t prevent the attacker from pivoting a memory safety bug to remote execution.

I'm confused. Isn't this potentially preventing some classes of memory-safety bugs?


No, it’s not

