Hacker News | est's comments

svg?

It obviously requires a lot of figuring out (by better men than me) but it seems a worthwhile adventure.

The halting problem can just be figured out experimentally in production. Users might need to earn or lose privileges over time as things escalate. Admins might approve a script and allow running it without prompts or pressing the button.

Perhaps entire applications can be written this way. I much enjoyed old PHP code with discussions in threaded comments. (The cache will save us.)

Reminds me of Richard

https://stackoverflow.com/questions/184618/what-is-the-best-...


> the halting problem can just be figured out experimentally in production

That's a wild statement.

This is much better and more informative.

TIL the Rhine–Main–Danube Canal. Thanks


Well, people don't just comment to say thanks or show appreciation.

Mostly, people only comment when there's something wrong.


I really wish Bsky/Mastodon had something like RSS so I could statically host them, for both publishing and aggregating.

Doesn't bsky support RSS at profile_url/rss?

For example, this is the RSS feed for Simon Willison: https://bsky.app/profile/simonwillison.net/rss
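
For instance, fetching it from Python (assuming the endpoint stays up):

    import urllib.request

    url = "https://bsky.app/profile/simonwillison.net/rss"
    feed = urllib.request.urlopen(url).read()
    print(feed[:80])  # standard RSS XML, usable by any feed reader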


I wish I could host the RSS myself and have the bsky network accept it.

I believe CF Pages is faster.

CF Pages is built on top of Workers; you can serve static HTML assets from either of them, too.

> Virtual environments required

This has bothered me more than once when building a base Docker image. Why would I want a venv inside a Docker container running as root?


Old package managers messing up global state by default is the reason Docker exists. It's the venv for C.


Because a single docker image can run multiple programs that have mutually exclusive dependencies?

Personally, I never want a program to touch global shared libraries. Yuck.
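
A sketch of that setup in a Dockerfile (the package pins are made up): two venvs let two tools with conflicting dependencies coexist in one image.

    FROM python:3.12-slim
    # Each tool gets its own venv, so the conflicting pins can coexist.
    RUN python -m venv /opt/tool-a && /opt/tool-a/bin/pip install 'requests==2.28.0'
    RUN python -m venv /opt/tool-b && /opt/tool-b/bin/pip install 'requests==2.31.0'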


> a single docker image can run multiple programs

You absolutely can. But it's not best practice.

https://docs.docker.com/engine/containers/multi-service_cont...


God I hate docker so much. Running computers does not have to be so bloody complicated.


> Is it an argument against wrapping frameworks, or wrapping libraries?

I think English is not the OP's first language; "framework" here basically means "wrapper".


That's not how I understood it. Wrappers can typically be unwrapped; frameworks are a lot harder to circumvent.


Reminds me of something similar:

https://news.ycombinator.com/item?id=46321651

e.g. serve .svg only when the "Sec-Fetch-Dest: image" header is present. This will stop scripts.
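
A minimal sketch of that check, assuming a Flask app (the route and directory names are hypothetical):

    from flask import Flask, abort, request, send_from_directory

    app = Flask(__name__)

    @app.route("/img/<path:name>")
    def serve_image(name):
        # Browsers fetching <img src=...> send Sec-Fetch-Dest: image;
        # curl-style scripts omit the header entirely, and top-level
        # navigations send "document" instead.
        if request.headers.get("Sec-Fetch-Dest") != "image":
            abort(403)
        return send_from_directory("img", name)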


Or sending Content-Security-Policy: script-src 'none' for everything that isn’t intended to be a document. Or both.

IMO it’s too bad that suborigins never landed. It would be nice if Discord’s mintlify route could set something like Suborigin: mintlify, thus limiting the blast radius to the mintlify section.


maybe adding a dedicated cookie for that specific path?


HTTP-only cookies ought to work fine for this.
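
For instance, a path-scoped HTTP-only cookie might look like this (the cookie name and path are made up):

    Set-Cookie: docs_session=abc123; Path=/mintlify; HttpOnly; Secure; SameSite=Strict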

I imagine there’s a fair amount of complexity that would need to be worked out, mostly because the browser doesn’t know the suborigin at the time it makes a request. So Sec-Fetch-Site and all the usual CORS logic would not be able to respect suborigins unless there was a pre-flight check for the browser to learn the suborigin. But this doesn’t seem insurmountable: a server using suborigins would know that request headers are sent as if the request were aimed at the primary origin, and there could be some CORS extensions to handle the case where the originating document has a suborigin.


I wonder why dotfiles have to be on remote machines?

e.g. I type an alias, the ssh client expands it on my local machine and sends the complex command to the remote. Could this be possible?

I suppose a special shell could make it work.
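
A rough sketch of the idea in Python, assuming aliases expand to plain commands with no shell syntax (the alias table is made up; a real client might parse ~/.bashrc instead):

    import subprocess

    ALIASES = {"ll": "ls -la"}  # hypothetical local alias table

    def run_remote(host, command):
        # Expand the alias locally, then send the plain expansion over
        # ssh, so the remote side needs no dotfiles at all.
        head, _, rest = command.partition(" ")
        expanded = ALIASES.get(head, head) + (" " + rest if rest else "")
        subprocess.run(["ssh", host, expanded], check=True)

    run_remote("example.com", "ll /tmp")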


> I wonder why dotfiles have to be on remote machines?

Because the processes that use them run on the remote machines.

> I type an alias, the ssh client expands it on my local machine and sends the complex command to the remote.

This is not how SSH works. It merely takes your keystrokes and sends them to the remote machine, where bash/whatever reads and processes them.

Of course, you can have it work the way you imagine; it's just that it'd require a very special shell on your local machine, and a whole RAT client on the remote machine, which your special shell would have to be intimately aware of. E.g. TAB-completion of files would involve asking the remote machine to send the directory contents to your shell, and if your alias includes a process substitution... where should that process run?


> the processes that use them run on the remote

Yes, but does the process have to read from a filesystem dotfile, instead of some data fetched over an ssh connection?

> your alias includes a process substitution

Very valid point. How about a special shell that only provides syscalls and process substitution on the remote, runs the rest on the local client, and communicates via ssh?

I understand this will make the client "fat", but it's way more portable.


> Yes, but does the process have to read from a filesystem dotfile, instead of some data fetched over an ssh connection?

Well, no. But if you didn't write that program (e.g. bash or vim), you're stuck with its actual logic, which is "read a file from the filesystem". You can, of course, do something like mounting your local home directory onto the remote's filesystem (hopefully read-only)... But at the end of the day, there are still two separate machines, and you have to mend the divide somehow, and it'll never be completely pretty, I'm afraid.

> How about a special shell that only provides syscalls and process substitution on the remote.

Again, as I said, lots of RATs exist, and not all of them are malicious. But to make "the rest runs on the local client" work, you need to write what will essentially end up being a "purely remote-only shell": all the parts of bash that manage parsing, user interaction, and internal state tracking, but without the actual process management. Perhaps it's a good idea, actually; but untangling the mess of the bash source is not going to be easy.

The current solution of "have a completely normal, standard shell run on the remote and stretch the terminal connection to it over the network" is Good Enough for most people. Which is not surprising, given that that's the environment in which UNIX and its shell were originally implemented.


> I suppose a special shell could make it work.

Working on it! :)

Remote machines usually don’t need to know your keystrokes or handle your line editing, either. There’s a lot of latency to cut out, local customization to preserve, and protocol simplification to be had.


I wonder if there are any reverse zip bombs? e.g. a really big .zip file that takes a long time to unzip but yields only a few bytes of content.

Like bombing the CPU time instead of memory.


Trivially. Zip file headers specify where the data is. All other bytes are ignored.

That's how self-extracting archives and installers work while still being valid zip files: the extractor part is just a regular executable with a zip decompressor that decompresses the very file it lives in.

This is specific to zip files, not the deflate algorithm.
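
A quick sketch of the padding trick: zip readers locate the central directory from the end of the file, so prepended junk is simply skipped, the same way Python's zipfile tolerates self-extracting stubs.

    import io, zipfile

    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w") as z:
        z.writestr("tiny.txt", "hi")

    # 100 MB of junk up front; still a valid zip with 2 bytes of content.
    padded = io.BytesIO(b"\x00" * 100_000_000 + buf.getvalue())
    print(zipfile.ZipFile(padded).read("tiny.txt"))  # b'hi'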


There are also deflate-specific tricks you can use - just spam empty non-final blocks ad infinitum.

    import zlib
    # b"\x00\x00\x00\xff\xff": empty non-final stored block; b"\x03\x00":
    # final fixed-Huffman block with only the end-of-block symbol.
    zlib.decompress(b"\x00\x00\x00\xff\xff" * 1000 + b"\x03\x00", wbits=-15)
If you want to spin more CPU, you'd probably want to define random Huffman trees and then never use them.


I had Claude implement the random-Huffman-trees strategy and it works alright (~20 MB/s decompression speed), but a minimal Huffman tree that only encodes the end symbol works out even slower (~10 MB/s), presumably because each tree is more compact.

The minimal version boils down to:

    bytes.fromhex("04c001090000008020ffaf96") * 1000000 + b"\x03\x00"


That would be a big zip file, but would not take a long time to unzip.


Isn't that mathematically impossible?


I'm pretty sure it's mathematically guaranteed that you have to be bad at compressing something. You can't compress data to less than its entropy, so compressing totally random bytes (where entropy = size) has a high probability of not compressing at all, unless identifiable patterns appear in the data by sheer coincidence. Having established that you have incompressible data, the least bad option is to signal to the decompressor to reproduce the data verbatim, without any compression. The compressor increases the size of the data by including that signal somehow. Therefore there is always some input for a compressor that causes it to produce a larger output, even by some minuscule amount.
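
A quick demonstration of that pigeonhole argument with zlib: random bytes come back slightly larger, since the stored-block framing and stream header add overhead.

    import os, zlib

    data = os.urandom(1 << 20)
    print(len(zlib.compress(data)) - len(data))  # positive: output grew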


Why's that? I'm not really sure how DEFLATE works, but I can imagine a crappy compression scheme where "5 0" means "00000". So if you try to compress "0" you get "1 0", which is longer than the input. In fact, I bet this is true for any well-compressed format: zipping a JPEG XL image will probably yield something larger. Much larger... I don't know how you do that.

