Hacker News | ofek's comments

I'd encourage folks to read the recently-published statement [1] about the state of OpenSSL from Python's cryptography project.

[1]: https://news.ycombinator.com/item?id=46624352


We're recording a Security Cryptography & Whatever episode with them in an hour or so, if anyone's got questions they want us to ask Alex and Paul.

True facts: Paul co-created Frinkiac.


Instead of everybody switching to LibreSSL, we had the Linux Foundation reward OpenSSL's incompetence with funding.

We are still suffering from that mistake, and LibreSSL is well-maintained and easier to migrate to than it has ever been.

What the hell are we waiting for?

Is nobody at Debian, Fedora or Ubuntu able to step forward and set the direction?


How about standardizing on an API? Then switching backends can be left up to the administrator of the machine.

LibreSSL has put serious effort into making itself interchangeable with OpenSSL.

OpenSSL as the incumbent has no incentive to do the same.
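
Python's own ssl module is one example of that shape: application code targets a stable API, and the backend is whatever library the interpreter was built against. A quick, non-authoritative way to check which one a given build is linked to:

    import ssl

    # Reports the library this CPython build is linked against,
    # e.g. "OpenSSL 3.0.13 ..." or "LibreSSL 3.9.2"
    print(ssl.OPENSSL_VERSION)
    print(ssl.OPENSSL_VERSION_INFO)  # (major, minor, fix, patch, status)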


Indeed, I introduced this [0] as an option for every Python integration that runs on the Datadog Agent.

[0]: https://github.com/DataDog/integrations-core/pull/12986


The post is entirely authentic; it matches the writing style of the author from before the LLM boom and discusses work that the human author recently released. Can you pinpoint what makes you feel that way?

edit: I asked for explanation before the post was edited to expand on that. I disagree but am sympathetic to the weariness of reading content now that AI use is widespread.


Your C extension guide looks very useful and I quite like the foreword/history behind it. Have you considered updating the resource to account for free-threaded mode (which will eventually become the default) on 3.14+?
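
For anyone following along, here is a minimal sketch of how a test or build script can detect a free-threaded interpreter, assuming CPython 3.13+; note that sys._is_gil_enabled is a private API and may change:

    import sys
    import sysconfig

    # Non-zero only on free-threaded (no-GIL) CPython builds.
    free_threaded_build = bool(sysconfig.get_config_var("Py_GIL_DISABLED"))

    # Even on a free-threaded build, the GIL can be re-enabled at runtime,
    # e.g. by an extension module that doesn't declare support for it.
    if hasattr(sys, "_is_gil_enabled"):
        print("GIL currently enabled:", sys._is_gil_enabled())
    print("Free-threaded build:", free_threaded_build)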


For sure, it is on my TODO list. It takes a lot of work to explore each new Python C API, and I'll get around to it when I can.


This is awesome, thanks a lot! I'm going to introduce this in the test suites of the extension modules I maintain [0][1] and, if all goes well, eventually at work [2].

I really appreciate the support for Windows as that platform is currently underserved [3] when it comes to such memory tooling.

[0]: https://github.com/jcrist/msgspec

[1]: https://github.com/ofek/coincurve

[2]: https://github.com/DataDog/integrations-core

[3]: https://github.com/bloomberg/memray


> pip could implement parallel downloads, global caching, and metadata-only resolution tomorrow. It doesn’t, largely because backwards compatibility with fifteen years of edge cases takes precedence.

pip is simply difficult to maintain. Backward compatibility concerns surely contribute to that, but there are other factors too, like an older project having to satisfy the needs of modern times.

For example, my employer (Datadog) allowed me and two other engineers to improve various aspects of Python packaging for nearly an entire quarter. One of the items was to satisfy a few long-standing pip feature requests. I discovered that the cross-platform resolution feature I considered most important is basically incompatible [1] with the current code base. Maintainers would have to decide which path they prefer.

[1]: https://github.com/pypa/pip/issues/13111


> pip is simply difficult to maintain. Backward compatibility concerns surely contribute to that, but there are other factors too, like an older project having to satisfy the needs of modern times.

Backwards compatibility is the one thing that prevents the code in an older project from being replaced with a better approach in situ. Supporting it in place cannot be more difficult than a rewrite, except that rewrites (arguably including my own project) may hold themselves free to skip hard legacy cases, at least initially (they might no longer be relevant by the time the rest of the code is ready).

(I would be interested in hearing from you about UX designs for cross-platform resolution, though. Are you just imagining passing command-line flags that describe the desired target environment? What's the use case exactly — just making a .pylock file? It's hard to imagine cross-platform installation....)
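
To make the question concrete, the way I picture "describe the desired target environment" is roughly overriding the PEP 508 marker environment during resolution instead of using the running interpreter's. A rough sketch, assuming the packaging library and a hypothetical Windows target:

    from packaging.markers import Marker

    # Hypothetical target that is not the running interpreter;
    # keys are the standard PEP 508 environment marker names.
    target_env = {
        "sys_platform": "win32",
        "platform_system": "Windows",
        "platform_machine": "AMD64",
        "os_name": "nt",
        "python_version": "3.12",
        "python_full_version": "3.12.4",
        "implementation_name": "cpython",
    }

    marker = Marker('sys_platform == "win32" and python_version >= "3.10"')
    print(marker.evaluate(environment=target_env))  # True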


This release wouldn’t have been possible without Cary, our new co-maintainer. He picked up my unfinished workspaces branch and made it production-ready, added SBOM support to Hatchling, and landed a bunch of PRs from contributors!

My motivation took a big hit last year, in large part due to improper use of social media: I simply didn't realize that continued mass evangelism is required nowadays. This led to some of our novel features being attributed to other tools when in fact Hatch was months ahead. I'm sorry to say that this greatly discouraged me, and I let it affect maintenance. I tried to come back on several occasions but could only make incremental progress on the workspaces branch because I had to relearn the code each time. I've also had to make all recent releases from a branch based on an old commit, because many prerequisite changes had been merged that couldn't be released as-is.

Development will be much more rapid now, even faster than it used to be. We are very excited about upcoming features!


The sentiment in this thread surprises me a great deal. For me, Gemini 2.5 Pro is markedly worse than GPT-5 Thinking along every axis: hallucinations, rigidity in its self-assured correctness, and sycophancy. Claude Opus used to be marginally better, but now Claude Sonnet 4.5 is far better, although not quite on par with GPT-5 Thinking.

I frequently ask the same question side-by-side to all 3 and the only situation in which I sometimes prefer Gemini 2.5 Pro is when making lifestyle choices, like explaining item descriptions on Doordash that aren't in English.

edit: It's more of a system prompt issue but I despise the verbosity of Gemini 2.5 Pro's responses.


I've found Gemini to be much better at completing tasks and following instructions. For example, let's say I want to extract all the questions from a Word document and output them as a CSV.

If I ask ChatGPT to do this, it will do one of two things:

1) Extract the first ~10-20 questions perfectly, and then either just give up, or else hallucinate a bunch of stuff.

2) Write code that tries to use regex to extract the questions, which then fails because the questions are too free-form to be reliably matched by a regex.

If I ask Gemini to do the same thing, it will just do it and output a perfectly formed and, most importantly, complete CSV.
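
For reference, the deterministic version of that task is small. A sketch assuming python-docx is installed, the input is named questions.docx, and questions end with a question mark (exactly the kind of heuristic that breaks on free-form text):

    import csv

    from docx import Document  # python-docx

    doc = Document("questions.docx")  # hypothetical input file
    with open("questions.csv", "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["question"])
        for para in doc.paragraphs:
            text = para.text.strip()
            # Naive heuristic: free-form questions don't always end with "?",
            # which is why rigid pattern matching tends to miss entries.
            if text.endswith("?"):
                writer.writerow([text])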


For writing code, at least, this has been exactly my experience. GPT-5 is the best but slow. Sonnet 4.5 is a few notches below but significantly faster and good enough for a lot of things. I have yet to get a single useful result from Gemini.


Here's an example of Gemini 2.5 Pro hallucinating, which happens so much that I don't trust it https://gemini.google.com/share/99a1be550763


Yep, I agree. GPT-5 Thinking is by far the best reasoning model in my experience. Gemini 2.5 Pro is worse at pretty much everything.


This has been pretty much exactly my experience.


My honest belief is that they are bots. I also find 2.5 worse.


The Jellyfin metadata would certainly be a fit, but what about streaming video content, i.e. sequential reads of large files with random access?


If you have a network that matches, it should be perfectly fine.


For those who are unaware, there is another project [1] that tracks upstream and adds support for various codecs like Zstandard. Many folks (such as myself) opt to install their releases instead.

[1]: https://github.com/mcmilk/7-Zip-zstd


I prefer NanaZip[1]. It has all the features of the ZS and NSIS fork while being fully compatible with the new Windows context menus.

[1]: https://github.com/M2Team/NanaZip


Perhaps a tangent, but until now, I've only seen or used "codec" in the audio/video sense. While somewhat awkward, it seems this usage would also be correct, since it also compresses and decompresses. Video codec, but archive format.

Sometimes you see a word used a new way and wonder if you've just been wrong all these years.


The defining factor isn't compression/decompression; it's just encoding.

You'll see codec used in things like text encoding.


While technically true, the term has been largely co-opted by the A/V realm. It's pretty rare to hear outside of that context.


Rare for people who don't deal with encoding and decoding, maybe.

To be clear, the codec implements the compression (or other encoding) algorithm. So when talking about codecs, we mean the implementation. But when talking about the algorithm, we mean the encoding standard that the encoder or decoder implements.


> Rare for people who don't deal with encoding and decoding maybe.

No, rare in general.

It's a layman's co-opting, and laymen outnumber specialists in every field.


You commonly encounter the term when you want to do bytes to string conversions and vice versa in Python: https://docs.python.org/3/library/codecs.html
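
A couple of quick examples from the standard library:

    import codecs

    # Text codecs: str <-> bytes
    data = "naïve".encode("utf-8")
    print(codecs.decode(data, "utf-8"))     # naïve

    # Bytes-to-bytes codecs go through the same machinery
    print(codecs.encode(b"abc", "hex"))     # b'616263'
    print(codecs.decode(b"616263", "hex"))  # b'abc'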


Note that the official 7z build has supported zstd compression since version 24.05: https://github.com/ip7z/7zip/releases/tag/24.05


Only decompression, not compression.


There is also NanaZip, which aims to be a more modern Windows application and, I think, also incorporates the additions of the 7-Zip-zstd fork: https://github.com/M2Team/NanaZip


Maybe it's just me, but I get a weird feeling seeing the 7-Zip-zstd repo having more stars than its upstream.


GitHub is just a mirror used to post release sources and track bugs. It has only 11 commits so far.

