Hacker News | mitchellh's comments

To be clear, I think discussions on the whole as a product are pretty bad. I'm not happy having to use them, but given my experience trying different approaches across multiple "popular" projects on GH, this approach has so far been the least bad. Although I'm still sad about it.

> Yeah but a good issue tracker should be able to help you filter that stuff out.

Agreed. This highlights how inadequate GitHub's issue management system is.

(Note: I'm the creator/lead of Ghostty)


Because I don't want my default view to be "triage." If GitHub allowed default issue views (and reflected that in the issue count in the tabs as well), then maybe. But currently, it doesn't work. I've tried it at large project scale (multiple projects with more than 20K stars and millions of downloads).

Compared to that, this system has been a huge success. It has its own problems, but it's directionally better.


Can't you just set a bookmark with the filters you want?

How does a different count affect search? You can use a bookmark to change the view, and if using bookmarks is too much, OK, but that's hardly a universal reason.

(also, what does "huge success" mean as a method of organizing issues?)

Bookmark: (and if your browser supports shortcuts, it can be as easy to open as typing a single character)

https://github.com/ghostty-org/ghostty/issues?q=is%3Aissue%2...


Here is an abridged set of reasons, just because it quickly turns into a very big thing:

1. The barrier to mislabel is too low. There is no confirmation to remove labels. There is no email notification on label change. We've had "accepted" issues accidentally lose their accepted label and enter the quagmire of thousands of unconfirmed issues. It's lost. In this new approach, every issue is critical and you can't do this. You can accidentally _close_ it, but that sends an email notification. This sounds dumb, but it happens, usually due to keyboard shortcuts.

2. The psychological impact of the "open issue count" has real consequences despite being meaningless on its own. People will see a project with 1K+ issues and think "oh this is a buggy hell hole" when 950 of those issues are untriaged, unaccepted, 3rd party issues, etc.

My practical experience with #2 was Terraform ~5 years ago (when I last worked on it; I can't speak to the current state). We had something like 1,800 open issues and someone on Twitter decided to farm engagement and dunk on it, using that as an example of how broken it is. It forced me to call for a feature freeze and a full all-hands triage. We ultimately discovered there were ~5 crashing bugs, ~50 or so core bugs, ~100 bugs in providers we control, and the rest were 3rd party provider bugs (which we accepted in our issue tracker at the time) or unaccepted/undesigned features or unconfirmed bugs (no reproduction).

With the new approach, those are kept far enough away that this problem goes away completely.

3. The back-and-forth process of confirming a bug or designing and accepting a feature produces a lot of noise that is difficult to hide within an issue. You can definitely update the original post, but then there might be 100 comments below that you have to hide individually (or write tooling to hide), because ongoing status-update discussion may still be valuable.

This is also particularly relevant in today's era of AI where well written GH issues and curated comments produce excellent context for an agent to plan and execute. But, if you don't like AI you can ignore that and the point is still valid... for people!

By separating the design and the acceptance into two separate posts, it _forces_ you to rewrite the core post and shifts the discussion from design to progress. I've found it much cleaner and I'm very happy about this.

4. Flat threads don't work well for issue discussion. You even see this in traditional OSS that uses mailing lists (see LKML): they form a tree of responses! Issues are flat. It's annoying. Discussions are threaded! And that is very helpful to chase down separate chains of thought, or reproductions, or possibly unrelated issues or topics.

Once an issue is accepted, the flat threads work _fine_. I'd still prefer a tree, but it's a much smaller issue. :)

-----------

Okay, I'm going to stop there. I have more, many more, but I hope this gives you enough to empathize a bit that there are some practical issues, and this is something I've both thought about critically and tried for over a decade.

There's a handful of people in this thread who are throwing around words like "just" or "trivially" or just implying how obvious a simple solution looks without perhaps accepting that I've been triaging and working on GH issues in large open projects full-time non-stop for the last 15 years. I've tried it, I promise!

This is completely a failure of GitHub's product suite and as I noted in another comment I'm not _happy_ I have to do this. I don't think discussions are _good_. They're just the _least bad_ right now, unfortunately.


>2. The psychological impact of the "open issue count" has real consequences despite being meaningless on its own. People will see a project with 1K+ issues and think "oh this is a buggy hell hole" when 950 of those issues are untriaged, unaccepted, 3rd party issues, etc.

Fully agree with this; as a beginner in the space I get nervous when I see a project with a thousand open issues dating back to 2018.


Point #2 is very true for me. I get concerned when I look at a project and see thousands of issues. For safety, and to not waste my time, if I see that and the project is unknown to me, I just have to assume it indicates the project is not being maintained properly.

I definitely think splitting discussion and issues is a good idea for that reason alone.


I usually try to scan the issues and look at a few, to get a vibe of what the makeup of that number is. It can be really attractive to try to just compare project metrics like that, though.

Sounds like maybe the next project should be a better bug report/issue tracker

I recommend posting (or linking to) this somewhere on GitHub, maybe in the pinned issue.

Or as a discussion lol

Not to mention you can restrict who can file issues with permissions. So you have a forcing function, whereas hoping tags are correctly applied is a never ending battle.

Seems like a pretty huge cost just because you don't want to create a bookmark...

Note that this is an active discussion where we're trying to get to a point of clarity where we can promote to an issue (when it is actionable). The discussion is open and this is the system working as intended!

I want to clarify, though, that there isn't a known widespread "memory leak issue." You didn't say "widespread", but just in case anyone else takes it that way. :) There are a few challenges here:

1. The report at hand seems to affect a very limited number of users (given the lack of reports and information about them). There are lots of meme posts on X showing Ghostty in the macOS "Force Close" window using a massive amount of RAM, but that isn't directly useful because that window also reports all the RAM _child processes_ are using (e.g. if you run a command in your shell that consumes 100 GB of RAM, macOS reports it as Ghostty using 100 GB of RAM). And the window by itself also doesn't tell us what you were doing in Ghostty. It farms good engagement, though.

2. We've run Ghostty on Linux under Valgrind in a variety of configurations (the full GUI), we run all of Ghostty's unit tests under Valgrind in CI for every commit, and we've run Ghostty on macOS with the Xcode Instruments leak checker in a variety of configurations, and we haven't yet been able to find any leaks. All of these run fully clean. So, the "easy" tools can't find it.

3. Following points 1 and 2, no maintainer familiar with the codebase has ever seen leaky behavior. Some of us run a build of Ghostty, working full time in a terminal, for weeks, and memory is stable.

4. Our Discord has ~30K users, and within it, we only have one active user who periodically gets a large memory issue. They haven't been able to narrow this down to any specific reproduction and they aren't familiar enough with the codebase to debug it themselves, unfortunately. They're trying!

To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users. That's why the discussion is open and we're soliciting input. I even spent about an hour today on the latest feedback (posted earlier today) trying to use that information to narrow it down. No dice, yet.

If anyone has more info, we'd love to find this. :)


This illustrates the difficulty of maintaining a separation between bugs and discussions:

> To be clear, I 100% believe that there is some kind of leak affecting some specific configuration of users

In this case it seems you believe a bug exists, but it isn't sufficiently well-understood and actionable to graduate to the bug tracker.

But the threshold of well-understood and actionable is fuzzy and subjective. Most bugs, in my experience, start with some amount of investigative work, and are actionable in the sense that some concrete steps would further the investigation, but full understanding is not achieved until very late in the game, around the time I am prototyping a fix.

Similarly the line between bug and feature request is often unclear. If the product breaks in specific configuration X, is it a bug, or a request to add support for configuration X?

I find it easier to have a single place for issue discussion at all stages of understanding or actionability, so that we don't have to worry about distinctions like this that feel a bit arbitrary.


Is the distinction arbitrary? It sounded like issues are used for clear, completable jobs for the maintainers. A mysterious bug is not that. The other work you describe is clearly happening, so I'm not seeing a problem with this approach other than its novelty for users. But to me it looks both clearer than the usual "issue soup" on a popular open source project and more effective at using maintainer time, so next time I open-source something I'd be inclined to try it.

Some people see "bug tracker" and think "a vetted report of a problem that needs fixing", others see "bug tracker" and think "a task/todo list of stuff ready for an engineer to work on"

Both are valid, and it makes sense to be clear about what the team's view is


Agreed. Honestly, I think of those as two very different needs that should have very different systems. To me a bug tracker is about collecting user reports of problems and finding commonalities. But most work should be driven by other information.

I think the confusion of bug tracking with work tracking comes out of the bad old days where we didn't write tests and we shipped large globs of changes all at once. In that world, people spent months putting bugs in, so it makes sense they'd need a database to track them all after the release. Bugs were the majority of the work.

But I think a team with good practices that ships early and often can spend a lot more time on adding value. In which case, jamming everything into a jumped-up bug tracker is the wrong approach.


I think these are valid concerns for a project maintainer to think through for managing a chosen solution but I don't think there is a single correct solution. The "correct", or likely least bad, solution depends on the specific project and tools available.

For bug reports, always using issues for everything also requires you to decide how long an issue should stay open before it is closed out if it can't be reproduced (if you're trying to keep a clean issue list). That can lead to discussion fragmentation: new reports of the same problem start coming in, but not just anyone can manage issue states, so a new issue gets created instead.

From a practical standpoint, they have 40 pages of open discussion in the project and 6 pages of open issues, so I get where they're coming from. The GH issue tracker is less than stellar.


I have one bit that might be useful, which I learned from debugging/optimizing Emacs.

macOS' Instruments tool only checks for leaks when it can track allocations, and it is limited to a stack depth of ~256. For recursive calls or very deep stacks (Emacs), some allocations aren't tracked, and only after setting the malloc history flags [0] did I start seeing some results (and leaks).

Another place I'm investigating (for Emacs) is that the AppKit lifecycle doesn't actually align with the Emacs lifecycle, so leaks are happening on the AppKit side and have ZERO to do with the application. That problem seems to manifest mostly on high-end setups (multiple HiDPI displays with high variable refresh rates, a powerful chip, etc.).

Probably nothing you haven't investigated yet, but it is similar to the ghost (pun intended) I've been looking for.

[0]: https://developer.apple.com/library/archive/documentation/Pe...


I’ve been a very happy user for 2025, with some edge cases around the terminal not working on remote shells. I haven’t seen any memory leaks, but wanted to say I appreciate this detailed response.

In my experience, the remote shell weirdness is usually because the remote shell doesn't recognise Ghostty's TERM=xterm-ghostty value. Fixed by either copying over a terminfo entry that includes it, or setting TERM=xterm-256color before ssh'ing: https://ghostty.org/docs/help/terminfo

The terminfo database is one of those thankless xkcd dependencies. In this case, it's been thanklessly maintained since forever by Thomas Dickey.

https://xkcd.com/2347/


I spotted Ghostty using 20GB+ memory a few days ago on macOS (according to Activity Monitor). I went through all my tmux sessions, killed everything, and it was still at 20GB+, so I restarted Ghostty. If I see it happen again, I'll take some notes.

Complete speculation, but does tmux use the xterm alternative screen buffer? I can see a small bug in that causing huge memory leaks, but not showing up in testing.

On some level, that's impressive. Any idea of how long Ghostty was alive? Maybe this a new feature where Ghostty stores LLM model parameters in the terminal scrollback history? /s

Not sure which parts of this are sarcastic or not, but it was probably running for a few weeks. High variance on that estimate though. I was running 5+ Claude Code instances and a similar number of vim instances.

Is it possible for Ghostty to figure out how much memory its child processes (or tabs) are using? If so maybe it would help to surface this number on or near the tab itself, similar to how Chrome started doing this if you hover over a tab. It seems like many of these stem from people misinterpreting the memory number in Activity Monitor, and maybe having memory numbers on the tabs would help avoid that.

Regarding point 4: why should the user have to be familiar with the codebase to investigate it? Shouldn't they just create a memory dump and send it to the dev team?

They don't have to be, but without a reproduction for maintainers, it's up to the end users to provide enough information for us to track it down, and this user hasn't been able to yet.

The point is to reduce reported issues from non-maintainers to as close to 0 as possible. This does that.

I also see Ghostty consume a massive amount of memory and periodically need to restart it.

You might want to ask your user who can reproduce it to try heaptrack. It tracks allocations, whether they leak or not. If that doesn't find anything, check the few other ways that a program can acquire memory, such as mmap() calls and whatever else the platform documentation tells you.

Memory usage is not really difficult to debug usually, tbh.


Valgrind won’t show you leaks where you (or a GC) still holds a reference. This could mean you’re holding on to large chunks of memory that are still referenced in a closure or something. I don’t know what language or anything about your project, but if you’re using a GC language, make sure you disable GC when running with valgrind (a common mistake). You’ll see a ton of false positives that the GC would normally clean up for you, but some of those won’t be false positives.
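
A tiny, hypothetical C illustration of that point (nothing to do with any particular codebase): a cache that only ever grows is a leak in practice, but because it stays referenced, Valgrind's memcheck files it under "still reachable" rather than "definitely lost", and with plain --leak-check=full you won't see the individual stacks unless you also pass something like --show-leak-kinds=all:

    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical ever-growing cache: every entry stays referenced through
       the global pointer, so it is never reported as a definite leak. */
    static char **cache = NULL;
    static size_t cache_len = 0;

    static void cache_add(const char *line) {
        char **grown = realloc(cache, (cache_len + 1) * sizeof *cache);
        if (!grown) abort();
        cache = grown;
        cache[cache_len] = strdup(line);
        if (!cache[cache_len]) abort();
        cache_len++;   /* never freed, but never unreferenced either */
    }

    int main(void) {
        for (int i = 0; i < 100000; i++)
            cache_add("some scrollback line");
        /* The process now holds memory it will never use again, yet it only
           shows up in Valgrind's "still reachable" bucket at exit. */
        return 0;
    }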

Ghostty is written in Zig.

It will, but they will be abbreviated (only total amount shown, not the individual stack traces) unless you ask to show them in full.

Snide and condescending (or at best: dismissive) comments like this help no one and can at the extremes stereotype an entire group in a bad light.

I think the more constructive response is to discuss why techniques that are common in some industries, such as gaming or embedded systems, have had difficulty being adopted more broadly, and to celebrate that this idea, which is good in many contexts, is now spreading! Or to share some other techniques that other industries might be missing out on (and again, to ask critically why they aren't present).

Ideas in general require marketing to spread; that's literally what marketing is in the positive sense (in the negative it's all sorts of slime!). If a coding standard used by a company is the marketing this idea needs to live and grow, then hell yeah, "tiger style" it is! Such is humanity.


> had difficulty being adopted more broadly

Most applications don't need to bother the user with things like how much memory they think will be needed upfront. They just allocate what they need, when they need it. Most applications today are probably servers that change all the time. You would not know upfront how much memory you'd need, as that would keep changing on every release! Static allocation may work in a few domains, but it certainly doesn't work in most.


It's best to think of it as an entire spectrum from "statically allocate everything with compile time parameters" to "directly call the system allocator for every new bit of memory". It's just a helpful way to separate the concerns of memory allocation from memory usage.

What this article is talking about isn't all the way at the static end (compile-time allocation), but has the additional freedom that you can decide allocation sizes based on runtime parameters. That frees the rest of the application from needing to worry about managing memory allocations.

We can imagine taking another step and only allocating at the start of a connection/request, so the rest of the server code doesn't need to deal with managing memory everywhere. This is more popularly known as region allocation. If you've ever worked with Apache or Nginx, this is what they do ("pools").

So on and so forth down into the leaf functions of your application. Your allocator is already doing this internally to help you out, but it doesn't have any knowledge of what your code looks like to optimize its patterns. Your application's performance (and maintainability) will usually benefit from doing it yourself, as much as you reasonably can.
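
To make the region idea concrete, here is a minimal C sketch of a per-request arena (a bump allocator). The names and the fixed 1 MiB budget are illustrative, not taken from Apache or Nginx:

    #include <stddef.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    /* One block is grabbed up front; request handlers only bump a pointer. */
    typedef struct {
        uint8_t *base;
        size_t   cap;
        size_t   used;
    } arena_t;

    static int arena_init(arena_t *a, size_t cap) {
        a->base = malloc(cap);   /* the only "real" allocation */
        a->cap  = cap;
        a->used = 0;
        return a->base != NULL;
    }

    static void *arena_alloc(arena_t *a, size_t n) {
        size_t aligned = (a->used + 15) & ~(size_t)15;   /* 16-byte align */
        if (aligned + n > a->cap) return NULL;           /* over budget: fail loudly */
        a->used = aligned + n;
        return a->base + aligned;
    }

    /* End of request: everything allocated from the arena dies at once. */
    static void arena_reset(arena_t *a) { a->used = 0; }

    static void handle_request(arena_t *a, const char *body) {
        char *copy = arena_alloc(a, strlen(body) + 1);
        if (copy) strcpy(copy, body);
        /* ... parse, build the response, all from the arena ... */
        arena_reset(a);
    }

    int main(void) {
        arena_t a;
        if (!arena_init(&a, 1 << 20)) return 1;   /* budget decided up front */
        handle_request(&a, "GET / HTTP/1.1");
        free(a.base);                             /* only at shutdown */
        return 0;
    }

The request-handling code never calls malloc or free on its own; the whole region is reclaimed in a single reset.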


you don’t know up front how much memory you’ll need, but you know up front how much memory you have.

That's just saying "we push our memory problems up the stack so our clients / users need to deal with that". The reason this works is because human users in particular have become accustomed to software being buggy and failing often.

What?? It's exactly the opposite of that! Memory allocation on demand frees users from having to worry about configuring memory settings, which static allocation requires unless you overallocate, which is problematic if lots of applications start doing it. I absolutely don't like the argument that memory is nearly free! Most laptops still come with around 8GB of RAM, which a browser by itself can consume already … there's really not a lot left when you've also got Docker, compilers, a music app, email and so on running. I have 64GB and still have to close apps sometimes because software nowadays does stupid things like overallocating. Don't do that.

Because garbage-collected languages are easier to teach and to use. So the low-level, low-resource or high-performance stuff is left to a handful of specialists - or "insects" according to Heinlein. Speaking of old things, this reminds me of one of Asimov's short stories, where someone who rediscovers mental calculus is believed to be a genius.

https://ia800806.us.archive.org/20/items/TheFeelingOfPower/T...

Not even calculus, just basic arithmetic operations.


Marketing is the thing that makes uninformed people adopt things they don't need.

I don't think we need marketing, but rather education, which is the actually useful way to spread information.

If you think marketing is the way knowledge spreads, you'll end up with millions of dollars in your pocket and the belief that you have money because you're doing good, while the truth is that you have millions because you exploited others.


You complain about the very thing that led to the experimentation and writing of this article, which is how one gets a real education:

"One of those techniques is static memory allocation during initialization. The idea here is that all memory is requested and allocated from the OS at startup, and held until termination. I first heard about this while learning about TigerBeetle, and they reference it explicitly in their development style guide dubbed "TigerStyle"."

Anyways, TigerStyle is inspired by NASA's Power of Ten whitepaper on Rules For Developing Safety Critical Code:

https://github.com/tigerbeetle/tigerbeetle/blob/ac75926f8868...

You might be impressed by that fact or the original Power of Ten paper but if so, it's only because NASA's marketing taught you to be.


If you think that publishing a paper is marketing, then we have quite different views.

Incidentally, I was aware of the NASA paper before TigerBeetle was a thing. Not because someone marketed their work, but because I did my own research through published papers.


marketing is how ideas spread. And ideas that spread are those that win.

That's why AI-sloppy software would go viral and make loads of money while properly engineered ones die off.

When people need knowledge, they know where to find it. They don't need marketing for that.


I think that's a very narrow view of our society dynamics.

Lord moral tuning fork, listen to this: the more marketing you throw at people, the more you decrease the SNR, until you hear nothing but "Welcome to Costco, I love you" advertisements and you live in the reality of the movie that phrase is from.

It was a common practice in 8- and 16-bit home computing.

For a good example of this sort of pattern in the real world, take a look at the Zig compiler source code. I'm sure others might do it but Zig definitely does. I have a now very outdated series on some of the Zig internals: https://mitchellh.com/zig/parser And Andrew's old DoD talk is very good and relevant to this: https://vimeo.com/649009599

More generally, I believe it's fair to call this a form of handle-based design: https://en.wikipedia.org/wiki/Handle_(computing) Handles are EXTREMELY useful for a variety of reasons and imo woefully underused above the lowest system level.
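
For anyone who hasn't seen the pattern, here is a hypothetical C sketch (indices plus a generation counter; not from any real codebase) of what a handle-based pool looks like:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define POOL_CAP 256

    /* A handle is plain data, not a pointer: an index plus a generation that
       detects stale handles after a slot is reused. */
    typedef struct { uint32_t index; uint32_t gen; } handle_t;

    typedef struct { float x, y; } entity_t;

    typedef struct {
        entity_t items[POOL_CAP];
        uint32_t gens[POOL_CAP];
        bool     alive[POOL_CAP];
    } pool_t;

    static handle_t pool_create(pool_t *p) {
        for (uint32_t i = 0; i < POOL_CAP; i++) {
            if (!p->alive[i]) {
                p->alive[i] = true;
                return (handle_t){ .index = i, .gen = p->gens[i] };
            }
        }
        return (handle_t){ .index = POOL_CAP, .gen = 0 };   /* pool exhausted */
    }

    /* "Dereferencing" needs the owning pool, not just the handle. */
    static entity_t *pool_get(pool_t *p, handle_t h) {
        if (h.index >= POOL_CAP) return NULL;
        if (!p->alive[h.index] || p->gens[h.index] != h.gen) return NULL;  /* stale */
        return &p->items[h.index];
    }

    static void pool_destroy(pool_t *p, handle_t h) {
        if (h.index < POOL_CAP && p->gens[h.index] == h.gen) {
            p->alive[h.index] = false;
            p->gens[h.index]++;   /* older handles are now invalid */
        }
    }

    int main(void) {
        static pool_t pool;   /* zero-initialized, fixed size up front */
        handle_t h = pool_create(&pool);
        entity_t *e = pool_get(&pool, h);
        if (e) { e->x = 1.0f; e->y = 2.0f; }
        pool_destroy(&pool, h);
        printf("stale lookup: %p\n", (void *)pool_get(&pool, h));   /* NULL */
        return 0;
    }

Compared to a raw pointer, a stale handle fails a lookup instead of becoming a use-after-free, and the indices survive the pool's storage moving or being serialized.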


My hypothesis is that handles are underused because programming languages make it very easy to dereference a pointer (you just need the pointer) whereas "dereferencing" a handle requires also having the lookup table in hand at the same time, and that little bit of extra friction is too much for most people. It's not that pointers don't require extra machinery to be dereferenced, it's just that that machinery (virtual memory) is managed by the operating system, and so it's invisible in the language.

My current research is about how to make handles just as convenient to use as pointers are, via a form of context: like a souped-up version of context in Odin or Jai if one is familiar with those, or like a souped-up version of coeffects if one has a more academic background.
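
As a crude illustration of that convenience gap (this is not the research itself, just a commonly used workaround, and the names are hypothetical): an implicit "current pool", e.g. a thread-local, lets call sites pass bare handles around almost like pointers:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct { float x, y; } item_t;
    typedef struct { item_t items[64]; } pool_t;
    typedef uint32_t handle_t;   /* just an index into the current pool */

    /* Poor man's "context": an implicit pool consulted by get(), so functions
       don't need the lookup table threaded through every signature. */
    static _Thread_local pool_t *current_pool = NULL;

    static item_t *get(handle_t h) {
        return (current_pool && h < 64) ? &current_pool->items[h] : NULL;
    }

    static void move_right(handle_t h) {   /* no pool parameter in sight */
        item_t *it = get(h);
        if (it) it->x += 1.0f;
    }

    int main(void) {
        pool_t pool = {0};
        current_pool = &pool;   /* establish the context once */
        move_right(3);
        printf("%g\n", pool.items[3].x);
        return 0;
    }

My understanding is that Odin's and Jai's `context` make this implicit parameter a scoped, first-class thing rather than a global, which is roughly the convenience being chased, done safely at the language level.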


I think that it's a generic-programming problem: pointers are easier because the type of the pointee is easy to get (a deref), and so is its location (memory). But with index-based handles into containers, you can no longer say that, given a handle `H` (type H = u32), I can use it to get a type `T`. And not only that, you've also introduced the notion of "where": even if for each type `T` there exists a unique handle type `H`, you don't know which container instance that handle belongs to. What you need is a unique handle type per container instance. So "Handle of Pool<T>" != "Handle of Pool<T>" unless the Pool is bound to the same variable.

As far as I know no language allows expressing that kind of thing.


I think actually Scala does exactly this style of inferring the container instance from its type: https://docs.scala-lang.org/scala3/book/ca-context-parameter...

But from what I understand (being a nonexpert on Scala), this scheme actually causes a lot of problems. I think I've even heard that it adds more undecidability to the type system? So I'm exploring ways of managing context that don't depend on inferring backward from the type.


> What you need is a unique handle type per container instance.

You can do this with path-dependent types in Scala, or more verbosely with modules in OCaml. The hard part is keeping the container name in scope wherever these handle types are used: many type definitions will need to reference the container handle types. I'm currently trying to structure code this way in my pet compiler written in OCaml.


Great summary and I think your argument is sound.


Non-AMD, but Metal actually has a [relatively] excellent debugger and general dev tooling. It's why I prefer to do all my GPU work Metal-first and then adapt/port to other systems after that: https://developer.apple.com/documentation/Xcode/Metal-debugg...

I'm not like a AAA game developer or anything so I don't know how it holds up in intense 3D environments, but for my use cases it's been absolutely amazing. To the point where I recommend people who are dabbling in GPU work grab a Mac (Apple Silicon often required) since it's such a better learning and experimentation environment.

I'm sure it's linked somewhere there, but in addition to traditional debugging, you can actually emit formatted log strings from your shaders and they show up interleaved with your app logs. Absolutely bonkers.

The app I develop is GPU-powered on both Metal and OpenGL systems, and I haven't been able to find anything that comes near the quality of Metal's tooling in the OpenGL world. A lot of stuff people claim is equivalent, but as someone who has actively used both, I strongly feel it doesn't hold a candle to what Apple has done.


My initiation into shaders was porting some graphics code from OpenGL on Windows to PS5 and Xbox, and (for your NDA and devkit fees) they give you some very nice debuggers on both platforms.

But yes, when you're stumbling around a black screen, tooling is everything. Porting bits of shader code between syntaxes is the easy bit.

Can you get better tooling on Windows if you stick to DirectX rather than OpenGL?


> Can you get better tooling on Windows if you stick to DirectX rather than OpenGL?

My app doesn't currently support Windows. My plan was to use the full DirectX suite when I get there and go straight to D3D and friends. I have no experience at all on Windows, so I'd love it if someone who knows both macOS and Windows could compare GPU debugging!


Windows has PIX for Windows; PIX has been the name of the GPU debugging tooling since the Xbox 360. The Windows version is similar, but it relies on debug layers that need to be GPU-specific, which is usually handled automatically. Because of that it's not as deep as the console version, but it lets you get by. Most people use RenderDoc on supported platforms though (Linux and Windows). It supports most APIs you can find on these platforms.


PIX predates the Xbox.


Yes, Pix.

https://devblogs.microsoft.com/pix/

This is yet another problem with Khronos APIs: they expect each vendor to come up with a debugger, and some do, some don't.

At least nowadays there is RenderDoc.

On the Web, after a decade, it is still pixel debugging, or trying to reproduce the bug on a native API, because why bother with such devtools.


On web there is SpectorJS https://spector.babylonjs.com/

It offers the basics, but at least it works across devices. You can also trigger the traces from code and save the output, then load it in the extension. Very useful for debugging mobile.

You can just about run Chrome through Nvidia's Nsight (of course you're not debugging WebGL, but whatever it's translated to on the platform), although I recently tried again and it seems to fail...

These were the command-line args I got Nsight to pass to Chrome to make it work:

    --disable-gpu-sandbox --disable-gpu-watchdog --enable-dawn-features=emit_hlsl_debug_symbols,disable_symbol_renaming --no-sandbox --disable-direct-composition --use-angle=vulkan <URL>

But yeah, I really wish the tooling was better, especially for performance tracing; currently it's just disabling and enabling things and guessing...


SpectorJS is kind of abandoned nowadays, it hardly has changed and doesn't support WebGPU.

Running the whole browser rendering stack through it is a masochistic exercise; I'd rather re-code the algorithm in native code, or go back to pixel debugging.

I would say the state of bad tooling, and how browsers blacklist users' systems, is a big reason studios would rather try streaming instead of rendering in the browser.


Yeah... I tried to extend Spector's UI; the code base is "interesting", and simple changes seemed way harder than they should have been. A shame though, as it's really the only tool like it for the web.

My favourite though is Safari: the graphics driver crashes all the time, and the dev tools normally crash as well, so you have zero idea what is happening.

And I've found that when the graphics crash, the whole browser's graphics state becomes unreliable until you force-close Safari and reopen it.


It's a full featured and beautifully designed experience, and when it works it's amazing. However, it regularly freezes or hangs for me, and I've lost count of the number of times I've had to 'force quit' Xcode or it's just outright crashed. Also, for anything non-trivial it often refuses to profile and I have to try to write a minimal repro to get it to capture anything.

I am writing compute shaders though, where one command buffer can run for seconds repeatedly processing over a 1GB buffer, and it seems the tools are heavily geared towards graphics work where the workload per frame is much lighter. (With all the AI focus, hopefully they'll start addressing this use-case more.)


> However, it regularly freezes or hangs for me, and I've lost count of the number of times I've had to 'force quit' Xcode or it's just outright crashed.

This has been my experience too. It isn't often enough to diminish its value for me since I have basically no comparable options on other platforms, but it definitely has some sharp (crashy!) edges.


I didn't even notice who I was replying to at first - so let me start by saying thank you for Ghostty. I spend a great deal of my day in it, and it's a beautifully put together piece of software. I appreciate the work you do and admire your attitude to software and life in general. Enjoy your windfall, ignore the haters, and my best wishes to you and your family with the upcoming addition.

The project I'm mostly working on uses the wgpu crate, https://github.com/gfx-rs/wgpu, which may be of interest if writing cross-platform GPU code. (Though obviously if using Rust, not Zig). With it my project easily runs on Windows (via DX12), Linux (via Vulkan), macOS (via Metal), and directly on the web via Wasm/WebGPU. It is a "lowest common denominator", but good enough for most use-cases.

That said, even with simple shaders I had to implement some workarounds for Xcode issues (e.g. https://github.com/gfx-rs/wgpu/issues/8111). But it's still vastly preferable to other debugging approaches and has been indispensable in tracking down a few bugs.


Yeah, Xcode's Metal debugger is fantastic, and Metal itself is imo a really nice API :]. For whatever reason it clicked much better for me compared to OpenGL.

Have you tried RenderDoc for the OpenGL side? Afaik that's the equivalent of Xcode's debugger for Vulkan/OpenGL.


> To the point where I recommend people who are dabbling in GPU work grab a Mac (Apple Silicon often required) since it's such a better learning and experimentation environment.

I don't know, buying a ridiculously overpriced computer with the least relevant OS on it just to debug graphics code written in an API not usable anywhere else doesn't seem like a good idea to me.

For anyone who seriously does want to get into this stuff, just go with Windows (or Linux if you're tired of what Microsoft is turning Windows into, you can still write Win32 applications and just use VK for your rendering, or even DX12 but have it be translated, but then you have to debug VK code while using DX12), learn DX12 or Vulkan, use RenderDoc to help you out. It's not nearly as difficult as people make it seem.

If you've got time you can learn OpenGL (4.6) with DSA to get a bit of perspective why people might feel the lower-level APIs are tedious, but if you just want to get into graphics programming just learn DX12/VK and move on. It's a lower-level endeavor and that'll help you out in the long run anyway since you've got more control, better validation, and the drivers have less of a say in how things happen (trust me, you don't want the driver vendors to decide how things happen, especially Intel).

P.S.: I like Metal as an API; I think it's the closest any modern API got to OpenGL while still being acceptable in other ways (I think it has pretty meh API validation, though). The problem is really that they never exported the API so it's useless on the actual relevant platforms for games and real interactive graphics experiences.


Is your code easy to transfer to other environments? The Apple vendor lock-in is not a great place for development if the end product runs on servers, unlike AMD GPUs, which can be found on the backend. The same goes for games: most gamers have either an AMD or an Nvidia graphics card, since playing on Mac is still rare, so the priority should be supporting those platforms.

It's probably awesome to use Metal and everything, but the vendor lock-in sounds like an issue.


It has been easy. All modern GPU APIs are basically the same now unless you're relying on the most cutting edge features. I've found that converting between MSL, OpenGL (4.3+), and WebGPU to be trivial. Also, LLMs are pretty good at it on first pass.


Thats pretty cool then!


Same, Metal is a clean and modern API.

Is anyone here doing Metal compute shaders on iPad? Any tips?


> (I think as they were gearing up to be a more attractive target for an exit).

A common conspiracy theory, but not true.


Source: the guy the company was named after


Where did you read that?


Then why move away from open source?


Yeah how would you know?

j/k Love ghostty!


> while that shown in blue is the stapled notarisation ticket (optional)

This is correct, but practically speaking, non-notarized apps are terrible enough for users that this isn't really optional, and you're going to pay your $99/yr Apple tax.

(This only applies to distributed software; if you are only building and running apps for your own personal use, it's not bad because macOS lets you do that without the scary warnings.)

For users who aren't aware of notarization, your app looks straight up broken. See screenshots in the Apple support site here: https://support.apple.com/en-us/102445

For users who are aware, you used to be able to right click and "run" apps and nowadays you need to actually go all the way into system settings to allow it: https://developer.apple.com/news/?id=saqachfa

I'm generally a fan of what Apple does for security but I think notarization specifically for apps outside the App Store has been a net negative for all parties involved. I'd love to hear a refutation to that because I've tried to find concrete evidence that notarization has helped prevent real issues and haven't been able to yet.


I thought the macOS notarization process was annoying until we started shipping Windows releases.

It’s basically pay to play to get in the good graces of Windows Defender.

I think all-in it was over $1k upfront to get the various certs. The cert company has to do a pretty invasive verification process for both you and your company.

Then — you are required to use a hardware token to sign the releases. This effectively means we have one team member who can publish a release currently.

The cert company can lock your key as well for arbitrary reasons which prevents you from being able to make a release! Scary if the release you’re putting out is a security patch.

I’ll take the macOS ecosystem any day of the week.


The situation on Windows got remarkably better and cheaper recently-ish with the addition of Azure code signing. Instead of hundreds or thousands for a cert it’s $10/month, if you meet the requirements (I think the business must have existed for some number of years first, and some other things).

If you go this route I highly recommend this article, because navigating through Azure to actually set it up is like getting through a maze. https://melatonin.dev/blog/code-signing-on-windows-with-azur...


Thanks for the link. I see it's only available to basically the US, Canada, and the EU though.


That's not easier and cheaper than before. That's how it's always been only now you can buy the cert through Azure.

For an individual the Apple code signing process is a lot easier and more accessible since I couldn't buy a code signing certificate for Windows without being registered as a business.


> That's how it's always been only now you can buy the cert through Azure.

Where can you get an EV cert for $120/year? Last time I checked, all the places were more expensive and then you also had to deal with a hardware token.

Lest we talk past each other: it's true that it used to be sufficient to buy a non-EV cert for around the same money, where it didn't require a hardware token, and that was good enough... but they changed the rules in 2023.


> it’s $10/month

So $120 a year but no it's only Apple with a "tAx"


Millions of Windows power users are accustomed to bypassing SmartScreen.

A macOS app distributed without a trusted signature will reach a far smaller audience, even of the proportionately smaller macOS user base, and that's largely due to deliberate design decisions by Apple in recent releases.


As you said, you need to have a proper legal entity for about 2 years before this becomes an option.

My low-stakes conspiracy theory is that MS is deliberately making this process awful to encourage submission of apps to the Microsoft Store since you only have to pay a one-time $100 fee there for code-signing. The downside is of course that you can only distribute via the MS store.


The EV cert system is truly terrible on Windows. Worst of all, getting an EV cert isn’t even enough to remove the scary warnings popping up for users! For that you still need to convince windows defender that you’re not a bad actor by getting installs on a large number of devices, which of course is a chicken-and-egg problem for software with a small number of users.

At least paying your dues to Apple guarantees a smooth user experience.


No, this information is wrong (unless it’s changed in the last 7 years). EV code signing certs are instantly trusted by Windows Defender.

Source: We tried a non-EV code signing certificate for our product used by only dozens of users at the time, never stopped showing scary warnings. When we got an EV, no more issues.

In case it makes a difference, we use DigiCert.


Not true for us. We sign with an EV cert (the more expensive one) and my CEO (the only one left who uses Windows) had this very problem. Apparently the first time a newly signed binary is run, it can take up to 15 minutes for Defender to allow it. The first time I saw this, it was really annoying and confusing.


Interesting.

I regularly download our signed installer, often within a minute of it being made available, and have never noticed a delay.

Maybe it's only the very first time Windows Defender sees a particular org on a cert.

I renewed our cert literally on Friday, tested by making a new build of our installer and could instantly install it fine.

Are you sure there was no security software other than the Windows default on your boss's machine?


They did change it, I think after some debacle with Nvidia pushing an update. They seem to want devs to submit their files via their portal now to get rid of the screen: https://www.microsoft.com/en-us/wdsi/filesubmission


I've never submitted our installers to there (or anywhere). I'm often the very first to install new builds (particularly our nightlies) and never had a delay or anything.


Did you install it on the same machine or a different one?

I was always able to install immediately on the same machine.


Wow. I haven't written software for Windows in over a decade. I always thought Apple was alone in its invasive treatment of developers on their platform. Windows used to be "just post the exe on your web site, and you're good to go." I guess Microsoft has finally managed to aggressively insert themselves into the distribution process there, too. Sad to see.


> Windows used to be "just post the exe on your web site, and you're good to go."

That's also one of the main reasons why Windows was such a malware-ridden hellspace. Microsoft went the Apple route to security and it worked out.

At least Microsoft doesn't require you to dismiss the popup, open the system settings, click the "run anyway" button, and enter a password to run an unsigned executable. Just clicking "more details -> run anyway" still exists on the SmartScreen popup, even if they've hidden it well.

Despite Microsoft's best attempts, macOS still beats Windows when it comes to terribleness for running an executable.


I just wish these companies could solve the malware problem in a way that doesn't always involve inserting themselves as gatekeepers over what the user runs or doesn't run on the user's computer. I don't want any kind of ongoing relationship with my OS vendor once I buy their product, let alone have them decide for me what I can and cannot run.


I get that if you're distributing software to the wider public, you have to make sure these scary alerts don't pop up regardless of platform. But as a savvy user, I think the situation is still better on Windows. As far as I've seen there's still always a (small) link in these popups (I think it's SmartScreen?) to run anyway - no need to dig into settings before even trying to run it.


Are you sure? I had not used Windows for years and assumed "Run Anyway" would work. Last month, I tested running an unsigned (self-signed) .MSIX on a different Windows machine. It's a 9-step process to get through the warnings: https://www.advancedinstaller.com/install-test-certificate-f...

Perhaps .exe is easier, but I wouldn't subject the wider public (or even power users) to that.

So yeah, Azure Trusted Signing or EV certificate is the way to go on Windows.


I solved it by putting a "How to install.rtf" file alongside the program.

Another alternative would be to bundle this app: https://github.com/alienator88/Sentinel

It allows you to easily unlock apps via drag-and-drop.


What is the subset of users who are going to investigate and read an rtf file but don’t know how to approve an application via system settings (or google to do so)?


I would say quite a lot of users, because even the previous simple method of right-clicking wasn't that well known, even by power users. A lot of them just selected "allow applications from anyone" in the settings (most likely just temporarily).

In one application I also offered an alternative in the form of a web app, in case they were not comfortable with any of the options.

Also it's presented in a .dmg file where you have two icons, the app and the "How to install". I would say that's quite inviting for investigation :)


You certainly don't need a hardware token; you can store it in any FIPS 140 Level 2+ store. This includes stuff like Azure KeyVault and AWS KMS.

Azure Trusted Signing is 100% the best choice, but if for whatever reason you cannot use it, you can still use your own cloud store and hook in the signing tools. I wrote an article on using AWS KMS earlier this year: https://moonbase.sh/articles/signing-windows-binaries-using-...

TLDR: Doing this yourself requires a ~$400-500/year EV cert and minuscule cloud costs.


Can confirm this, we use Azure KeyVault and are able to have Azure Pipelines use it to sign our release builds.

We’re (for the moment) a South African entity, so can’t use Azure Trusted Signing, but DigiCert has no issue with us using Azure KeyVault for our EV code signing certificate.

I had ours renewed just this week as it happens. It cost something like USD 840 before tax; we don't have a choice though, and in the grand scheme of things it's not a huge expense for a company.


I have been trying to get people to realize that this is the same or worse for like a year now.

It’s unfortunate it’s come to this but Apple is hardly the worst of the two now.


That's right, there's a similar comparison between the iOS App Store and Android Play Store. Although the annual $99 fee is indeed expensive, the Play Store requires every app to find 12 users for 14 days of internal testing before submission for review, which is utterly incomprehensible, not to mention the constant warnings about inactive accounts potentially being disabled.


In my case, as a developer of a programming language that can compile to all supported platforms from any platform, the signing (and notarization) is simply incompatible with the process.

Not only is such signing all about control (the Epic case is a great example of misuse and a reminder that anyone can be blocked by Apple), it is also anti-competitive toward other programming languages.

I treat each platform as open only when it allows running unsigned binaries in a reasonable way (or self-signed, though that already has some baggage of needing to maintain the key). When it doesn't, I simply don't support that platform.

Some closed platforms (iOS and Android[1]) can be still supported pretty well using PWAs because the apps are fullscreen and self-contained unlike the desktop.

[1] depending on if Google will provide a reasonable way to run self-signed apps, but the trust that it will remain open in the future is already severely damaged


The signing is definitely about control, as is all things with Apple, but there are security benefits. It's a pretty standard flow for dev tools to ad-hoc (self) sign binaries on macOS (either shelling out to codesign, or using a cross-platform tool like https://github.com/indygreg/apple-platform-rs). Nix handles that for me, for example.

It makes it easy for tools like Santa or Little Snitch to identify binaries, and gives the kernel/userspace a common language to talk about process identity. You can configure similar for Linux: https://www.redhat.com/en/blog/how-use-linux-kernels-integri...

But Apple's system is centralized. It would be nice if you could add your own root keys! They stay pretty close to standard X.509.


I’m only aware of two times that Apple has revoked certificates for apps distributed outside of the App Store. One was for Facebook’s Research App. The other was for Google’s Screenwise Meter. Both apps were basically spyware for young teens.

In each case, Apple revoked the enterprise certificate for the company, which caused a lot of internal fallout beyond just the offending app, because internal tools were distributed the same way.

Something may have changed, though, because I see Screenwise Meter listed on the App Store for iOS.

https://www.wired.com/story/facebook-research-app-root-certi...

https://www.eff.org/deeplinks/2019/02/google-screenwise-unwi...


The article is about macOS apps, but you're talking about iOS apps.

Apple revokes macOS Developer ID code signing certificates all the time, mostly for malware, but occasionally for goodware, e.g., Charlie Monroe and HP printer drivers.

Also, infamously, Apple revoked the macOS Developer ID cert of Epic Games, as punishment for their iOS App Store dispute.


The problem is not that it’s $99/year. The problem is that it requires strong ID, and if you are doing it as a company (ie if you don’t want Apple to publicize your ID name to everyone who uses your app) then you have to go through an invasive company verification process that you can fail for opaque reasons unrelated to fraud or anything bad.

The system sucks. I’d love to be able to sign my legitimate apps with my legitimate company, but I don’t wish to put the name on my passport onto the screens of millions of people, and my company (around and operating for 20-ish years now) doesn’t pass the Apple verification for some reason.

I also can’t use auto-enroll (DEP) MDM for this reason.


I think the lack of any human to talk to is the worst part of modern tech. Especially for business, where your income may depend on it. It's beyond cruel to prevent people from operating with no explanation of why and no way to find out how to fix it.


Well, what can I say except that the 80s, with their little independent app vendors shipping floppy disks in little baggies, are long behind us. Computers are now commonplace enough, with all the attendant dangers, that platform vendors are demanding a bit of accountability if you want to ship for their platforms, and unfortunately accountability means money and paperwork. The platform vendors are well within their rights to do so. They have a right to protect their reputations, and when malicious or buggy software appears on their platform, their reputation suffers. Half or more of the blue screens on Windows in the late 90s and early 2000s for instance, were due to buggy third-party drivers, yet Microsoft caught the blame for Windows crashing. It took a new driver model, standards on how drivers are expected to behave, and signed drivers to bring this under control.

The future is signed code with deep identity verification for every instruction that runs on a consumer device, from boot loader through to application code. Maybe web site JavaScript will be granted an exception (if it isn't JIT-compiled). This will be a good thing for most consumers. Until Nintendo cleaned out all the garbage and implemented strict controls on who may publish what on their console, the North American video game market was a ruin. The rest of computing is likely to follow suit, for similar reasons.


Congratulations on writing the most servile corporate apologia I've seen all week. This is a masterpiece of Stockholm syndrome.

"Accountability means money and paperwork." Beautiful. Just beautiful. You know what else means money and paperwork? A protection racket. "Nice app you got there, shame if something happened to it before it reached customers. That'll be 30% please." But sure, let's call extortion "accountability" because Tim Apple said so.

Your driver signing example is chef's kiss levels of missing the point. Microsoft said "hey, sign your drivers so we know they're not malware" they didn't say "only drivers we approve can run, and also we get a cut." You're comparing a bouncer checking IDs to a mafia don enforcing territory. These are not the same thing.

And oh my god, the Nintendo argument. You're seriously holding up Nintendo's lockout chip as consumer protection? The same lockout chip they used to squeeze third-party developers, control game production, and maintain an iron grip on pricing? "Until Nintendo cleaned out the garbage" yeah, they cleaned it out alright, straight into their own pockets. The video game crash was caused by publishers like Atari flooding the market with garbage like E.T., not by independent developers needing more "accountability."

"The future is signed code with deep identity verification for every instruction." Holy hell. You're not describing a security feature, you're describing a prison. You're literally fantasising about a world where every line of code needs corporate permission to execute. That's techno feudalism with RGB lighting.

This isn't about protecting anyone from bugs. It's about trillion-dollar companies convincing people like you that you need their permission to use the computer you bought. And somehow, SOMEHOW, you've decided this is good actually, and the 1980s with its freedom and innovation was the problem.

The fact that you think general-purpose computing is a "danger" that needs to be locked down says everything about how effectively these corporations have trained you to beg for your own chains.


> "The future is signed code with deep identity verification for every instruction." Holy hell. You're not describing a security feature, you're describing a prison. You're literally fantasising about a world where every line of code needs corporate permission to execute. That's techno feudalism with RGB lighting.

Yeah. It's gonna suck for us but the consumer market will eat it up. An Xbox that runs Excel. It's not a fantasy. What do you think the Windows 11 hardware requirements were all about? It's Microsoft's way of getting people to get rid of their old PCs without the necessary security hardware, so that when Windows 12 comes out the PC will be a fully locked down platform.

Again, consumers ate up the NES. They ate up the iPhone. This happened partially because of, not in spite of, the iron grip the vendor had over the platform, because they came with a guarantee (a golden seal even, in Nintendo's case!) that no bad stuff would slip through. It filtered out a lot of good stuff, too, but the market has shown that's a price it's willing to pay for some measure of assurance that the bad stuff will be stopped at the source. It's a business strategy that works in the broader market, even though it harms techies. Techies are a tiny, tiny minority, and it's time they learned their place in the grand scheme of things.


At least you can use your ID. If you want to get a code signing certificate for Windows, at least in Switzerland, all the CAs I tried required me to be incorporated. I'm not sure how it is now, but at least a few years ago I couldn't get a code signing certificate as an individual.


Maybe half of the 3rd party apps in my Applications folder right now are not notarized. It's really not that big of a deal.


It's a friction point for potential customers, so we do it with our Electron-based app.

The USD 99 annual fee is almost inconsequential, the painful part was getting a DUNS number (we’re a South African entity) and then getting it to work in a completely automated manner on our build server.

Fortunately, once set up it’s been almost no work since.


It is a big deal. You can no longer just right click apps to run them, you have to take a trip to a subpanel of system settings, after clicking though two different dialogs that are designed to scare you into thinking something is wrong (one mentions malware by name).

For normal users this might as well be impossible.

Remember, your average user needs a shortcut to /Applications inside the .dmg image otherwise they won’t know where to drag the app to to install it.


The stapled ticket is optional beyond notarization itself. If you notarize but don’t staple the ticket, users may need an internet connection to check the notarization status.


Apple’s Mac security team in general kind of sucks at their job. They are ineffectual at stopping real issues and make the flow for most users more annoying for little benefit.


> notarization has been a net negative for all parties involved

Notarization made it significantly harder to cross-compile apps for macOS from linux, which means people have to buy a lot of macOS hardware to run in CI instead of just using their existing linux CI to build mac binaries.

You also need to pay $99/year to notarize.

As such, I believe it's resulted in profit for Apple, so at least one of the parties involved has had some benefit from this setup.

Frankly I think Apple should keep going, developer licenses should cost $99 + 15% of your app's profit each year, and notarization should be a pro feature that requires a macbook pro or a mac pro to unlock.


There are second order effects. You definitely attract different types of talent depending on the technology stack of choice. And building the right group of talent around an early stage product/company is an extremely impactful thing on the product. And blogs are an impactful talent marketing source.

This doesn't guarantee any sort of commercial success because there are so many follow on things that are important (product/market fit, sales, customer success, etc.) but it's pretty rough to succeed in the follow ons when the product itself is shit.

For first order effects, if a product's target market is developer oriented, then marketing to things developers care about such as a programming language will help initial adoption. It can also help the tool get talked about more organically via user blogs, social media, word of mouth, etc.

Basically, yeah, it matters, but as a cog in a big machine like all things.


What does it say about your latest project that it attracts the most toxic types from Germany to China^? Are you even aware? Do you consider this "building the right group of talent" for your project?

^https://xcancel.com/QULuseslignux/status/1918296149724692968


I did a for-profit course registration tool called uwrobot too if you or any of your friends were customers of that...

