My biggest complaint about ChatGPT is how slow their interface is when the conversations get long. This is surprising to me given that it's just rendering chats.
It's not enough to turn me off using it, but I do wish they prioritized improving their interface.
It's not C, but we have sponsored a Zig target for Kaitai. If anyone reading this knows Zig well, please comment, because we would love to get a code review of the generated code!
It would be premature to review right now, because there are still some missing features and things that need to be cleaned up.
But I am interested in finding someone experienced in Zig to help the maintainer with a sanity check, to make sure best practices are being followed. (Would be willing to pay for their time.)
If comptime is used at all, it will be minimal. Since we're generating code anyway, the generator can emit explicit code as an alternative to comptime. That said, we have considered using it in a few places to simplify the code generation.
Until when? In many cases (I would wager the majority of cases), secrets in applications are only useful if they're in plaintext at some point, for example when you're constructing an HTTP client or authenticating to some other remote system.
As far as high-level language constructs go, there have been similar things like SecureString (in .NET) or GuardedString (in Java), although as best I can tell they're relatively unused, mostly because their ergonomics make them pretty annoying to use.
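To make the ergonomics point concrete, here is a minimal sketch of a GuardedString-style wrapper. The GuardedSecret class and its callback API are invented for illustration, not the real .NET or Java types:

    import java.util.Arrays;
    import java.util.function.Consumer;

    // Hypothetical GuardedString-style wrapper: the secret lives in a char[]
    // that is wiped on close, and callers can only touch it inside a callback.
    final class GuardedSecret implements AutoCloseable {
        private final char[] secret;

        GuardedSecret(char[] secret) {
            this.secret = secret.clone();
        }

        // The only way to read the secret: hand over a callback that sees the
        // plaintext chars for the duration of the call.
        void access(Consumer<char[]> accessor) {
            accessor.accept(secret);
        }

        @Override
        public void close() {
            Arrays.fill(secret, '\0'); // best-effort wipe
        }
    }

    class Demo {
        public static void main(String[] args) {
            try (GuardedSecret password = new GuardedSecret("hunter2".toCharArray())) {
                // The awkward part: most APIs (HTTP clients, JDBC, etc.) want a
                // String or char[], so you end up materializing the plaintext anyway.
                password.access(chars -> {
                    String authHeader = "Basic " + new String(chars); // plaintext again
                    System.out.println(authHeader.length());
                });
            }
        }
    }

The callback ceremony, plus the fact that you usually have to materialize the plaintext anyway to hand it to some other API, is exactly the friction that keeps these types unused.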
If you are breaking something up based on "long" versus "short", you're optimizing for the wrong thing. You don't care about code being short for its own sake, or long for its own sake, right?
Ultimately, you're going to revisit this code to make the change after some time passes. Is it easy to follow the code and make the change without making mistakes? Is it easy for someone else on the team to do the same?
Sometimes optimizing for "easy to understand and change" means breaking something apart. Sometimes it means combining things. I've read that John Carmack would frequently inline functions because jumping between lots of small functions made the code too hard to follow.
So, rather than asking whether something is too big or too small, I would ask whether it will be easy to understand and change when coming back to it after a few months.
Put another way: why not optimize for the actual thing you care about rather than an intermediate metric like LOC?
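As a contrived illustration (the pricing example and all names and numbers in it are invented), compare a change scattered across tiny helpers with the same rule kept in one place:

    // Contrived example: splitting for its own sake vs. keeping related steps together.

    // Version A: every function is "short", but following a pricing change means
    // hopping through three call sites.
    class PricingA {
        double total(double base, int qty) { return withTax(discounted(base, qty) * qty); }
        private double discounted(double base, int qty) { return qty >= 10 ? base * 0.9 : base; }
        private double withTax(double amount) { return amount * 1.08; }
    }

    // Version B: one slightly longer method where the whole rule is visible at once,
    // which can be easier to revisit and change months later.
    class PricingB {
        double total(double base, int qty) {
            double unit = (qty >= 10) ? base * 0.9 : base; // bulk discount
            return unit * qty * 1.08;                      // 8% tax
        }
    }

Neither version is wrong in itself; the question is which one you'd rather be reading when the discount rule changes in six months.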
> If you are breaking something up based on "long" versus "short", you're optimizing for the wrong thing. You don't care about code being short for its own sake, or long for its own sake, right?
You're misunderstanding. Code is not broken up because it's "long". It's broken up because it is difficult to comprehend and maintain, and its length is one criterion that might signal that to be the case. Another sign is cyclomatic complexity, which is another arbitrary number left for teams to decide how to use best.
The crux of the matter, and why it is argued about so widely, is that readability and maintainability are entirely subjective concepts that are impossible to quantify. That is why we need some concrete guidelines that can point us in a certain direction.
This doesn't mean that these guidelines should be strictly enforced. I've often silenced linters that warn about long functions or high cyclomatic complexity when, to me, the function is readable enough and breaking it up would cause more problems than it solves. This is open to interpretation and debate during code reviews, but it doesn't mean these are useless signals that developers should ignore altogether.
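For what it's worth, here's a minimal sketch of what that looks like in practice, assuming PMD in a Java codebase (rule names and thresholds vary by tool): suppress the warning on the specific function, with a short justification, rather than turning the rule off project-wide.

    // Sketch assuming PMD: suppress the complexity warning on one method you've
    // judged readable, with a justification, instead of disabling the rule globally.
    class OrderStateMachine {

        @SuppressWarnings("PMD.CyclomaticComplexity") // reviewed: the flat switch reads better than helpers
        String nextState(String state, String event) {
            switch (state) {
                case "NEW":       return event.equals("pay")     ? "PAID"      : "NEW";
                case "PAID":      return event.equals("ship")    ? "SHIPPED"   : "PAID";
                case "SHIPPED":   return event.equals("deliver") ? "DELIVERED" : "SHIPPED";
                case "DELIVERED": return event.equals("return")  ? "RETURNED"  : "DELIVERED";
                default:          return state;
            }
        }
    }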
> and its length is one criterion that might signal that to be the case
You seem to be the one misunderstanding it.
It's just not. Function length is not a useful metric, at all. The probability of some problems increases with length, but even then it's not the length that will tell you whether your code has a problem or not.
If you have length guidelines, your guidelines are bad.
And, yeah, cyclomatic complexity is almost as useless as function length. If you have warnings bothering people about those, you are reducing your code quality.
And even if you fall under the first category, I find it hard to believe that the performance bottleneck is solved by using Vercel and SSR.
With all the other crazy shit people are doing (multi-megabyte bundle sizes, slow API calls with dozens of round-trips to the DB, etc.), doing the basics of profiling, optimizing, and simplifying seems like it'd get you much further than switching to a more complex architecture.