solatic's comments | Hacker News

To each his own, but while I can certainly understand the hesitancy of an architect to pick Zig for a project that is projected to hit 100k+ lines of code, I really think you're missing out. There is a business case to using Zig today.

True in general, but in the cloud especially, saving server resources can make a significant impact on the bottom line. There are not nearly enough performance engineers who understand how to take inefficient systems and move them towards theoretical maximum efficiency. When the system is written in an inefficient language like Python or Node, fundamentally, you have no choice but to move the hotpath behind FFI and drop down to a systems language. At that point your choices are basically C, C++, Rust, or Zig. Of the four, Zig today is already the simplest to learn, has fewer footguns, and is easier to work with, read, write, and test. And you're not going to write 100k LOC of optimized hotpath code. Once you understand the cost savings involved in reducing your compute needs, sometimes by more than 90%, by getting the hotpath optimized, you understand that there is very much a business case for learning Zig today.
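
To make the hotpath-behind-FFI point concrete, here's a minimal sketch (hypothetical file and function names) of exporting a C-ABI function from Zig, which Python could then load via ctypes/cffi:

    // hot.zig; build with: zig build-lib hot.zig -dynamic -O ReleaseFast
    export fn dot(a: [*]const f64, b: [*]const f64, len: usize) f64 {
        var sum: f64 = 0;
        for (0..len) |i| sum += a[i] * b[i];
        return sum;
    }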


As a counterargument to this: I was able to replicate the subset of Zig that I wanted using C23. And in the end I have absolute stability unless I break things to “improve”.

Personally, I find it a huge pain to rewrite things and update dependencies because the code I am depending on is moving out from under me. I also found this to be a big problem in Rust.

And another huge upside is that you have access to the best of everything. As an example, I am heavily using fuzz testing, and I can very easily use honggfuzz, which is the best fuzzer according to all the research I could find, and also according to my experience so far.
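
(For reference, the usage really is a one-liner; a sketch with hypothetical paths, per honggfuzz's documented file-based mode:)

    honggfuzz -i corpus/ -- ./fuzz_target ___FILE___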

From this perspective, it doesn’t make sense to use Zig over C for professional work. If I am writing a lot of code, then I don’t want to rewrite it. If I am writing a very small amount of code with no dependencies, then it doesn’t matter what I use, and this is the only case where I think Zig might make sense.


To add another point to this: what people write online isn’t correct all the time. I thought Zig compiled super fast, but I found that C with a good build system and well-split header/implementation files is basically instant to compile. You can use ThinLTO with a cache to get instant recompilation for release builds.

Real example: I had to wait some seconds to compile and run benchmarks for a library in Zig, and it re-compiles instantly (<100ms) with C.
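
For reference, the setup is something like this (a sketch; clang with lld, flag names per their docs):

    # compile per-file so only changed translation units rebuild
    clang -O2 -flto=thin -c foo.c bar.c
    # link with a persistent ThinLTO cache so release relinks stay fast
    clang -O2 -flto=thin -fuse-ld=lld -Wl,--thinlto-cache-dir=.ltocache foo.o bar.o -o app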

Zig compiles everything as a single compilation unit, which might have some advantages, but in practice it is a hard disadvantage for incremental builds. And I never saw anyone point this out online.

For people like me building something from scratch, I would really recommend learning C with the Modern C book and trying to do it in C.


Also, I used to think that breakage doesn’t matter that much, but my opinion changed very quickly around 10k lines of code. At some point I really stopped caring about every piece and wanted to forget about it and move on.

>with fewer footguns, easier to work with, easier to read and write, and easier to test.

With the exception of fewer footguns, where Rust definitely takes the cake and Zig comes in second, I'd say Zig is in last place on all of these. This really suggests that you aren't aware of the C/C++ testing/tooling ecosystem.

I say this as a fan of Zig, by the way.


> ...in the cloud especially, saving server resources can make a significant impact on the bottom line. There are not nearly enough performance engineers who understand how to take inefficient systems and make improvements to move towards theoretical maximum efficiency.

That's a very good point, actually. However...

> with fewer footguns

..the Crab People[0] would definitely quibble with that particular claim of yours.

[0] https://en.wikipedia.org/wiki/Crab_People of course.


I would quibble with all of the claims, other than easier to learn.

I really see no advantage for Zig over Rust after you get past those first two weeks.


Coming from Go, I'm really disappointed in Rust compile times. I realize they're comparable to C++, and you can structure your crates to minimize compile times, but I don't care. I want instant compilation.

Zig is trying to get me instant compilation and I see that as a huge advantage for Zig (even past the first 2 weeks).

I'll probably stick with Rust as my "low level language" due to its safety, type system, maturity, library ecosystem, and career opportunities.

But I remain jealous of Zig's willingness to do extreme things to make compilation faster.


On the Go production projects I worked on or near, the incremental compile time was slower than C++ and Rust.

A full build was definitely much faster, but not as useful. Especially when using a build system with shared networked caching (Bazel for example).

Yes those projects were a bloated mess, as it always seems to be.


Re: slower incremental compile times - not my experience, but interesting data point. I'll keep a look out for this.

The key with C++ is to keep coding while compiling. Otherwise... yeah, you're blocked.

The key with C++ is to learn how to use the build system, make use of binary libraries, and if one can afford to use the very latest compiler versions, modules.

And avoid header libraries, C++ isn't a scripting language.


Eh, I'd say that Rust has a different set of footguns. You're correct that you won't run into use-after-free footguns, but Rust doesn't protect you from memory leaks, unsafe code is still unsafe, and the borrow checker and Rust's language complexity are their own kind of footguns.

But I digress. I was thinking of Zig in comparison to C when I wrote that. I don't have a problem conceding that point, but I still believe the overall argument is correct to point to Zig specifically in the case of writing code to optimize a hotpath behind FFI; it is much easier to get to more optimal code and cross-compilation is easier to boot (i.e. to support Darwin/AppleSilicon for dev laptops, and both Linux/x64 and Linux/arm64 for cloud servers).
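
For example, producing the hotpath library for each of those targets is a one-flag affair (a sketch, reusing the hypothetical hot.zig from upthread):

    zig build-lib hot.zig -dynamic -O ReleaseFast -target aarch64-macos
    zig build-lib hot.zig -dynamic -O ReleaseFast -target x86_64-linux-gnu
    zig build-lib hot.zig -dynamic -O ReleaseFast -target aarch64-linux-gnu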


> but Rust doesn't protect you from memory leaks

In theory no. In practice it really does.

> unsafe code is still unsafe

Ok, but most rust code is not unsafe while all zig code is unsafe.

> and the borrow checker and Rust's language complexity are their own kind of footguns

Please elaborate. They are something to learn, but I don’t see the footgun. A footgun is a surprising defect that’s pointed at your foot and easy to trigger (i.e. you do something wrong and your foot blows off). I can’t think how the borrow checker causes that when it’s the exact opposite: you can’t ever create a footgun without using unsafe, because it won’t even compile.

> but I still believe the overall argument is correct to point to Zig specifically in the case of writing code to optimize a hotpath behind FFI; it is much easier to get to more optimal code and cross-compilation is easier to boot (i.e. to support Darwin/AppleSilicon for dev laptops, and both Linux/x64 and Linux/arm64 for cloud servers).

I agree cross-compilation with Zig is significantly easier, but Rust isn’t that hard, especially with the cross-rs crate making it significantly simpler. Performance-wise, Rust is going to be better: Zig makes you choose between safety and performance, and even in unsafe mode there are various things that enable better codegen. For example, Zig follows the C path of manual noalias annotations, which has proven non-scalable and difficult to use correctly. Rust applies this to all &mut references automatically, because aliasing them isn’t allowed in the language.


> a footgun is a surprising defect that's pointed at your foot and easy to trigger

Close, but not the way I think of a footgun. A footgun is code that was written in a naive way, looks correct, gets submitted, and you find out after submitting it that it was erroneous. Good design makes it easy for people to do the right thing and difficult to do the wrong thing.

In Rust it is extremely easy to hit the borrow checker including for code which is otherwise safe and which you know is safe. You walk on eggshells around the borrow checker hoping that it won't fire and shoot you in the foot and force you to rewrite. It is not a runtime footgun, it is a devtime footgun.

Which, to be fair, is sometimes desired. When you have a 1M+ LOC codebase, dozens of junior engineers working on it, and requirements for memory safety and low latency, that's a fair enough trade-off.

But in Zig, you can just call defer on a deinit function. Complexity is the eternal enemy, and this is just a much simpler approach. The price of that simplicity is that you need to behave like an adult, which, if the codebase (hotpath optimization) is <1k LOC, I think is eminently reasonable.
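
A minimal sketch of that pattern (hypothetical function):

    const std = @import("std");

    fn process(allocator: std.mem.Allocator) !void {
        const buf = try allocator.alloc(u8, 4096);
        defer allocator.free(buf); // runs at scope exit, on every return path
        // ... do the hotpath work with buf ...
    }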


> A footgun is code that was written in a naive way, looks correct, gets submitted, and you find out after submitting it that it was erroneous.

You’re contradicting yourself a bit here, I think. Erroneous code generally won’t compile in Rust, whereas Zig will happily do so. Also, Zig has plenty of footguns (e.g. forgetting to defer a deinit, but also misusing noalias or having an out-of-bounds access result in memory corruption). IMHO the Zig footgun story with respect to UB is largely unchanged relative to C/C++. It’s mildly better, but it’s closer to C/C++ than to a safe language, and UB is a huge ass footgun in any moderate-complexity codebase.


> IMHO the Zig footgun story with respect to UB is largely unchanged relative to C/C++

The only major UB from C that Zig doesn’t address is use-after-free, afaik. How is that largely unchanged???

Just having an actual strong type system w/o the “billion dollar mistake” is a large change.


Depends how you compile it. If you’re compiling ReleaseFast/ReleaseSmall, it’s not very different from C (modulo, as you said, it has some language features to make these less likely); see the sketch after the list:

* Double free

* Out of bounds array access

* Dereferencing null pointers

* Misaligned pointer dereference

* Accessing uninitialized memory

* Signed integer overflow

* Accessing a union field for which the active tag is something else.
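
A minimal sketch of the build-mode trade-off, using a runtime index: the same access panics under Debug/ReleaseSafe but is undefined behavior under ReleaseFast/ReleaseSmall:

    const std = @import("std");

    pub fn main() void {
        const buf = [_]u8{ 1, 2, 3 };
        var i: usize = 0;
        i += buf.len; // runtime value, so this can't be rejected at comptime
        // Debug/ReleaseSafe: panic "index out of bounds"; ReleaseFast/ReleaseSmall: UB
        std.debug.print("{d}\n", .{buf[i]});
    }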


wow, what a list! all of these are statically analyzable using a slightly hacked zig compiler and a library!

https://github.com/ityonemo/clr

(Btw: you can't dereference a null pointer in Zig without going through the `.?` unwrap operator, which panics on null in safe builds; you can't misalign a pointer unless you use @alignCast, which will also panic.)
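
A quick sketch of both forms (the panic applies in Debug/ReleaseSafe; in ReleaseFast it degrades to UB like the list above):

    fn force(maybe: ?u32) u32 {
        return maybe.?; // unwrap: panics with "attempt to use null value" on null
    }

    fn withDefault(maybe: ?u32) u32 {
        return maybe orelse 42; // or handle the null case explicitly
    }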


I can also analyse C and C++ code for such issues, while keeping the use of a mature language ecosystem.

If you can statically analyze C for memory safety, why did Pizlo bother building Fil-C?

Where did I write that static analysis was enough on its own?

Can you phrase that as a direct answer to my question? Trying to learn something here. Appreciate it!

In the sentence "I can also analyse C and C++ code for such issues, while keeping the use of a mature language ecosystem," it is implied that there are many tools that perform analysis of C and C++ code.

Some of those tools are static, others are dynamic, some require a special build, others are hybrid, others exist on all modern IDEs.

So it can be a mix of lint, clang-tidy, VS analysis, CLion, ASan, UBSan, hardened runtimes, contracts (Frama-C), PVS-Studio, PurifyPlus, Insure++, ...


Neat. Why isn’t this in the main compiler, and will it be? I’m happy to retract my statement if this becomes how Zig actually compiles, but it’s not a serious thing yet; it’s more a PoC of what’s possible today and may break later.

It will never be in the main compiler, since it was written by Claude. I think that's ok. The general concept is sound and won't break (modulo names of instructions changing, etc). In fact it will get better. With the new Io system, concurrency checks will be possible.

But also, there is no reason why it should have to be in the main compiler. I've architected it as a dlopen plugin. It's even crazier! The output is a Zig program which you must compile and run to get the final result.


This is pretty close to saying Rust is not very different than C because it has the unsafe keyword. That is, either an ignorant (of Zig) or disingenuous statement.

To me the Zig position is akin to saying that because ASan, TSan, and UBSan exist, C++ is safe, because you're just running optimized for performance.

If you believe I mischaracterized Zig, please enlighten me as to what I got wrong specifically, rather than attacking me ad hominem.


It is worse, because ASan, TSan, and UBSan already have several years of production experience, in a mature ecosystem.

There is no point throwing it all away to get back to the starting line.


I’m not going to write a detailed response to something that’s extremely close to what an LLM responds to “what UB does zig have?”

Arguing about whether certain static analysis should be opt in or opt out is just extremely uninteresting. It’s not like folks are auditing the unsafe blocks in their dependencies anyways.

If you want to talk about actual type system issues that’s more interesting.


So the Fermat defense? “I have the proof but the margin is too small”.

The proof is in the pudding. TigerBeetle, despite having a quite opinionated style, was still almost hit by UB and basically got lucky it wasn’t a worse failure. By contrast, even though unsafe isn’t audited in all dependencies, it does in practice seem to make UB extremely unlikely. And there’s ongoing work in the ecosystem to centralize existing unsafe into well-tested safe abstractions.


As an example of this, I was using Polars in Rust as a dependency in a fairly large project.

It has issues like panicking or segfaulting when using some data types (arrow array types) in the wrong place.

It is extremely difficult to write an arrow implementation in Rust.

It is much easier to do it in Zig or C (without strict aliasing).

I also had the same experience with glommio in Rust.

Also, the binary that we produce takes several minutes to compile and is above 30 MB. This is an insane amount of bloat. And unfortunately I don’t think there is another feasible way of doing this kind of work in Rust, because it is so hard to write proper low-level code.

I don’t agree with noalias being bad, personally. I found it is the only workable way to do it. It is much harder to write code with pointers that alias implicitly, as C has by default and as Rust has as the only option for raw pointers. And you don’t ever need to use noalias except in some rare places.
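
FWIW, in Zig the annotation is explicit and per-parameter; a minimal sketch (hypothetical function):

    // promises dst and src don't overlap, enabling vectorization;
    // violating the promise is UB, same as C's restrict
    fn saxpy(noalias dst: [*]f32, noalias src: [*]const f32, a: f32, len: usize) void {
        for (0..len) |i| dst[i] += a * src[i];
    }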

To make it clear: I mean the huge footgun in Rust is producing a ton of bloat and subpar code, because you can’t write much yourself and you end up depending on too many libraries.


> To make it clear: I mean the huge footgun in Rust is producing a ton of bloat and subpar code, because you can’t write much yourself and you end up depending on too many libraries.

Nothing is forcing you to do that other than it’s easy to add dependencies. I don’t see how zig is much different


I find it easier to write all the code I want in Zig or C, since it is easy to write low-level code.

Hashmap is a good example of this. I was able to fairly easily port some parts of hashbrown to C, but I’m pretty sure I couldn’t write that code in Rust in a reasonable amount of time.


Not the GP, but I've noticed that if you don't anticipate how you might need to mutate or share state in the future, you can have a "footgun" that forces large-scale code changes for relatively small "feature-level" changes, because of Rust's strictness. It's not a footgun in the sense that your code does what you don't expect; it's a footgun in that your maintenance burden and ability to change code are not what you expect (and it's easy to trigger). I'm sure if you are really expert with Rust, you see it coming and don't use patterns that will cause waves of changes (but if you're expert at any language, you avoid its footguns).

That’s not a footgun and happens in any language. I have not observed rust code to be more prone to it. Certainly less so than c++ for various reasons around the build time and code organization.

It's possible to do memory safety analysis for zig. I think you could pretty easily add a noalias checker on top of this:

https://github.com/ityonemo/clr


> Of the four choices, Zig today is already simplest to learn,

Yes, with an almost complete lack of documentation and learning materials, it is definitely the easiest language to learn.


For reference, here's where Zig's documentation lives:

https://ziglang.org/learn/

I remember when learning Zig, the documentation for the language itself was extensive, complete, and easily greppable due to being all on one page.

The standard library was a lot less intuitive, but I suspect that has more to do with the amount of churn it's still going through.

The build system also needs more extensive documentation in the same way that the stdlib does, but it has a guide that got me reasonably far with what came out of the box.


People do underestimate how nice it is when the language reference or framework/tool documentation is all on one web page: I can easily print it to PDF and push it to my iPad for reading.

People who have difficulty on dating apps want to find a scapegoat, so they scapegoat the app.

The truth is that dating markets are lemon markets. People who are "dateable" tend to find success quickly, and people who are "not dateable" tend to stay on the market. Hence over time, the market will be dominated by "not dateable" people. No dating app on the planet will magically make you a "dateable" person.

To find success on dating apps, you have to work on yourself first, and only afterwards make sure that work shows through both in your profile and in your texting.

Source: was on the apps, undateable for eight years (depression and low self esteem), went to therapy, after making huge changes to my life and getting to a point where I felt like things were going well in everything but being single, a month later I found my girlfriend (now two years together).


> you have to work on yourself first

I hate this phrase because it's a generic catch-all that says nothing but shuts down any discussion. If I'm friendly, responsible, honest, not poor, do sports, learn new things, keep the house clean, then what the fuck more do you want? Can we admit that social dynamics have completely changed and the value of "a relationship" has dropped through the floor? 200 years ago a bad relationship was better than no relationship, because have fun trying to farm land on your own, but nowadays it's literally more convenient to live single than to deal with the inconvenience of living with another person.

Also, personally, I'm a minority within a minority, and I'm not going to cheat the statistics even if I shower twenty times a day.


> If I'm friendly, responsible, honest, not poor, do sports, learn new things, keep the house clean, then what the fuck more do you want?

I know people like this, but they are also unlucky in love because they have negative attitudes about women and life that they refuse to become more enlightened about.


> If I'm friendly, responsible, honest, not poor, do sports, learn new things, keep the house clean, then what the fuck more do you want?

I think you are describing a person who has worked on himself. Like doing sports, that's good. I think too many guys continue with their teenage hobbies like playing computer games, and that's generally not attractive to women.

Of course, there are no guarantees. There's no magic checklist that you can fulfill and be guaranteed to find a partner. But I think there's always more you can do to make yourself more attractive.


I'm gay, so I don't care about impressing women. But besides this... I don't understand what's wrong with incorporating teenage hobbies into an adult lifestyle. Sure, nobody wants to marry a mental teenager, but if I do have adult self-development hobbies, then I see no problem in also having teenage hobbies next to them. I find it very sad when guys completely discard their personality just to keep a wife happy.

It's just that, at the current point in my life, I think I'm ready for a relationship. My daily life loop is satisfactory for me; the only thing I'm missing is someone to be with.


> friendly, responsible, honest, not poor... keep the house clean

Even assuming I take you at your word, this describes a good roommate, not a good romantic partner.

> do sports, learn new things

Has negligible if any effect on romantic relationships. Both fat and stupid people still find romantic partners (and sometimes end up happy with them nonetheless).

> Then the fuck more you want

Somebody who is fun to be with, who makes me feel good, warm, and fuzzy inside, who at times makes me feel safe and at other times dares me to go farther. Somebody who is willing to go to new depths of vulnerability together, so that I can trust that they see me, the whole me, even the crummy parts, and I can see them, the whole them, even the crummy parts, and be loved and accepted nonetheless.

> The value of "a relationship" has dropped through the floor

This is transactional language. Strong, fulfilling romantic relationships are not transactional. Part of working on yourself is learning how to develop non-transactional relationships without getting hurt / getting exploited in your attempts to do so (i.e. by lemons on the market).

> more convenient to live single than to deal with the inconvenience of living with another person

I highly disagree, assuming that you find the right person to live with, which is the whole challenge. Living with another person who you enjoy living with, economically speaking, means splitting at least rent and electric bills (water bills are more linear with the number of people in the house), sometimes splitting a car payment (if you are a one-car household); when you split rent, you split the rent of the kitchen, the bathroom, the living room, and at least one bedroom, that are all shared. You eat better by cooking for two and sharing. The absolutely most economical arrangement is usually Dual-Income No Kids (DINK).


> Somebody who is fun to be with, who makes me feel good, warm, and fuzzy inside, who at times makes me feel safe and at other times dares me to go farther. Somebody who is willing to go to new depths of vulnerability together, so that I can trust that they see me, the whole me, even the crummy parts, and I can see them, the whole them, even the crummy parts, and be loved and accepted nonetheless.

Cool. If I had stated that I am like this, then someone else would've complained that this is an overly romantic view and that in reality a relationship is built with someone who can help with boring everyday tasks like doing the laundry or watching the kids. The point is, even if I were Jesus Christ himself, someone would find a flaw that makes me undateable in their opinion.

> This is transactional language.

Because all relationships are transactional. Welcome to adulthood. I don't really have time to argue with someone who still believes in Santa Claus.

> Living with another person who you enjoy living with, economically speaking, means splitting at least rent and electric bills (water bills are more linear with the number of people in the house), sometimes splitting a car payment (if you are a one-car household); when you split rent, you split the rent of the kitchen, the bathroom, the living room, and at least one bedroom, that are all shared. You eat better by cooking for two and sharing.

It's strange to me that you tell me not to be transactional, but then you point to money as an example of an advantage of being in a relationship, not emotional support. Also, there's a huge difference between "without a relationship, I'll literally starve to death" and "without a relationship, I'll go on holiday once a year instead of twice a year".

Something tells me that your view of relationships is incoherent at best.


> The point is, even if I were Jesus Christ himself, someone would find a flaw that makes me undateable in their opinion.

Well yeah, Christ isn't really dateable because he would never be able to be vulnerable with you (after all, if he died for your sins, you can't really repay the favor, can you?). People want to take celebrities to bed, they don't want to date them. It's a different kind of relationship - more shallow.

But more to the point, a flaw is not what makes somebody undateable. We all have flaws. I have flaws. My partner has flaws. Some kinds of flaws make people undateable, others do not.

> Someone who still believes in Santa Claus

I mean, my partner makes me happier than Santa Claus ever did, and I don't have to wait until Christmas for her to pay me a visit, so....

> point to money as an example of an advantage of being in a relationship, not emotional support

Emotional support was literally the first example I gave ("feel good, warm, and fuzzy inside"). I added the economic argument to address your framing. The emotional aspect is the #1 most important reason and I would be in my relationship for that reason alone, even without any economic benefits; the economic benefits are a silver lining and insufficient on their own to justify a relationship. But no, I'm not going to pretend that the silver lining doesn't exist.


No surprise you are alone.

Hehehe

If employees get stock options and decide to exercise on exit, they count against the 500 unaccredited investor limit that would trigger reporting requirements. So companies that issue stock options do run the risk that enough employees will exit, exercise their stock options, and trigger a reporting requirement.

yeah, that's why those companies tend to offer liquidity strategically to lower their employee investor count

if an employee exercises options (but stays at the company), does that still count as one of the 1000?

yes, the company has to manage it and hope certain things happen if they want to stay private

> I started dreading the monotony of it all... My days had become predictable: check the dashboards, respond to tickets, debug whatever broke overnight, push some Terraform, go home. Maintain the HashiCorp Vault clusters, manage the secrets pipelines, answer the same support questions. Repeat. The work that used to feel engaging had become routine.

Why are you checking dashboards (pull/polling) instead of building alerting (push), so that you do not need to check dashboards as a matter of routine? If the tickets are dealing with the same problem again and again, why aren't you building a self-service platform to let your users handle these problems by themselves (especially now that LLMs are making this much easier to build)?

Author sounds like he had poor technical management who didn't understand DevOps (let alone DevSecOps) and turned it into an operations role.

Everything that the author likes about Solutions Engineering, I get from a DevOps role, from collaborating with other engineers in my company to make them more agile, productive, and take better ownership in production. Too many engineering teams fall into a trap of not being allowed to focus on any non-functional work (gotta ship revenue-generating features!) and LOVE it when someone like me comes along, who doesn't answer to Product, and can help them out on the non-functional side. I get to talk to "customers" as much as I want, in a role where I can just walk up to them and not need to communicate over Zoom or with significant plane travel.

Author should have considered trying to just find a different Platform Engineering role.


> There should be some balanced path in the middle somewhere, but I haven’t stumbled across a formal version of it after all these decades.

Well, there isn't a formal version of it, because the answer is not formal, it is cultural.

In enterprise software, you have an inherent tension between good software engineering culture, where you follow the Boy Scouts' principle of leaving a codebase cleaner than you found it, and SOC2 compliance requirements that expect every software change to be tracked and approved.

If every kind of clean up requires a ticket, that has to be exhaustively filled out, then wait for a prioritization meeting, then wait for the managers and the bean counters to hem and haw while they contemplate whether or not it's worth it to spend man-hours on non-functional work, then, if you're lucky, they decide you can spend some time on it three weeks from now, and if you're unlucky they decide nope, you gotta learn to work within an imperfect system. After once or twice of trying to work By The Book, most engineers with an ounce of self-respect will decide "fuck it, clearly The System doesn't care," and those with two ounces of self-respect will look for work elsewhere.

Or, you get together with the members of your team and decide, you know what, the program managers and the bean counters, they're not reading any of the code, not doing any of the reviews, and they have no idea how any of this works anyway. So you collectively decide to treat technical debt as the internal concern that it anyway was in the first place - you take an extra half hour, an extra day, however long it takes to put in the extra cleaning or polish, and just tack it on to an existing ticket. You give a little wink and you get a little nod and you help the gears turn a little more smoothly, which is all the stakeholders actually care about anyway.

You cannot replace culture with process. All attempts to replace culture with process will fail. People are not interchangeable cogs in the machine. If you try to treat them as such, they will gum up and get stuck. Ownership and autonomy are the grease that allows the human flywheel to spin freely. That means allowing people to say, "I'm going to do this because I think that it is Right And Good For My System Which I Own", and allowing them to be responsible for the consequences. To pass SOC2, that means treating people like adults and allowing them to sometimes say "can I get a quick rubber-stamp on this please?" instead of "can I get this reviewed because I legit need another set of eyes to take a serious look?"


PgBouncer introduces its own problems and strictly speaking adds additional complexity to your infrastructure. It needs to be monitored and scaled separately, not to mention the different modes of session/transaction/statement connection pooling. Adding another proxy in the middle also increases latency.

Yes, currently, putting PgBouncer in the middle helps handle massive sudden influxes of potentially short lived connections. That is indeed the correct current best practice. But I hardly fault the author for wishing that postmaster could be configured to run on multiple cores so that the additional complexity of running PgBouncer for this relatively simple use-case could be eliminated.
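
For context, the knobs in question look roughly like this (a pgbouncer.ini sketch; hypothetical names and sizes):

    [databases]
    app = host=10.0.0.5 port=5432 dbname=app

    [pgbouncer]
    listen_port = 6432
    pool_mode = transaction   ; session/transaction/statement each have different semantics
    max_client_conn = 5000    ; many cheap client-side connections...
    default_pool_size = 40    ; ...multiplexed onto few real server connections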


AWS has a more powerful abstraction already, where you can condition permissions such that they are only granted when the request comes from a certain VPC or IP address (i.e. VPN exit). Malware may thus exfiltrate real credentials, but they'll be worthless.
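
The pattern is roughly this shape (a sketch; hypothetical VPC ID, and note that aws:SourceVpc is only populated for requests arriving through a VPC endpoint):

    {
      "Version": "2012-10-17",
      "Statement": [{
        "Sid": "DenyUseOutsideOurVpc",
        "Effect": "Deny",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
          "StringNotEquals": { "aws:SourceVpc": "vpc-0123456789abcdef0" }
        }
      }]
    }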

I'm not prepared to say which abstraction is more powerful but I do think it's pretty funny to stack a non-exfiltratable credential up against AWS given how the IMDS works. IMDS was the motivation for machine-locked tokens for us.

There are two separate concerns here: who the credentials are associated with, and where the credentials are used. IMDS's original security flaw was that it only covered "who" the credentials were issued to (the VM) and not where they were used, but the aforementioned IAM conditions now ensure that they are indeed used within the same VPC. If a separate proxy is set up to inject credentials, then while this may cover the "where" concern, care must still be taken on the "who" concern, i.e. to ensure that the proxy does not fall to confused-deputy attacks arising from multiple sandboxed agents attempting to use the same proxy.

There are lots of concerns, not just two, but the point of machine-bound Macaroons is to address the IMDS problem.

> Tax rates are not the same as effective taxes paid

Correct, but the tax system is nonetheless quite effective at setting behavioral incentives and disincentives. Higher income and estate tax rates incentivize capital being locked up in investments instead (for lower capital gains taxes); those investments put people to work and are subject to Labor negotiating higher compensation. Allowing donations to non-profits to deduct from other taxes allows private individuals (compared to a government bureaucracy) to more efficiently fund social welfare programs, which incidentally, also put people to work in the administration of such programs.

Funding government is not the sole goal of higher taxation rates, but rather, also how incentives in society are shaped.


Really don't understand.

Their own cherry-picked example benchmarks show that enabling FSST frequently brings no benefit to query runtimes, and sometimes query runtimes are actually negatively affected. Despite hard empirical evidence against FSST, they decided to enable it across the board? Not even "hey, we built this cool thing which may or may not improve your workloads, here's how to opt in", but pushing straight-up performance regressions to their customers with no way of opting out?

Why would I ever trust my data to engineers who are more interested in their academically-interesting resume-driven-development than in actual customer outcomes?


> Is it just really really hard to maintain a shared dictionary when constantly adding and deleting values? Is there just no established reference algorithm for it?

Enums? Foreign key to a table with (id bigint generated always as identity, text text) ?
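
i.e., something along these lines (a sketch; hypothetical table names):

    CREATE TABLE labels (
        id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        text text NOT NULL UNIQUE
    );

    CREATE TABLE measurements (
        id       bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
        label_id bigint NOT NULL REFERENCES labels (id)
    );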

> I have databases I know would be reduced in size by at least half.

Most people don't employ these strategies because storage is cheap and compute time is expensive.


The current prices for SSDs and DRAM may change the strategies.

In a mini-PC that I assembled recently, with a medium-priced CPU (an Arrow Lake H Core Ultra 5), the cost of memory and storage was 60% of the total cost of the computer, and it was only 60% because I decided to buy much less than I would have last summer (i.e. I bought 32 GB DRAM + 3 TB of SSDs, while I wanted double those amounts, but then the price would have become unacceptable).

Moreover, I bought the mini-PC and memory in Europe, but the exact same computer model from ASUS, with the same memory, costs about 35% more in the USA on Newegg, and a similar ratio holds for other products, so US customers are likely to be even more affected by this great increase in the cost of storage.


> BIGINT

I’d use a SMALLINT (or in MySQL, a TINYINT UNSIGNED) for a lookup table. The bytes add up in referencing tables.

> Most people don't employ these strategies because storage is cheap and compute time is expensive.

Memory isn’t cheap. If half of your table is low-cardinality strings, you’re severely reducing the rows per page, causing more cache misses and slowing all queries.

