We are waiting on the IO abstraction for a supported Zig client (there are other Zig clients out there today, though), but this is specifically for a new initiative at Synadia that we will share more details about soon!
As someone who recently started trial-running a NATS cluster, this move by Synadia, which I just learned of, has made sure that I won't be continuing with NATS.
> Over 97% of contributions to the NATS.io server were made by employees of Synadia and its predecessor company
Seems like a failed CNCF project. I guess Synadia can fork NATS and build their own, but the CNCF NATS would basically lose all the contributors, so what is the point of keeping the trademark?
A good warning for those looking at CNCF projects to audit where the commits are coming from. I passed up using Mayastor/OpenEBS, and one of the main reasons was that almost 100% of the commits were from a single for-profit company that could bail on the project at any time. I ended up going with Rook, which has a much healthier spread of committers.
I think it's a vicious cycle: no one supports it, so services don't implement it, and because no services implement it, people use hacky ways to do what MQTTv5 does out of the box, or use services they normally wouldn't because the ones they want only support MQTT 3.1.1.
We use NATS for pretty much all of our MQ / service bus type stuff. We were using GCP IoT and AWS IoT, and recently moved to EMQX as it has native MQTTv5 support.
NATS would have been a much better choice for us, but the support wasn't there.
We have some great work coming that allows lighter-weight consumers to scale to that level and beyond. Happy to chat with folks on how we can make that work today.
We are always interested in feedback on how we can do better. Feel free to jump on our Slack channel to join the community. We are actively working on the C client. Interested in more details on the Python and Java issues as well.
I disagree: either the entropy is drained and the randomness of what the pool returns (depending on the OS) is compromised, or you get stalled waiting for more of it and the performance of the generator drops.
A simple thought exercise to show you how false this is:
Instantiate AES-128-CTR with a random key. Encrypt 100 megabytes with it. Have you "drained" the "key" from AES? No, not in any meaningful sense. If you could, the AES-CTR recommendations would say "don't encrypt more than 100 megabytes with AES-CTR". In practice, people routinely encrypt many gigabytes with it.
Every modern OS uses a DRBG for its kernel CSPRNG. You can think of a DRBG as a stream cipher, just like AES-128-CTR, where random events are hashed to form the key, and the output is the raw keystream. Asking for 100 megabytes of additional keystream doesn't "drain" the "key" from the keystream.
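To make that concrete, here's a minimal sketch of the idea, using Python's third-party `cryptography` package and SHA-256 purely for illustration; real kernel DRBGs use different constructions and reseed over time:

```python
import hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# Pretend these are hardware/interrupt timing events collected at boot.
entropy_events = [b"interrupt-timings", b"disk-seek-jitter", b"boot-clock"]

# Hash the events together to form a fixed-size key (the "seed").
key = hashlib.sha256(b"".join(entropy_events)).digest()[:16]  # 128-bit key

# The raw AES-128-CTR keystream under that key is the "random" output.
drbg = Cipher(algorithms.AES(key), modes.CTR(b"\x00" * 16)).encryptor()

# Ask for 100 MB of output. Nothing about the key is "used up" by doing this;
# we could keep asking for gigabytes more without weakening it.
output = drbg.update(b"\x00" * (100 * 1024 * 1024))
print(len(output))  # 104857600
```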
Linux has a terrible misfeature, born of a failure to understand the crypto primitives they're using in their CSPRNG. /dev/random keeps an "estimator" that tries to determine how much "entropy" has been fed to the CSPRNG, and it decrements that estimator when requests come in for random bytes. Linux claims you can drain its CSPRNG. Linux is wrong; that is not how crypto works.
Linux should lose that feature, but it won't. There's no reason you should have to go along with it, though; just use urandom, which does not block on the estimator.
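If it helps, here's what that looks like in practice; Python is used here just as a convenient way to show the calls, and any language's wrapper around getrandom()/urandom behaves the same way:

```python
import os
import secrets

# os.urandom reads the kernel CSPRNG and never blocks to "refill entropy"
# (on Linux it may block only once, at early boot, until the pool is seeded).
key = os.urandom(32)

# The stdlib secrets module is also backed by the same kernel CSPRNG.
token = secrets.token_hex(16)

print(key.hex(), token)
```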
I think entropy, however you measure it, does influence something like a UUID generator differently than it does the cryptographic algorithms. What you are trying to achieve, or more importantly the condition you are trying to avoid, is different.
Yes, but most UUIDs, when moved across networks, are rendered as hex, so 32 bytes/characters, whereas NUIDs are 22 bytes, so generally a savings of 10 bytes per ID when stored as a string.
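A quick sanity check of that arithmetic; the 22-character NUID is simulated here as a random base62 string of the same length, whereas the real NATS NUID has its own prefixed/sequenced structure:

```python
import uuid
import secrets
import string

BASE62 = string.ascii_letters + string.digits

u = uuid.uuid4().hex                                    # 32 hex characters (36 with dashes)
n = "".join(secrets.choice(BASE62) for _ in range(22))  # 22 base62 characters

print(len(u), len(n))  # 32 22 -> 10 characters saved per ID stored as a string
```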