I don't like it much. Using JSON as the transport has problems when it has to be encoded into a URL, as many auth flows require. Paseto encodes the whole version+payload+signature into a single compact token to make it easier to transport. Of course you could just base64-encode the whole Coze JSON, but that isn't part of the spec, which means the spec is weak.
Can't fix problems in a project? Increase the scope to make more problems elsewhere. Soon tentacles emerge, everything has problems, and your project doesn't look as relatively bad.
It doesn't, and the problems have only become more problematic over time, but it's the least bad hypothesis that's broadly accepted. I suspect a generational succession is required for new paradigms to be contemplated.
There are many researchers proposing simpler, novel, and testable solutions that seem to go unnoticed. I'm a fan of Alexandre Deur's work, for example. He has some simple and elegant solutions that I've never seen discussed even though they appear "obvious", such as this one from 21 years ago: https://arxiv.org/pdf/2004.05905
That paper suggests that one of the reasons galaxies are spinning faster than some calculations expect is that those calculations fail to account for the gravitational lensing of gravity itself, which bends gravity down towards the disk.
That paper focuses on rotation curves, like all DM-skeptic papers do. I can only assume that's because this problem is understandable with high-school-level math. But that's neither the only nor the best evidence for DM. If your new hypothesis doesn't even mention the CMB power spectrum, it's not really worth listening to, sorry. And to be taken seriously, it has to explain at least most of the data. DM does that; everything else does not.
I'm just a layman, but in this[1] paper from 2023 Deur and his collaborators took his model[2] and applied it to the Hubble Tension problem. This paper does mention fitting the CMB well (as I understand it), and the model having no Hubble Tension.
I know his work has been contentious, and that he has used multiple, not entirely compatible models for different problems, which weakens his claims.
That said, at least from my armchair it seems like a worthwhile direction to pursue.
> If your new hypothesis doesn't even mention the CMB power spectrum
MOND successfully predicted the first peak of the power spectrum. I wonder why everyone focuses so much on LCDM predicting the second peak.
> DM does that, everything else does not.
Whenever someone says DM "does that", it's often after its initial prediction was falsified and the calculation was modified in some way to account for the new observations [1,2]. This has been going on for decades, so that's hardly a ringing endorsement.
I'm not surprised mind you, this is the hallmark of the confirmation bias that's been characteristic of LCDM for decades now.
[2] Not that MOND is a suitable replacement because it too has its problems. My only point is that this tendency to sweep these inconveniences under the rug as if DM is a compelling and successful theory and saying "nothing else does the job" is disingenuous at best. What you should say is that "nothing does, period, not even our best DM theory", because that's the truth.
The phrase "Dark Matter" literally means we don't know and therefore until something testable is postulated and tested (to be fair i believe some candidates have fallen by the wayside over the years as measurement has improved), it's principally equivalent to plugging in a giant X and giving it properties not unlike Fermi's famous elephant curve fitting comment.
Just FYI I have a PhD in cosmology, so no need to explain to me what "Dark Matter" does or doesn't mean, but thanks anyway. It sounds like you saw that video by Angela Collier about how Dark Matter is a set of observations, and while I think it's a good video, it's a bit disingenuous to pretend that working scientists put theories of dark matter and theories of modified gravity in the same category. I know Collier literally says that MOND is a DM theory, but I respectfully disagree, as this does not reflect the reality of the language researchers use. Even if you didn't see that video, my point still stands.
Basically, our equation isn't working, and roughly speaking the equation has gravity on the left hand side and matter content on the right hand side. Matter tells spacetime how to curve and spacetime tells matter how to move, is the old motto. Because the equation isn't working, we have two options: modifying the left hand side or modifying the right hand side (or both). In my perception, researchers refer to the first option as theories of modified gravity, and the other option as theories of dark matter.
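To make the two sides concrete, the field equations are, schematically (my quick gloss, omitting conventions):

    G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}

Modified-gravity theories change the curvature terms on the left hand side, while dark-matter theories add an extra, unseen component to the stress-energy tensor T_{\mu\nu} on the right.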
Putting both options into one category is oversimplifying the situation and isn't helpful.
> I suspect a generational succession is required for new paradigms to be contemplated.
There is a constant stream of new paradigms being contemplated (including this one!).
The problem is that they’re contemplated, tested and found wanting.
The notion of dark matter (and dark energy, which is a completely different animal) isn’t hanging around because of stubborn professors or a lack of imagination, it’s because nothing better has come along yet.
The good thing about this theory is that it seems easily testable. Maybe it’ll be different.
On this exact issue, my work involved extensive testing and research into the various standards.
Although we found browsers were out of alignment with the standards on all sorts of matters, we found broad compatibility with upper case. (Of course, that means everything before the path. Interpretation of the path is delegated to the server, which may or may not be case sensitive, up until the octothorpe, #, after which the fragment is interpreted solely by the browser.)
You can switch modes. (Yes that costs a dozen bits if you were otherwise able to stay in the same mode the entire time. Oh well, but I'd say it's worth it to avoid base45.)
And base45 is less efficient than raw alphanumeric anyway, if you look at the numbers.
Alphanumeric is the most efficient QR code encoding mode.
(Just to further make this clear: for QR, Byte encoding uses ISO/IEC 8859-1, where 65 characters are undefined, so 191/256, which is ~75%. If character encoding isn't an issue, then Byte encoding is the most efficient, 256/256, 100%, but that's a very rare edge case. Also, last time I did the math on Kanji it was about 81% efficient.* *I have not dug too deep into Kanji and there may be a way to make it more efficient than I'm aware of. I've never considered it useful for my applications, so I have not looked.)
That is a semi-correct calculation of the wrong number. Base45 does not use all 45 characters in every slot. It goes 16 bits at a time, so the character storing the upper bits only has 2^16/45^2 = 33 possible values.
The most straightforward way to measure efficiency is to see that base45 takes 32 source bits and encodes them into 33 bits. The way you're calculating (possible outputs over available bit patterns), that's 2^32/2^33, only 50%.
But the better way to calculate efficiency is to take the log of everything (in other words, count how many bits are needed). Numeric is log(1000)/log(1024) which is 99.7%. Alphanum is 99.9%. Base45 is 97%.
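A quick sketch of that calculation (Python; I'm assuming the standard QR packing of 3 numeric digits into 10 bits and 2 alphanumeric characters into 11 bits, and RFC base45's 16 source bits per 3 characters):

    from math import log2

    def log_efficiency(values: int, bits: float) -> float:
        """Bits of information actually carried divided by bits spent."""
        return log2(values) / bits

    # QR Numeric: 3 digits (1000 values) packed into 10 bits
    print(log_efficiency(1000, 10))     # ~0.997
    # QR Alphanumeric: 2 characters (45^2 values) packed into 11 bits
    print(log_efficiency(45**2, 11))    # ~0.999
    # RFC base45: 16 source bits spread over 3 characters,
    # each character worth log2(45) ~ 5.49 bits of channel
    print(16 / (3 * log2(45)))          # ~0.971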
And I don't know where that kanji number came from. It stores 13 bits at a time, mapping to 8192 shift-JIS code points, and the vast majority of them are valid. It's pretty efficient.
Huh? I don't necessarily care about an exact "base45", I care about QR code alphanumeric, which just so happens to be a (generic) base 45 character set. For QR code, two characters are encoded into 11 bits.
> in every slot.
I've worked with the QR code standards pretty seriously, and I'm unfamiliar with the term "slots" being used by the standards. This is why I suspect you're referring specifically to RFC base45 (although the term isn't used there either), which QR code doesn't care about.
I also don't care about RFC Base 45 and would prefer to use a more bit space efficient method, such as using the iterative divide by radix method, which I also call "natural base conversion".
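For clarity, here is a minimal sketch of the iterative divide-by-radix idea (Python; the alphabet shown is QR's alphanumeric set, used only as an example radix, and this naive version ignores leading zero bytes and padding, so it's illustrative rather than exactly what we use):

    # QR alphanumeric character set (45 symbols)
    ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:"

    def natural_base_encode(data: bytes, alphabet: str = ALPHABET) -> str:
        """Treat the input as one big integer and peel off digits with divmod."""
        radix = len(alphabet)
        n = int.from_bytes(data, "big")
        if n == 0:
            return alphabet[0]
        digits = []
        while n:
            n, rem = divmod(n, radix)
            digits.append(alphabet[rem])
        return "".join(reversed(digits))

    print(natural_base_encode(b"hello world"))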
> base45 takes 32 source bits
For QR code alphanumeric, 6 characters use 33 bits, not 32.
> way to calculate efficiency
The way we calculate this, for example 2025/2048 for an alphanumeric pair, we've termed "bit space efficiency". I'm not sure how commonly this term is used in the rest of the industry. On that note, I thought I had read "the iterative divide by radix algorithm" somewhere in industry, but after searching, it turns out to be a term novel to our work.
This is also similar to the way Shannon originally calculated entropy and appears to be a fundamental representation of information. Of course the log is useful, but it often results in partial bits or rounding (about 5.5 bits per character in the case of alphanumeric), which is somewhat absurd considering that the bit is the quantum of information, again as shown by Shannon. There is no such thing as a partial bit that can be communicated (information being fundamental to communication), so we've found the fractional representation to be more informative and easier to work with.
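As a toy illustration of the two representations (Python; "bit_space_efficiency" here is just our term from above, not any standard function):

    from math import ceil, log2

    def bit_space_efficiency(values: int) -> float:
        """Usable values divided by the total values of the whole bits they occupy."""
        bits = ceil(log2(values))       # whole bits actually consumed
        return values / 2**bits

    print(bit_space_efficiency(45**2))  # 2025/2048 ~ 0.989 (one alphanumeric pair)
    print(bit_space_efficiency(1000))   # 1000/1024 ~ 0.977 (one numeric triplet)
    print(log2(45))                     # ~5.49, the fractional "bits per character" view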
Granted, in all of this, when I have done the math (and I have done a lot of math on this particular issue), there appeared to be some very extreme edge cases at the end of the QR code where some arbitrary data encoded as QR Numeric was slightly more efficient than Alphanumeric, but overall Alphanumeric was more efficient almost all the time. There are other considerations, like padding and escaping, that make exact calculation more difficult than it's worth. I just needed a "most of the time" calculation, and that's where I stopped.
For more detail on my work: my BASE45 work predates the RFC by 2 years, starting in 2019; I then published a base 45 alphabet, BASE45, by March 1, 2020, a whole year before the RFC. A patent including BASE45 was submitted June 22, 2021: https://image-ppubs.uspto.gov/dirsearch-public/print/downloa...
Matter of fact, because of the issues and confusion surrounding base conversion, I wrote this tool in 2019:
> Huh? I don't necessarily care about an exact "base45", I care about QR code alphanumeric
> I suspect your referring specifically to RFC base45
> For more detail of my work, my BASE45 predates the RFC by 2 years in 2019
The RFC was linked in the comment I originally replied to. The same comment where you saw the term "base45", because I didn't repeat it in my original reply.
> The way we calculate this, for example, 2025/2048, we've termed "bit space efficiency". I'm not sure how commonly adopted this term is used in the rest of the industry.
It's not a good metric when the size can vary.
3/4 uses 75% of the bit space, and 512/1024 uses 50% of the bit space. But if you give 20 bits to each, the first method can encode 59049 combinations and the second method can encode 262144 combinations.
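Measuring information per bit instead makes the difference obvious (a quick check):

    from math import log2

    # 20 bits spent either way
    print(3**10,  log2(3**10)  / 20)   # 59049 combinations,  ~0.79 bits of information per bit
    print(512**2, log2(512**2) / 20)   # 262144 combinations,  0.90 bits of information per bit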
> which is somewhat absurd considering that the bit is the quantum of information, again as shown by Shannon. There is no such thing as a partial bit that can be communicated, since information is fundamental to communication, so the fractional representation we've found to be more informative and easier to work with.
You can use any base and the math is roughly the same.
Distinguishing between two symbols is just the minimum. You can't transmit .3 bits but you can easily transmit 2.3 bits. If your receiver can distinguish between 5 symbols at full speed then 2.3 bits at a time is the most natural communication method.
> There are other considerations, like padding and escaping, that makes exact calculation more difficult than it's worth. I just needed to "most of the time" calculation and that's where I stopped.
Yeah, that's fine. They're both efficient. My deciding factor is not the tiny difference in efficiency, it's the ill-behaved symbols in alphanumeric.
> You'll create an open-source alternative to a popular cloud service that charges too much, saving fellow hackers thousands in subscription fees while earning you enough karma to retire from HN forever.
This makes zero business sense to me.