Rust's discussion boards have floated an idea called "keyword generics" for expressing some of these concepts: a function can be generic over const, async, or some other keyworded effect. I like this description; it shows the benefits without too much theory.
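For context, stable Rust already has one instance of this pattern: `const fn`, a single definition callable in both const and runtime contexts. A minimal sketch (the generalization to `async` from the keyword-generics discussions is still just a proposal):

```rust
// `const fn` is today's precedent for "keyword generics": one
// definition usable in both const and non-const contexts. The
// proposal would extend the same idea to keywords like `async`.
const fn square(x: u32) -> u32 {
    x * x
}

// Evaluated entirely by the compiler.
const AT_COMPILE_TIME: u32 = square(4);

fn main() {
    // The very same function, called at run time.
    let at_run_time = square(5);
    println!("{} {}", AT_COMPILE_TIME, at_run_time); // prints "16 25"
}
```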
Yours might go a little less into the details, but it's really clear, and I like the diagrams and the explanation around glitch hazards. Please do follow up on your tangents if you have time.
I have commented once or twice on articles being AI generated. I don't add such comments when I think the writer used AI to clean up some text. I add them when there are paragraphs of meaningless or incorrect content.
Formats, name collisions or back-button breakage are tangential to the content of the article. Being AI generated isn't. And it does add to the overall HN conversation by making it easier to focus on meaningful content and not AI generated text.
Basically, if the writer didn't do a good job checking and understanding the content we shouldn't bother to either.
The fascinating paradox: there are clearly "tells" (slop-smells, like code-smells?) of LLM-generated text. We're all developing heuristics rapidly, which probably pass a Pepsi challenge 95+% of the time.
And yet: LLMs are writing entirely based on human input. Presumably there exists a great quantity of median representative text, some lowest-common denominator, of humans who write similarly to these heuristics.
(In particular: why are LLMs so fond of em-dashes, when I'm not sure I've ever seen them used in the wilds of the internet?)
An LLM was used to help polish the text, and that section probably got over-polished. The ideas and code are authentic. The article is intentionally detailed for those who want the full reasoning, but you can always skip straight to the source code.
Out of curiosity, what particular part of the original text needed to be polished, and why couldn't the writer accomplish said polish without a language model?
When writing the articles in that series, the focus was more on getting the technical ideas and details right than on spelling, grammar, and text flow, which is where LLMs excel. That specific section, "Why this maps to Genetic Algorithms?", makes the point that the fit of Genetic Algorithms to state space exploration is not a coincidence: the evolutionary process itself is a state space exploration algorithm that allows a given species to advance further down life's timeline.

But thanks for the question. I do agree that as LLM-generated text becomes more ubiquitous, we all appreciate our own human writing style more, even in highly technical text where the focus is on the technical ideas rather than the presentation language itself.
By the definition of a cryptographically secure PRNG, no. With overwhelming probability, it produces results indistinguishable from truly random numbers, no matter what procedure you use to tell them apart.
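A concrete illustration of what "indistinguishable" means in practice: even a simple statistical check like the monobit frequency test (the first test in the NIST SP 800-22 suite) finds nothing to latch onto in CSPRNG output, and by definition the same holds for any efficient test. A sketch, assuming a Unix-like system where `/dev/urandom` exposes the OS CSPRNG:

```rust
use std::fs::File;
use std::io::Read;

// Monobit frequency test: how many standard deviations the count of
// one-bits is from the n/2 expected for uniformly random bits.
fn monobit_sigma(buf: &[u8]) -> f64 {
    let ones: u32 = buf.iter().map(|b| b.count_ones()).sum();
    let n = (buf.len() * 8) as f64;
    (ones as f64 - n / 2.0) / (n / 4.0).sqrt()
}

fn main() -> std::io::Result<()> {
    // Sample the OS CSPRNG (assumes a Unix-like /dev/urandom).
    let mut buf = [0u8; 12_500]; // 100,000 bits
    File::open("/dev/urandom")?.read_exact(&mut buf)?;
    let sigma = monobit_sigma(&buf);
    println!("monobit deviation: {:.2} sigma", sigma);
    // For genuinely unbiased output, |sigma| >= 6 has probability ~2e-9.
    assert!(sigma.abs() < 6.0, "output looks biased");
    Ok(())
}
```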
I worked on a team deploying a service to European Sovereign Cloud (ESC). Disclaimer - I am a low level SDE and all opinions are my own.
AWS has set up proper boundaries between ESC and global AWS. Since I'm based out of the US, I can't see anything going on in ESC, even in the service we develop. To fix an issue there we have to play telephone with an engineer in ESC, who gives us a summary of the issue or debugs it on their own. All data really is 100% staying within ESC.
My guess is that ESC will be less reliable than other regions, at least for about a year. The isolation really slows down debugging issues. Problems that would be fixed in a day or two can take a month. The engineers in ESC don't have the same level of knowledge about systems as the teams owning them. The teething issues will eventually resolve, but new features will be delayed within the region.
If you're a current AMZN employee, you may want to delete or heavily edit this post. Go check your employer's "social media policy." Historically, commenting on operational or internal aspects without PR approval was prohibited.
While it's good to remain anonymous to avoid reprisals, once that's done no one should care about upsetting their employer in an open forum. Despite what a corporation says, they don't own you, your thoughts, or your voice.
Still, it sounds like it would be the optimal choice for a redundancy zone in some senses, since it's probably not going to have any accidental dependency on us-east-1.
The consequence of Noether's theorem is that if a system is time-symmetric, then energy is conserved. From a global perspective, the universe isn't time-symmetric: it has a beginning and an expansion through time. This isn't reversible, so energy isn't conserved.
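The argument in Lagrangian-mechanics shorthand (a standard sketch, not tied to any particular cosmological model):

```latex
% Time-translation symmetry means the Lagrangian has no explicit
% time dependence; the Hamiltonian H (the energy) then obeys
H = \sum_i p_i \dot{q}_i - L,
\qquad
\frac{dH}{dt} = -\frac{\partial L}{\partial t}.
% In an expanding universe \partial L / \partial t \neq 0, so
% dH/dt \neq 0 and there is no globally conserved energy.
```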
Please explain. Noether's theorem equates global symmetry laws with local conservation laws. The universe does not in fact have global symmetry across time.
You are making the same mistake as OP. Formal models and their associated ontology are not equivalent to reality. If you don't think conservation principles are valid then write a paper & win a prize instead of telling me you know for a fact that there are no global symmetries.
RL before LLMs could very much learn new behaviors. Take a look at AlphaGo for that. It can also learn to drive in simulated environments. RL in LLMs is not learning the same way, so it can't create its own behaviors.
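The classic-RL claim is easy to demonstrate at toy scale. A minimal tabular Q-learning sketch (hypothetical five-state corridor environment, not anything from AlphaGo itself) that discovers the "always go right" behavior purely from a reward signal:

```rust
// Toy tabular Q-learning on a 5-state corridor: states 0..4, state 4
// is terminal, and moving into it yields reward 1. Actions: 0 = left,
// 1 = right. Hypothetical minimal example, dependency-free.
fn train() -> [[f64; 2]; 5] {
    const N: usize = 5;
    let (alpha, gamma, eps) = (0.5, 0.9, 0.2);
    let mut q = [[0.0f64; 2]; N]; // q[state][action]
    let mut rng: u64 = 42; // tiny LCG so the sketch needs no crates
    let mut rand = move || {
        rng = rng.wrapping_mul(6364136223846793005).wrapping_add(1);
        (rng >> 40) as f64 / (1u64 << 24) as f64 // uniform in [0, 1)
    };
    for _ in 0..500 {
        let mut s = 0usize;
        while s != N - 1 {
            // Epsilon-greedy action selection.
            let a = if rand() < eps || q[s][0] == q[s][1] {
                (rand() * 2.0) as usize
            } else {
                (q[s][1] > q[s][0]) as usize
            };
            let s2 = if a == 1 { s + 1 } else { s.saturating_sub(1) };
            let r = if s2 == N - 1 { 1.0 } else { 0.0 };
            // Standard Q-learning update.
            q[s][a] += alpha * (r + gamma * q[s2][0].max(q[s2][1]) - q[s][a]);
            s = s2;
        }
    }
    q
}

fn main() {
    let q = train();
    // The learned greedy policy is "go right" in every state.
    assert!((0..4).all(|s| q[s][1] > q[s][0]));
    println!("learned policy: always right");
}
```

The agent starts with all-zero value estimates and no notion of "go right"; the behavior emerges entirely from reward propagation, which is the sense in which pre-LLM RL learns new behaviors.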