Hacker News | botacode's comments

What other habits besides flossing are shown to reduce microbiome diversity?


"In further analyses, the researchers found that, for cancer patients, whether their HDHP had a health savings accounts (HSAs) did not make a difference."


An HDHP is a federal requirement if you want an HSA. Full stop. Everyone I work with and nearly everyone I know has one, and no one is dying. The headline is hyperbolic, but given the source I'm not surprised.


Load just makes LLMs behave less deterministically and likely degrade. See: https://thinkingmachines.ai/blog/defeating-nondeterminism-in...

They don't have to be malicious operators in this case. It just happens.


> malicious

It doesn't have to be malicious. If my workflow is to send a prompt once and hopefully accept the result, then degradation matters a lot. If degradation is causing me to silently get worse code output on some of my commits it matters to me.

I care about -expected- performance when picking which model to use, not optimal benchmark performance.


Non-determinism isn’t the same as degradation.

The non-determinism means that even with a temperature of 0.0, you can’t expect the outputs to be the same across API calls.

In practice, people tend to anchor on the best results they've experienced and view anything else as degradation, when it may just be randomness in either direction from the prompts. When you're getting good results you assume that's normal; when things feel off you think something abnormal is happening. Rerun the exact same prompts and context with temperature 0 and you might get a different result.


This has nothing to do with overloading. The suspicion is that when there is too much demand (or they just want to save costs), Anthropic sometimes uses a less capable (quantized, distilled, etc) version of the model. People want to measure this so there is concrete evidence instead of hunches and feelings.

To say that this measurement is bad because the server might just be overloaded completely misses the point. The point is to see if the model sometimes silently performs worse. If I get a response from "Opus", I want a response from Opus. Or at least want to be told that I'm getting slightly-dumber-Opus this hour because the server load is too much.


“Just drink the water, it’s all water.”


This is about variance of daily statistics, so I think the suggestions are entirely appropriate in this context.


The question I have now after reading this paper (which was really insightful) is do the models really get worse under load, or do they just have a higher variance? It seems like the latter is what we should expect, not it getting worse, but absent load data we can't really know.


Explain this though. The code is deterministic, even if it relies on pseudo random number generation. It doesn't just happen, someone has to make a conscious decision to force a different code path (or model) if the system is loaded.


It's not deterministic. Any individual floating point mul/add is deterministic, but in a GPU these are all happening in parallel and the accumulation happens in whatever order they complete.

When you add A then B then C, you get a different answer than C then A then B, because floating point, approximation error, subnormals etc.
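To see the grouping effect concretely, here's a minimal Python sketch (the same phenomenon happens, at scale, inside GPU reductions; the values are chosen just to make the rounding visible):

```python
# Floating-point addition is not associative: when magnitudes differ enough,
# low-order bits are rounded away, so the grouping changes the result.
a, b, c = 1e20, -1e20, 1.0

left = (a + b) + c   # (1e20 - 1e20) + 1.0  ->  0.0 + 1.0  ->  1.0
right = a + (b + c)  # -1e20 + 1.0 rounds to -1e20, so the sum collapses to 0.0

print(left, right)   # the two groupings disagree
```

With many threads accumulating partial sums in a nondeterministic order, you effectively re-roll this grouping on every run.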


It can be made deterministic. It's not trivial and can slow it down a bit (not much) but there are environment variables you can set to make your GPU computations bitwise reproducible. I have done this in training models with Pytorch.
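For reference, the knobs involved are roughly these (a sketch only; the commenter's exact setup isn't specified, and the required flags vary by PyTorch/CUDA version):

```python
import os

# Must be set before the first CUDA call; forces cuBLAS to use
# fixed workspace sizes so reductions run in a deterministic order.
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"

import torch

torch.manual_seed(0)
torch.use_deterministic_algorithms(True)  # error out on ops with no deterministic impl
torch.backends.cudnn.benchmark = False    # disable autotuning that varies kernel choice
```

The performance cost comes from pinning the order of operations instead of letting kernels accumulate in completion order.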


There are settings to make it reproducible but they incur a non-negligible drop in performance.

Unsurprising given they amount to explicit synchronization to make the order of operations deterministic.



For all practical purposes any code reliant on the output of a PRNG is non-deterministic in all but the most pedantic senses... And if the LLM temperature isn't set to 0, LLMs are sampling from a distribution.

If you're going to call a PRNG deterministic then the outcome of a complicated concurrent system with no guaranteed ordering is going to be deterministic too!


No, this isn't right. There are totally legitimate use cases for PRNGs as sources of random number sequences following a certain probability distribution where freezing the seed and getting reproducibility is actually required.
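A minimal sketch of that use case: freeze the seed and the "random" sequence is exactly reproducible across runs:

```python
import random

# Two generators with the same seed produce identical sequences,
# which is exactly what reproducible experiments rely on.
rng1 = random.Random(42)
rng2 = random.Random(42)

run1 = [rng1.random() for _ in range(5)]
run2 = [rng2.random() for _ in range(5)]

assert run1 == run2  # same seed, same sequence, every time
```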


And for a complicated concurrent system you can also replay the exact timings and orderings as well!


That's completely different from PRNGs. I don't understand why you think those things belong together.


How is this related to overloading? The nondeterminism should not be a function of overloading. It should just time out or reply slower. It will only be dumber if it gets rerouted to a dumber, faster model eg quantized.


Temperature can't be literally zero, or it creates a divide by zero error.

When people say zero, it is shorthand for “as deterministic as this system allows”, but it's still not completely deterministic.


Zero temp just uses argmax, which is what softmax approaches if you take the limit of T to zero anyway. So it could very well be deterministic.
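A quick sketch of that limit with toy logits in plain Python: as T shrinks, softmax piles essentially all probability on the argmax, which is what greedy decoding picks directly:

```python
import math

def softmax(logits, temperature):
    # Scale logits by 1/T, then normalize; as T -> 0 the largest logit dominates.
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
for t in (1.0, 0.1, 0.01):
    print(t, softmax(logits, t))
# As T shrinks, nearly all probability mass lands on index 0 (the argmax),
# so greedy decoding (pure argmax) is the T -> 0 limit without any division by T.
```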


Floating point math isn't associative for operations that are associative in normal math.


That would just add up to statistical noise instead of 10% degradation over a week.


Catastrophic error accumulation can produce more profound effects than noise.


Just to make sure I got this right. They serve millions of requests a day & somehow catastrophic error accumulation is what is causing the 10% degradation & no one at Anthropic is noticing it. Is that the theory?


FYI, something in that region happened last August/September. Some inference bug triggered worse performance on TPUs vs GPUs.


There are a million ways to make LLM inference more efficient at the cost of output quality, like using a smaller model, using quantized models, using speculative decoding with a more permissive rejection threshold, etc etc
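As an illustration of one of those tradeoffs, here's a toy symmetric int8 quantization round-trip (purely illustrative, not any provider's actual scheme): storing weights as 8-bit codes plus one float scale saves memory and bandwidth, but reintroduces the weights with rounding error:

```python
# Toy symmetric int8 quantization: each weight becomes an integer code in
# [-127, 127] plus a shared float scale; dequantizing loses precision.
weights = [0.1234, -0.5678, 0.9012, -0.3456]

scale = max(abs(w) for w in weights) / 127.0
q = [round(w / scale) for w in weights]   # int8 codes
deq = [v * scale for v in q]              # reconstructed weights

errors = [abs(w - d) for w, d in zip(weights, deq)]
print(max(errors))  # worst-case rounding error, bounded by scale / 2
```

Each such trick shifts outputs slightly, which is exactly why people want a way to detect when one is silently in use.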


It takes a different code path for efficiency.

e.g.

    if batch_size > 1024:
        kernel_x()
    else:
        kernel_y()


The primary (non-malicious, non-stupid) explanation given here is batching. But I think if you looked at large-scale inference you would find that the batch sizes being run on any given rig are fairly static - there is a sweet spot for any given model, run individually, between memory consumption and GPU utilization, and generally GPUs do badly at job parallelism.

I think the more likely explanation is again with the extremely heterogeneous compute platforms they run on.


That's why I'd love to get stats on load/hardware/location of where my inference is running. Looking at you, Trainium.


Why do you think batching has anything to do with the model getting dumber? Do you know what batching means?


Well if you were to read the link you might just find out! Today is your chance to be less dumb than the model!


I checked the link, and it never says that the model's predictions get lower quality due to batching, just nondeterministic. I don't understand why people conflate these things. Also, it's unlikely that they use smaller batch sizes when load is lower. They more likely spin up and down GPU servers based on demand, or reallocate servers and GPUs between different roles and tasks.


It's very clearly a cost tradeoff that they control and that should be measured.


Excellent, level-headed read that appropriately acknowledges that we live in a world ultimately bounded by physics that (at some point) no amount of money or human attention can overcome.


It is very unclear if the fuss is real at this stage of the AI hype cycle or just driven by increasing Twitter groupthink.

Give it a few weeks.


You go on Twitter and everyone is talking about it.


This is a great perspective, and one that I've come around to after years of being anxious about being perceived online.

The shift has been very rewarding and has opened up tons of opportunities and partnerships both personally and for my startup.

The snark:signal ratio section rings very true.

For the more LLM-pilled, I'd also recommend reading the Gwern piece on writing online: https://gwern.net/llm-writing


Great article that concretizes a lot of intuitions I've had while vibe coding in Elixir.

We don't 100% AI it but this very much matches our experience, especially the bits about defensiveness.

Going to do some testing this week to see if a better agents file can't improve some of the author's testing struggles.


A hacker seeking to change a political system-- independent of alignment-- would be well advised to take an approach that is almost the exact inverse of this project's.

The research on getting people to change political attitudes or engage in pro-social political behaviors says that public shame, especially amongst their friends/communities/families, is the most effective lever available.

So, instead of making a list of everyone who believes in $prosocial_behaviors_and_policies, create a publicly searchable and verifiable database of folks engaged in $anti_social_behaviors_and_policies that are destructive to their communities.

Better transparency into where everyone stands helps to prevent toxic policies and rhetoric that poison the commons and allows communities (teammates, employers, friends, and family-- both present and future) to then apply social pressure or the threat of ostracism in order to generate meaningful change.

There's a reason that bad actors (of all stripes and political affiliations) fear transparency! It's a highly effective tool for aligning behavior with societal/community values.


Getting a 403 when I try to read. Anyone have a backup link?


This is a really cool small-scale experiment. More of this type of work is needed so we can have fairer and more transparent search options.

I'd be curious to see the sensitivity of the ratings to things like rater composition (is it a quirk of Redditors that they like Bing better?) and search topics.

Also makes me wonder how much the ratings have to do with a decline in quality (or diversification/drift away from the Redditor vector) of Google's own search raters (the index is heavily influenced by manual rating).


It'd be very interesting to run the same experiment not on different engines but on different versions of Google from different years. Sadly I can't really search 2005 Google anymore, of course, but it would be interesting to see whether it's Google in general or rather the recent "enshittification" of their results. It might also be intriguing to expand to survey takers outside of Reddit to get a potentially less biased set of results, yes.


Wonder if you could reconstruct synthetic versions of Gsearch with archive pages. I'd guess SEO/SEM companies have the data to build a small version of this and track the changes over time.

