Ctrl-Z suspends the program in most UNIX shells. ("fg" to resume)
Ctrl-S may or may not stop the program, depending on how much it's printing and how much output buffering there is before it blocks on writing more.
All my shell RCs turn off xon/xoff -- that's a relic from the PDP-11 days we can all do without.
Windows has the Scroll Lock button that's supposed to do this if you need it, but typically, just selecting a character in a terminal emulator will stop the scroll while still buffering the output.
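FWIW, the rc one-liner for turning off xon/xoff is "stty -ixon". A minimal Python sketch of the same termios change, assuming stdin is a POSIX tty:

    # Same effect as putting `stty -ixon` in a shell rc: clear the IXON
    # input flag so Ctrl-S / Ctrl-Q no longer pause and resume output.
    import sys
    import termios

    fd = sys.stdin.fileno()
    attrs = termios.tcgetattr(fd)
    attrs[0] &= ~termios.IXON              # index 0 of the attr list is iflag
    termios.tcsetattr(fd, termios.TCSANOW, attrs)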
I don't think this analysis matches the underlying implementation.
The models are typically wide enough to "explore" many possible actions, score them, and let the sampler pick the next action based on the weights. (Whether a given trained parameter set will be any good at it is a different question.)
The number of attention heads for the context is similarly quite high.
And, as a matter of mechanics, the core neuron formulation (a dot product over the inputs followed by a non-linearity) excels at working with ranges.
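As a toy illustration of that mechanic (every shape and number below is made up, not a real poker model): embed a few candidate actions, score each against the context with a dot product, and sample from the resulting weights:

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy sketch: embed candidate actions, score each with a dot product,
    # squash to a distribution, and let the sampler pick.
    actions = ["fold", "call", "raise_min", "raise_pot", "all_in"]
    state = rng.normal(size=64)                       # context/hidden state
    action_emb = rng.normal(size=(len(actions), 64))  # one row per action

    logits = action_emb @ state                       # dot-product scores
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                              # softmax over candidates

    print(rng.choice(actions, p=probs))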
No, the widths are not wide enough to explore. The number of possible game states can easily explode beyond the number of atoms in the universe, especially with deep stacks and small big blinds.
For example, when computing the counterfactual tree for a 9-way preflop: 9 players each have up to 6 points at which they can be asked to act (seat 0 can bet 1, seat 1 raises min, seat 2 calls, back to seat 0 who raises min, with seat 1 calling, and seat 2 raising min, etc.). Each of those decision points offers check, fold, bet min, raise the min (starting blinds of 100 are pretty high already), raise one more than the min, raise two more than the min, ... raise all in (with up to a million chips).
Roughly (1,000,000 - 100) ~= 999,900 distinct raise sizes per decision, raised to up to 6 decisions per player per round, raised to 9 players -- on the order of 999,900^(6*9) lines, and that's just preflop. Then the flop, turn, river, and showdown. Now imagine that we also have to simulate which cards each player holds and the order the board cards come out on the streets (that greatly changes the value of the pot).
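Back-of-the-envelope, using just the numbers above, the naive preflop tree alone dwarfs anything enumerable:

    from math import log10

    # Rough size of the naive preflop tree from the numbers above:
    # ~999,900 raise sizes (1-chip increments up to a 1,000,000 stack),
    # plus fold/check/call, at up to 6 decisions per player, 9 players.
    actions = (1_000_000 - 100) + 3
    decisions = 6 * 9

    # actions ** decisions is far too large to materialize; report the
    # order of magnitude. ~10^324, vs ~10^80 atoms in the universe.
    print(f"~10^{decisions * log10(actions):.0f} preflop lines")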
As for LLMs being great at range stats, I would point you to the latest research from UChicago: text-trained LLMs are horrible at multiplication. Try getting any of them to multiply a non-round number by e or pi. https://computerscience.uchicago.edu/news/why-cant-powerful-...
Don't get me wrong, though. Masked attention and sequence-based context models are going to be critical to machines solving hidden-information problems like this. But large language models trained on the web crawl and The Stack, with text input, will not be those models.
Tool-using LLMs can easily be given a tool to sample whatever distribution you want. The trick is prompting them on when to invoke the tool and how to use its output correctly.
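A sketch of what the harness side of such a tool might look like; the tool name and signature here are made up, not any particular vendor's API:

    import random

    # Hypothetical tool the harness exposes; the model emits a call like
    # {"name": "sample_action", "weights": {...}} and gets the result
    # back in context, instead of doing the sampling "in its head".
    def sample_action(weights: dict, seed=None) -> str:
        """Sample one action from a weighted distribution."""
        rng = random.Random(seed)
        actions = list(weights)
        return rng.choices(actions, weights=[weights[a] for a in actions], k=1)[0]

    print(sample_action({"fold": 0.2, "call": 0.5, "raise_min": 0.3}, seed=42))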
Huh? It sounds to me like this is arguing one should be OK with /r/conservative doing it (and even joining up), but not OK with other subs doing it too.
That doesn't really pass the sniff test, so maybe I'm missing something.
I'm more trying to say that if you find it wrong that r/conservative does it, then you shouldn't do it yourself. Other people's bad behaviour should not be a justification for your own.
When it comes to morality, we can't control how other people act, we can only control what we ourselves do.
Especially when the "retaliation" is aimed at members and not the people implementing the mod policy.
No, I mean Luau itself. I distribute Luau as a DLL by default, but without the analysis module, to keep it small (~600 KB). If users want analysis, they need to build Luau from source. Bindings are not an issue, since they use only the C API from Luau, which is fast to compile.
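For illustration, a binding that sticks to the C API can just load the small DLL at runtime. A hypothetical ctypes sketch (the DLL name and which symbols get exported are assumptions about the build):

    import ctypes

    # Load the stripped-down VM DLL -- no analysis module needed, since
    # everything below is plain Luau C API. "luau.dll" is an assumed name.
    vm = ctypes.CDLL("./luau.dll")
    vm.luaL_newstate.restype = ctypes.c_void_p    # returns lua_State*
    vm.lua_close.argtypes = [ctypes.c_void_p]

    L = vm.luaL_newstate()                        # create a VM
    vm.lua_close(L)                               # tear it down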