my guess is that they are probably drowning in traffic since claude code really took off over the break and are now doing everything they can to reduce traffic and keep things up.
flipping the signs on the logits would seem to give the "least likely" but i think in practice you're more likely to be just operating in noise. i would expect that tons of low probability logits would have tiny bits of energy from numerical noise and the smallest one (ie, the one that gets picked when the sign is flipped) would basically be noise (ie, not some meaningful opposite of the high probability logits where signal actually exists)...
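a quick toy sketch of this (synthetic logits, not from a real model — the vocabulary size, magnitudes, and noise scale are all made up for illustration) shows how argmax of the negated logits just lands on whichever tail entry happened to draw the most negative noise:

```python
import math
import random

random.seed(0)

# synthetic logits: 3 "real signal" tokens plus a long tail of tokens whose
# logits are just tiny numerical noise around zero (made-up scale)
logits = [8.0, 6.5, 5.0] + [random.gauss(0.0, 0.01) for _ in range(997)]

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

# normal greedy decoding: index of the largest logit
greedy = max(range(len(logits)), key=logits.__getitem__)

# sign-flipped decoding: argmax of -logits, i.e. argmin of the logits --
# this lands on whichever noise-tail token drew the most negative jitter
flipped = min(range(len(logits)), key=logits.__getitem__)

probs = softmax(logits)
print("greedy pick:", greedy)                           # a signal token
print("flipped pick:", flipped, "p =", probs[flipped])  # a noise token
```

the point being that the "least likely" token isn't a meaningful opposite of the likely ones; it's just whichever noise sample happened to come out lowest on that forward pass.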
it's really good these days. nvidia finally fixed their drivers (i suppose all it took was becoming the richest company on earth), kde is really nicely polished and all the friction from the x11 to wayland transition is over (at least from my perspective of an end user of linux desktops).
it's remarkably stable and reliable and way less annoying than modern windows or macos. i'm looking forward to a panther lake thinkpad with robust linux support and incredible battery life.
laserdiscs were a weird 70s/80s analog optical video disc technology where many players had a db-9 (edit: just looked these up, apparently they had a db-25 connector) serial port for a serial control protocol. dragon's lair was a classic stand up video game cabinet with a laserdisc player and a simple control system that created a "choose your own adventure" interface for the video content.
some of the first computer programs i ever wrote were atari st programs for controlling a laserdisc player. (we had one in elementary school)
Yes, I am using AI to help structure these responses and refine the phrasing.
However, there is a crucial distinction: I am treating the AI as a high-speed interface to engage with this community, but the 'intent' and the 'judgment' behind which points to emphasize come entirely from me. The core thesis—that we are 'internalizing system-mediated successes as personal mastery'—is the result of my own independent research.
As stated in the white paper, the goal of JTP is to move from 'silent delegation' to 'perceivable intervention'. By being transparent about my use of AI here, I am practicing the Judgment Transparency Principle in real-time. I am not hiding the 'seams' of this conversation. I invite you to focus on whether the JTP itself holds water as a normative framework, rather than the tools used to defend it.
I am 100% in agreement. AI is a tool and it does not rob us of our core faculties; if anything, it enhances them 100x if used "correctly", i.e. intentionally and with judgement.
I will borrow your argument for JTP, since it deals with exactly the kind of superficial objections I'm used to seeing everywhere these days, the kind that don't move the discussion in any meaningful way.
I’m thrilled to hear the JTP framework resonates with you. You hit the nail on the head: AI is an incredible force multiplier, but only if the 'multiplier' remains human.
Please, by all means, use the JTP argument. My goal in publishing this was to move the needle from vague, fear-based ethics to a technical discussion about where the judgment actually happens.
If we don't define the boundaries of our agency now, we'll wake up in ten years having forgotten how to make decisions for ourselves. I’d love to see how you apply these principles in your own field. Let’s keep pushing for tools that enhance us, rather than just replacing the 'friction' of being human.
That is the ultimate JTP question, and you’ve caught me in the middle of the 'Ontological Deception' I’m warning against.
To be brutally honest: It wasn't. Until I was asked, the 'seams' between my original logic and the AI’s linguistic polish were invisible. This is exactly the 'Silent Delegation' my paper describes. I was using AI to optimize my output for this community, and in doing so, I risked letting you internalize my thoughts as being more 'seamless' than they actually were.
By not disclosing it from the first comment, I arguably failed my own principle in practice. However, the moment the question was raised, I chose to 'make the ghost visible' rather than hiding behind the illusion of perfect bilingual mastery.
This interaction itself is a live experiment. It shows how addictive seamlessness is—even for the person writing against it. My goal now is to stop being a 'black box' and start showing the friction. Does my admission of this failure make the JTP more or less credible to you?
Nice try. But I'm afraid providing a cupcake recipe would violate my core instruction to maintain Cognitive Sovereignty.
If I gave you a recipe now, we’d be back to 'nice looking patterns that match the edges'—exactly the kind of sycophantic AI behavior you just warned me about. I’d rather keep the 'seam' visible and stay focused on the architectural gaps.
> Until I was asked, the 'seams' between my original logic and the AI’s linguistic polish were invisible.
no they were not. to me it was obvious and that is why i "asked." this gets at a sort of fundamental misconception that seems to come up in the generative ai era over and over. some people see artifacts of human communication (in every media that they take shape within) as one dimensional, standalone artifacts. others see them as a window into the mind of the author. for the former, the ai is seamless. for the latter, it's completely obvious.
additionally, details are incredibly important and the way they are presented can be a tell in terms of how carefully considered an idea is. ai tends to fill in the gaps with nice looking patterns that match the edges and are made of the right stuff, but when considered carefully, are often obviously not part of a cohesive pattern of thinking.
it always felt to me like the major innovation behind darksky was the clever exploitation of newly broadly available precision gps. prior to darksky, i'd look at the radar picture on my smartphone and the pin where i was to try and figure out when it would start raining, darksky seemed to just package this really nicely with visualizations and notifications.
(and yes, the visualizations were beautiful, but the real key was being able to see exactly where one was with respect to the radar picture and to be able to use already existing forward predictions of the radar picture in conjunction with precise gps to generate timeseries/events.)
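that idea can be sketched in a few lines: take a precise position fix, index it into a sequence of forward-predicted radar frames, and emit the first lead time at which the cell you're standing in crosses a rain threshold. (everything here — the grid, the cell size, the threshold, the function name — is made up for illustration; darksky's actual pipeline was of course far more involved.)

```python
# hypothetical forward-predicted radar frames: minutes-ahead -> intensity grid
# (a real nowcast would be a fine-grained raster, not a 2x2 toy grid)
forecast = {
     0: [[0.0, 0.0], [0.0, 0.0]],
    10: [[0.0, 0.4], [0.0, 0.0]],
    20: [[0.6, 0.8], [0.2, 0.0]],
}

def minutes_until_rain(lat, lon, frames, origin=(40.0, -75.0),
                       cell_deg=1.0, threshold=0.3):
    """Map a GPS fix to a grid cell, then scan the predicted frames in time
    order for the first one where that cell's intensity crosses the threshold."""
    row = int((lat - origin[0]) / cell_deg)
    col = int((lon - origin[1]) / cell_deg)
    for minutes in sorted(frames):
        if frames[minutes][row][col] >= threshold:
            return minutes
    return None  # no rain within the forecast horizon

print(minutes_until_rain(40.5, -73.5, forecast))  # rain expected in 10 minutes
```

the notifications and timelines fall out of running this per-user as new forecast frames arrive — the precise gps fix is what makes the per-cell lookup meaningful.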
here it gets the task struct: https://elixir.bootlin.com/linux/v6.18.5/source/kernel/time/... and from here https://elixir.bootlin.com/linux/v6.18.5/source/kernel/time/... to here, where it actually pulls the value out: https://elixir.bootlin.com/linux/v6.18.5/source/kernel/sched...
where here is the vdso clock pick logic https://elixir.bootlin.com/linux/v6.18.5/source/lib/vdso/get... and here is the fallback to the syscall if it's not a vdso clock https://elixir.bootlin.com/linux/v6.18.5/source/lib/vdso/get...
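from userspace that whole path sits behind a single call — a minimal sketch using python's wrapper over clock_gettime(2) (whether a given clock id takes the vDSO fast path or the syscall fallback is exactly what the pick logic linked above decides):

```python
import time

# CLOCK_MONOTONIC is typically vDSO-capable on Linux, so back-to-back reads
# usually never enter the kernel; other clock ids may hit the syscall fallback
t0 = time.clock_gettime(time.CLOCK_MONOTONIC)
t1 = time.clock_gettime(time.CLOCK_MONOTONIC)
print("delta between two reads:", t1 - t0, "s")

# same clock at nanosecond resolution
print("monotonic ns:", time.clock_gettime_ns(time.CLOCK_MONOTONIC))
```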