I don't think we'd fail to get AGI if Anthropic were to implode, and frankly, right now, I'd rather have someone say clearly, "They cannot stomach the existence of someone telling them 'No' or adhering to moral principles. Like spoiled children, they can't hear the former and are terrified by the latter, because it might expose them to the condemnation they deserve."
TLDR (story, not math) - Knuth poses a problem, his friend uses Claude to conduct some 30 explorations with careful human guidance, and Claude eventually writes a Python program that can find a solution for all odd values. Knuth then writes a proof of the approach and is very pleased with Claude's contribution. Even values remain an open question (Claude couldn't make much progress on them).
I think this is pretty clearly an overstatement of what was done. As Knuth says,
"Filip told me that the explorations reported above, though ultimately successful, weren't really smooth. He had to do some restarts when Claude stopped on random errors; then some of the previous search results were lost. After every two or three test programs were run, he had to remind Claude again and again that it was supposed to document its progress carefully."
That doesn't look like careful human guidance, especially not the kind that would actually guide the AI toward the solution at all, let alone implicitly give it the solution — that looks like a manager occasionally checking in to prod it to keep working.
Looks like he is trying to make the point that the actual (formal) proof for 2Z + 1 (the odd numbers) is still human, by himself, that is. Not sure who came up with the core modular arithmetic idea, with s = 0 and k increasing by 2 (mod m).
Totally reasonable project for many reasons, but "fast tools for AI" always make me chuckle. Imagine your job is delivering packages, and along the delivery route one of your coworkers is a literal glacier. It doesn't really matter how fast you walk, run, bike, or drive: if part of your delivery chain tops out at 30 meters per day, you're going to have a slow delivery service. The ratio between the speed of code execution and AI "thinking" is worse than in this analogy.
The crucial thing is that Tesla's valuation has the hype projects baked in. The fact that it never delivered self-driving or a robotaxi fleet, and is now being kept afloat solely by an import ban on Chinese EVs, means that any success he had with Tesla is now an illusion.
There is another way to view this. FSD plays fast and loose because they are constantly iterating. The culture at a Musk company is that if you don't keep pushing updates you are in trouble, so do we really want to trust that each of his numerous updates is truly tested? This guy is a pathological liar, after all. How many lawsuits are they dealing with now?
Super Cruise only runs on pre-mapped routes. If my life is on the line, I'd rather take the pre-mapped routes, and Super Cruise's design is better at preventing people from playing games to defeat the system (e.g. shoving an orange into the steering wheel), so I know that others using the system on the road are following its guidelines.
Super Cruise may not do everything FSD does, but it cuts out a large portion of the fatiguing part of driving and as a result can be a highly trusted value-add.
They rolled out fully driverless service in Austin in November 2025, and there's a website that reverse-engineered the mobile app's API to track the active cars. It found 90 active in Austin, with a larger total declared by Tesla, and 150 active in SF (the SF ones have a safety driver for now). Likewise, it found around 300 active in SF for Waymo, with around 1,000 cars declared in total by Waymo itself.
While this seems to detect posture fairly well, the screen blurring doesn't work for me despite allowing what appear to be the relevant permissions. (macOS 15.1)
This seems to be the classic discussion over what counts as reliable. Humans aren't particularly reliable, and as any hardware engineer knows, even if you have provably correct algorithms, your software system can never be 100% reliable, because cosmic rays and spilled coffee. You can get close via herculean efforts in software and hardware co-design, but never all the way. To try to pierce the hype of AI agents without allowing for the surprisingly low bar set by humans across a large array of tasks is to miss the forest for the trees.
This is a crime. It is an unlawful act of aggression which may de facto trigger an international armed conflict. There will be paper-thin justifications, of course, but those are merely to give loyalists talking points and a straw to grasp at in their mental gymnastics.
Pages like this are why I love Firefox reader mode. It doesn't matter what font crimes the author commits, with a single click it becomes legible again! Good content should never be missed because of an author trying to stab you in the eyeballs.
Honestly, I don't even try to read pages that have some super-narrow 400px layout any more. Time was, I would screw around editing the CSS with dev tools, but I just don't have the patience these days. It's a lot of work to alter the CSS, and it's unpleasant as heck to read something that narrow (anything less than 1000px is awful to read, and I prefer 1200px), so I just move on.
Pro tip for web designers out there: if someone wants a narrow layout they can always make their browser window smaller, but if you force it to be narrow that screws over the users who find that unpleasant. A wider layout can thus work for both types of reader, while a narrow layout only works for one.
Not only vitamin C but also fruits containing oxalic acid, if I read that right. But I'm far more interested in when such contrast agents are warranted, because I'm not aware of that contrast agent being used that much for MRI in Europe.
For your anecdata I'm in Sweden and definitely had a contrast agent (presumably gadolinium based) for a recent MRI of my gallbladder/pancreas/liver area.
> previous research has shown that even in those with no symptoms, gadolinium particles have been found in the kidney and the brain and can be detected in the blood and urine years after exposure
So do you stop eating high-oxalate and high-vitamin-C foods for a year after the MRI? Are there foods or drinks that help flush gadolinium?
I have one planned soon. Of course the prescribing doctor didn't mention any of this, but I guess the research is still too fresh. Thanks for raising awareness.
It depends on the indication for the scan. Some indications do not require contrast, others MUST have contrast in order to have any value. If you refuse contrast without understanding the reason, you may be simply wasting your time and money.