It or its variants probably contain PFAS, which likely makes it hazardous to spray. Also, I suspect that breathing its ambient vapor while spraying it is bad for the body and brain.
Canola oil works in practice for basic tasks, but requires routine reapplication.
WD-40 classic does not contain PFAS. Which is not to say you should breathe it in.
> Canola oil works super well in practice without any of these risks.
I cannot advise strongly enough against using canola oil for most lubrication purposes. It's biodegradable and will break down (good for some applications), but for the most part, oil breaking down is a bad thing if you want to keep something well maintained. It will gum up over time, start reacting chemically with dust or other chemicals, and potentially even cause damage, especially if you lubricate to prevent rust.
Also, in the context of breaking loose bolts, oil alone doesn't have any capacity to break up or penetrate rust.
I have used it on doors for years with zero trouble. Granted, I have to reapply every four months. It is infinitely safer than the toxic brew that is WD40.
Do not use canola oil for most lubrication tasks. You should almost always be using lithium grease.
Spray on white lithium grease works for most "architectural" or furniture uses (ex: door hinges, gas springs on chairs, garage door rails and chain, etc).
For anything constantly moving (ex: gearboxes or bearings) you want a more viscous lithium grease (ex: red n tacky or lucas xtra/green).
But in pretty much every situation (on land) you want to be using a form of lithium grease if you want to actually keep the interface lubricated.
I think it would be worth investigating cyanobacteria toxins in the water over there, as they can cause similar symptoms. The next thing to check would be local seafood. I feel like glyphosate is a red herring here. Heavy metals could come from frequently eating local fish/shellfish.
Some cell and animal studies show that there is a slight possible effect. It hasn't been shown in humans, and even in extrapolation from animals, the protective benefit does not seem particularly significant.
Let's spend years plugging holes in V8, splitting browser components into separate processes, and improving sandboxing, and then just plug an LLM with debugging enabled into Chrome. Great idea. Last time we had such a great idea it was lead in gasoline.
It's clear the endgame is to cook AI into Chrome itself. Get ready for some big antitrust lawsuit that settles in 20 years when Gemini is bundled too conveniently and all the other players complain.
This made me want to laugh so hard. I think this idea came from the same place as beta testing “Full Autopilot” with human guinea pigs. Great minds…
Jokes aside, the Anthropic CEO commands a tad more respect from me, for taking a more principled approach and sticking to it (at least better than their biggest rival). Also for inventing the code-agent-in-the-terminal category.
All things considered Anthropic seems like they’re doing most things the right way, and seemed to be focused on professional use more than OpenAI and Grok, and Opus 4.5 is really an incredibly good model.
Yes, they know how to use their safety research as marketing, and yes, they got a big DoD contract, but I don’t think that fundamentally conflicts with their core mission.
And honestly, some of their research they publish is genuinely interesting.
Dario is definitely more grounded than Sam. I thought Anthropic would get crowded out between Google and the Chinese labs, but they might be able to carve out a decent niche as the business-focused AI for people who are paranoid about China.
They didn't really invent terminal agents, though; Aider was the pioneer there. They just made it more autonomous (Aider could do multiple turns with some config, but it was designed to have a short leash since models weren't so capable when it was released).
I acknowledged the point about Aider being the first terminal agent in a different comment. I am equally surprised at how well Anthropic has done compared to the rest of the pack (Mistral comes to mind: had a head start but seems to have lost its way).
They definitely have found a good product-market fit with white-collar working professionals. Opus 4.5 strikes the best balance between smarts and speed.
Aider was designed to do single turns because LLMs were way worse when it was created. That being said, Aider could do multiple turns of tool calling if command confirmation was turned off, and it was trivial to configure Aider to do multiple turns of code generation by having a test suite that runs automatically on changes and telling Aider to implement functionality to get the tests to pass. It's hard-coded to only do 3 autonomous turns by default, but you can edit that.
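For reference, that test-feedback loop is roughly this kind of config (a sketch from memory of Aider's options; check `aider --help` for the exact names, and the pytest command is just an example test suite):

```yaml
# .aider.conf.yml — run the test command after every edit and feed
# failures back to the LLM so it keeps iterating until tests pass.
# Option names assumed from Aider's docs; verify against your version.
auto-test: true
test-cmd: pytest -q
```

The same flags can be passed on the command line (`aider --auto-test --test-cmd "pytest -q"`) if you don't want a config file.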
Yes but unfortunately it appears that Aider development has completely stopped. There had been an MCP support PR that was open for over half a year, many people validated it and worked on it but the project owner never responded.
It’s a bit of a shame, as there are plenty of people that would love to help maintain it.
I would love to know more. I used Aider with local models and it behaved like Cursor in agent mode. Unfortunately I don't remember exactly when (6+ months ago at least). What was your experience with it?
I was a heavy user, but stopped using it mid 2024. It was essentially providing codebase context and editing and writing code as you instructed - a decent step up from copy/paste to ChatGPT but not working in an agentic loop. There was logic to attempt code edits again if they failed to apply too.
Edit: I stand corrected though. Did a bit of research and aider is considered an agentic tool by late 2023 with auto lint/test steps that feedback to the LLM. My apologies.
Anthropic isn't any more moral or principled than the other labs, they just saw the writing on the wall that they can't win and instead decided to focus purely on coding and then selling their shortcomings as some kind of socially conscious effort.
It's a bit like the poorest billionaire flexing how environmentally aware they are because they don't have a 300ft yacht.
Maybe. They've certainly fooled me if that's the case. I took them at face value, and so far they haven't done anything out of character that would make me wary of them.
Their models are good. They did not use prompts for training from day one (Google is the worst offender here amongst the three). Have been shockingly effective with “Claude Skills”. Contributed MCP to the world and encouraged its adoption. Now did the same for skills, turning it into a standard.
They are happy to be just the tool that helps people get the job done.
The thing I miss about the internet from the late 2000s and early 2010s was having so much useful data available, searchable, and scrapable. Even things like "which of my friends are currently living in New York?" are impossible to find now.
I always assumed this was a once-in-history event. Did this cycle of data openness and closure happen before?
Do you mean you let Claude Code and other such tools act directly on your personal or corporate machine, under your own account? Not in an isolated VM or box?
Why not? The individual grunt knows it is more productive and the managers tolerate a non-zero amount of risk with incompetent or disgruntled workers anyways.
If you have clean access privileges then the productivity gain is worth the risk, a risk that we could argue is marginally higher or barely higher. If the workplace also provides the system then the efficiency in auditing operations makes up for any added risk.
You're being unfair to lead: it solved serious issues with engines back then and enabled their use in many useful ways, likely saving more people than it poisoned.
Innovation in the short term might trump longer term security concerns.
All of these have big warning labels like it's alpha software (ie, this isn't for your mom to use). The security model will come later... or maybe it will never be fully solved.
Right, I was hoping not to use them at a previous company that used AWS. One day we got hit by a DDoS (trying to get us to pay to stop it). Even with AWS WAF costing $0.60 per million requests, we ended up paying around $10k in WAF rules to block the attack. Yes, hundreds of thousands to millions of reqs/sec. Luckily the attacker had their entire botnet using an Accept-Language header from a specific (non-English) language, which made it an easy rule target. If it weren't for that, I'm not sure what we would have done. I would actually love to hear what others do; I want the answer to be more than "use Cloudflare", but it's the only option I've found since then.
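For anyone who hasn't written one, that kind of header-match block rule looks roughly like this in AWS WAFv2 rule JSON (a sketch; the rule name, metric name, and the `zz-ZZ` language tag are placeholders, not the actual attacker's value):

```json
{
  "Name": "block-botnet-by-accept-language",
  "Priority": 0,
  "Action": { "Block": {} },
  "Statement": {
    "ByteMatchStatement": {
      "FieldToMatch": { "SingleHeader": { "Name": "accept-language" } },
      "SearchString": "zz-ZZ",
      "PositionalConstraint": "CONTAINS",
      "TextTransformations": [ { "Priority": 0, "Type": "LOWERCASE" } ]
    }
  },
  "VisibilityConfig": {
    "SampledRequestsEnabled": true,
    "CloudWatchMetricsEnabled": true,
    "MetricName": "BlockBotnetByAcceptLanguage"
  }
}
```

The catch, as noted above, is that WAF still bills you per request evaluated, so matching the traffic cheaply doesn't make the bill go away.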
Funny, it seems like I only ever see their captchas on sites run by companies that don't care about customer experience or have naively set up their online tools. Most major websites don't use them (or have sane settings that don't trigger captchas), commercial or not. It's usually mid-tier companies whose pages I don't care enough to wait for, or monopolies that actively hate their customers (e.g. a Canadian telco), where they are most prominent.