Ratchet is a good name/pattern. It is also grandfathering.
It's similar to how code coverage gates can work: overall coverage may start low, e.g. 40%, but you require 80% coverage on new lines, and over time the total creeps up.
I wonder if there has ever been a sneaky situation where someone wanted to use forbiddenFunction() really badly, so they removed a call elsewhere and tidied that up, just so they could start using it themselves.
Yep. Grandfathering, deprecation. It's a new implementation of the same concepts.
And ditto for test coverage quality gates. I've seen that pattern used to get a frontend codebase from 5% coverage to >80%. It was just a cycle of Refactor -> Raise minimum coverage requirement -> Refactor again -> Ratchet again, with the coverage gate used to stop new work from bringing down the average.
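For concreteness, here's roughly what that gate can look like with Jest; the thresholds are placeholder numbers you'd bump after each refactor pass, and this assumes a Jest version recent enough to accept a TypeScript config:

```ts
// jest.config.ts — a minimal sketch of a coverage ratchet; the numbers are placeholders.
import type { Config } from "jest";

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 80,    // raise this after each refactor pass; never lower it
      branches: 70, // the build fails if coverage drops below these floors
    },
  },
};

export default config;
```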
One would hope code reviews could pick up these deceptions, but then again they would spot the use of forbidden functions too, albeit much later in the dev cycle than is optimal. Breaking the build early, before the change is even committed to source control, is a solid idea. It's no different from applying PMD, CPD, Checkstyle, eslint, yamllint, or other linters, but with a custom rule. I really want to use this pattern; there's semi-deprecated code in so many codebases.
For more control, and to close that loophole, you could allow annotations/comments like `/* ignore this line */` in the code, the same way eslint does. Or have a config that lists the allowed number of uses per file, instead of one count per project. There are always refinements, but for many projects the simplicity of one counter is more than enough, unless you have devious developers.
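A per-file ratchet is only a few lines of script. Here's a minimal sketch; the file names, the baseline format, and forbiddenFunction itself are all hypothetical:

```ts
// ratchet.ts — sketch of a per-file ratchet; fails the build if any file
// exceeds its grandfathered budget of forbiddenFunction() calls.
import { readFileSync } from "fs";
import { execSync } from "child_process";

const FORBIDDEN = /\bforbiddenFunction\s*\(/g;

// Checked-in baseline: maximum allowed uses per file (the grandfathered counts).
const baseline: Record<string, number> = JSON.parse(
  readFileSync("ratchet-baseline.json", "utf8"),
);

const files = execSync("git ls-files '*.ts'", { encoding: "utf8" })
  .split("\n")
  .filter(Boolean);

let failed = false;
for (const file of files) {
  const count = (readFileSync(file, "utf8").match(FORBIDDEN) ?? []).length;
  const allowed = baseline[file] ?? 0; // new files get zero budget
  if (count > allowed) {
    console.error(`${file}: ${count} uses of forbiddenFunction, ${allowed} allowed`);
    failed = true;
  }
}
process.exit(failed ? 1 : 0);
```

The baseline file is the grandfathering: existing offenders keep their current budget, new files get zero, and whenever a count shrinks you ratchet the baseline down to match.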
If you have eslint you might as well just write custom rules and get actual syntax-aware linting, rather than relying on more brittle regex rules. Claude et al. are very good at getting a lint rule started, and with a bit of setup you can make testing lint rules easy. We have a zillion custom rules at Notion; most are pretty targeted "forbid deprecated method X besides circumstance Y" kind of things.
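For anyone who hasn't written one, the skeleton of such a rule is short. This is a sketch only, with forbiddenFunction standing in for whatever deprecated method you're fencing off:

```ts
// forbid-deprecated-method.ts — a minimal sketch of a custom ESLint rule.
import type { Rule } from "eslint";

const rule: Rule.RuleModule = {
  meta: {
    type: "problem",
    messages: {
      forbidden: "forbiddenFunction() is deprecated; use the replacement instead.",
    },
  },
  create(context) {
    return {
      // Fires on every call expression whose callee is the bare identifier.
      CallExpression(node) {
        if (node.callee.type === "Identifier" && node.callee.name === "forbiddenFunction") {
          context.report({ node, messageId: "forbidden" });
        }
      },
    };
  },
};

export default rule;
```

Because it matches AST nodes rather than text, splitting a call across lines or renaming an unrelated variable can't fool it the way a regex can.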
We use pgmq with the pgmq-go client; it has clients in many different languages, and it's amazing.
The queues persist on disk, and visualizations of them can easily be built with Grafana or plain SQL queries.
The fact that the queues live in the same database as all the other data is also a huge benefit, if the 5-15ms time penalty is not an issue.
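Since pgmq is exposed as SQL functions, driving it from any language with a Postgres client is straightforward. A minimal sketch in TypeScript with the `pg` package (connection string and queue name are placeholders):

```ts
// pgmq-sketch.ts — enqueue, read, and delete a message via pgmq's SQL API.
import { Client } from "pg";

async function main() {
  const client = new Client({ connectionString: "postgres://localhost/mydb" });
  await client.connect();

  // Create the queue and enqueue a JSON message.
  await client.query("SELECT pgmq.create($1)", ["tasks"]);
  await client.query("SELECT pgmq.send($1, $2)", ["tasks", { kind: "email", to: "a@b.c" }]);

  // Read one message with a 30-second visibility timeout, then delete it.
  const { rows } = await client.query(
    "SELECT msg_id, message FROM pgmq.read($1, 30, 1)",
    ["tasks"],
  );
  if (rows.length > 0) {
    console.log("got message:", rows[0].message);
    await client.query("SELECT pgmq.delete($1, $2::bigint)", ["tasks", rows[0].msg_id]);
  }

  await client.end();
}

main().catch(console.error);
```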
At least until people figure out, in a couple of years, that the "Postgres for everything" fad was just as bad an idea as "MongoDB for everything" and "put Redis into everything".
It's not "Postgres for everything", it's "Postgres by default". Nobody is saying you should replace your billion-message-per-second Kafka cluster (or whatever) with Postgres, but plenty of people are saying "don't start with a Kafka cluster when you have two messages a day", which is a much better idea than "MongoDB by default".
I'm vibe coding now, after work. I am able to much more quickly explore the landscape of a problem, getting into and out of dead ends in minutes instead of wasting an evening. At some point I need to go in and fix things, but the benefit of the tool is there. It is like an electric screwdriver vs. a normal one. Sometimes the normal one can do things the electric can't, but hell, if you get an IKEA delivery you want the electric one.
0. Claude, have a look at frontend project A and backend project B.
1. create a skeleton clone of frontend A, named frontend B, which is meant to be the frontend for backend project B, including the OAuth configuration
2. create the kubernetes yaml and deployment.sh, it should be available under b.mydomain.com for frontend B and run it, make sure the deployment worked by checking the page on b.mydomain.com
3. in frontend B, implement the UI for controller B1 from backend B, create the necessary routing to this component and add a link to it to the main menu, there should be a page /b1 that lists the entries, /b1/xxx to display details, /b1/xxx/edit to edit an entry and /b1/new to create one
4. in frontend B, implement the UI for controller B2 from backend B, create the necessary routing to this component and add a link to it to the main menu, etc.
etc.
All of this is done in 10 minutes. Yeah I could do all of this myself, but it would take longer.
Did you need it, though? Most projects I see being done by people with Claude Code are just their personal projects, which they wouldn't have wasted their time on in the past, but now they get pulled into the terminal thinking it's only gonna take 20 mins and end up burning hundreds of subscription dollars on it. If there is no other maintainer and the project is all yours, I don't see any harm in doing it.
I don't NEED this, but turning 2-3 hours of fiddling with DTOs, Kubernetes YAML, Dockerfiles, deployment scripts and other busywork into half an hour does "save you the whole evening" (which is what our current discussion is about!).
With adults usually having no more than 2-3 hours of free time per work day, this allows you to productively program in your free time without fully burning out.
Also, my company pays for claude and does not give a shit what I do with it.
He glosses over the zoom out, which I think was pretty impressive for a computer of that time. There is a lot of redrawing/recalculating going on there. It would be impressive even on an '80s microcomputer.
No, rendering to a vector display (hardware whose primitive operations are points and lines) is almost free for the kind of drawings he was rendering. Zoom is just one linear transformation on each point in the model, no different from panning the view.
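To make that concrete, here's a toy sketch (TypeScript, names made up): zooming costs the same one transform per point as panning before the endpoints are handed to the display.

```ts
// zoom-sketch.ts — why zooming a vector display is cheap: one transform per point.
type Point = { x: number; y: number };

// Map a model-space point to screen space for a given zoom factor and pan offset.
function toScreen(p: Point, zoom: number, pan: Point): Point {
  return { x: p.x * zoom + pan.x, y: p.y * zoom + pan.y };
}

// A square in model space; the display hardware only ever sees line endpoints.
const square: Point[] = [
  { x: 0, y: 0 }, { x: 1, y: 0 }, { x: 1, y: 1 }, { x: 0, y: 1 },
];

// Zooming in 2x is the same per-point cost as panning: one transform each.
console.log(square.map((p) => toScreen(p, 2, { x: 100, y: 100 })));
```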