>Anything that has a kernel-level anti-cheat (Valorant) will always be a resounding No.
Please stop repeating this long-outdated information. The two most widely used kernel anti-cheat providers, Easy Anti-Cheat and BattlEye, support Linux with a user-space component that needs to be enabled by the developer, and it has been enabled in many games.
Tools like BattlEye and EAC are not just one tool that gives a binary answer; they collect a huge range of heuristics about the device and how easy it is to interfere with its memory.
While they have been ported to Linux, an awful lot of those bits of telemetry simply don't give the desired answer, or even any answer at all, because that is very hard to do when there aren't proprietary drivers signed down to a hardware root of trust by a third party (and the average Linux user on HN certainly wouldn't want there to be!).
It's really not a matter of "enabled by the developer"; it's entirely dependent on what your threat model is.
> It's really not a matter of "enabled by the developer"; it's entirely dependent on what your threat model is.
None of this is relevant to the original point of "kernel anti-cheats don't work" when, yes, the two most widely used ACs do work on Linux despite being kernel level.
Do you have any plans for the target to be S3-compatible in addition to a POSIX file system? If I wanted to sync a YouTube channel to a Backblaze B2 bucket or MinIO, for example.
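(To illustrate what I mean by "S3 compatible": anything you can reach by pointing a standard S3 client at a custom endpoint. The endpoint URL, keys, and paths below are placeholders, not a suggestion of how your tool should implement it.)

    # Placeholder sketch: talking to Backblaze B2 / MinIO through the S3 API.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-west-004.backblazeb2.com",  # or your MinIO URL
        aws_access_key_id="KEY_ID_PLACEHOLDER",
        aws_secret_access_key="APP_KEY_PLACEHOLDER",
    )
    # Upload one downloaded video into the bucket.
    s3.upload_file("video.mp4", "my-bucket", "channel/video.mp4")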
I solve this by running Syncthing on all clients. Very rarely do I have a problem with conflicts. Only if I add a new password while my phone is offline and then make another edit on my computer would there be an issue. I think it has only happened once, and that was because I did it on purpose to see what would happen.
It turns out Syncthing creates a sync-conflict file, and then I tell KeePassXC to merge the two files and we are back to normal.
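If you want to script the cleanup, something like this should work (untested sketch; the paths are made up, and keepassxc-cli prompts for the database passwords interactively):

    # Sketch: merge Syncthing conflict copies back into the main KeePassXC database.
    import glob
    import subprocess

    DB = "/home/me/Sync/passwords.kdbx"

    # Syncthing names conflict copies like
    # passwords.sync-conflict-20240101-120000-ABCDEFG.kdbx
    for conflict in sorted(glob.glob("/home/me/Sync/passwords.sync-conflict-*.kdbx")):
        # "keepassxc-cli merge" merges the second database into the first.
        subprocess.run(["keepassxc-cli", "merge", DB, conflict], check=True)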
(Side note) And the folding phone will be an "Apple first".
I wonder if they will still have a stupid camera notch on the device. There is no point (to me) in having a thin phone if you still end up with a 5mm notch the size of your phone.
I used to joke about prompt engineering. But by jiminy it is a thing now. I swear sometimes I spend a good 10-20 minutes writing up a good prompt and initial plan just so that Claude Code can systematically implement something.
My usage is nearly the same as OP's: plan, plan, plan, save it as a file, then start a new context and let it rip.
That's the one thing I'd love: a good CLI (I'm currently using Charm and CC) that lets me have an implementation model, a plan model, and (possibly) a model per sub-agent, mainly so I can save money by using local models for implementation and online models for planning or generation, or even swapping back. Charm has been the closest I've used so far, letting me swap back and forth without losing context. But the parallel sub-agent feature is probably one of the best things about Claude Code.
(Yes, I'm aware of CCR, but I could never get it to use more than the default model, so :shrug:)
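The split I'm after looks roughly like this (hypothetical sketch; the endpoints and model names are assumptions, e.g. Ollama exposing an OpenAI-compatible API locally):

    # Sketch: plan with a hosted model, implement with a cheap local one.
    from openai import OpenAI

    plan_client = OpenAI()  # hosted; reads OPENAI_API_KEY from the environment
    impl_client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

    def ask(client, model, prompt):
        resp = client.chat.completions.create(
            model=model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

    plan = ask(plan_client, "gpt-4o", "Write a step-by-step plan for feature X.")
    code = ask(impl_client, "qwen2.5-coder", "Implement step 1 of this plan:\n" + plan)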
> I used to joke about prompt engineering. But by jiminy it is a thing now.
This is the downside of living in a world of tweets, hot takes, and content generated for the sake of views. Prompt engineering was always important, because garbage in, garbage out (GIGO) has always been a ground truth in any ML project.
This is also why I encourage all my colleagues and friends to try these tools out from time to time. New capabilities become apparent only when you try them. What didn't work six months ago has a very good chance of working today. But you need a "feel" for what works and what doesn't.
I also place much more value on examples, blogs, and gists that show a positive instead of a negative. Yes, the models can't count the r's in strawberry, but I don't need that! I don't care that they get simple arithmetic wrong. I need them to follow tasks, improve workflows, and help me.
Prompt engineering was always about getting the "google-fu" of 10-15 years ago rolling, and then keeping up with what's changed, what works and what doesn't.
Projects using AI are the best-documented and best-tested projects I've worked on.
They are well documented because the LLM needs context to be performant. And they are well tested because the cost of producing tests has dropped, since tests can be half generated, while the benefit of having tests has grown, since they are guard rails for the machine.
People constantly say code quality is going to plummet because of those tools, but I think the exact opposite is going to happen.
The difference today is that the docs are being read. In the before times, unless you were building a foundational library, docs would get updated when a presentation was needed to announce a great success, or maybe not even then. Nowadays, if you want coding agents to be efficient, doc quality is paramount.
IOW, there's very clear ROI on docs today; there wasn't earlier.
honestly "prompt engineering" is just the vessel for architecting the solution. its like saying "diagram construction" really took off as a skill. its architecting with a new medium