I think if people knew how accessible it is to run local LLMs on their devices, they'd consider buying devices with more memory that can run better models. In the long run, local LLMs are game changers.
And my wife, wonderful as always, helped critique the writing! My RadioMenu class's comments (in the "See More: Inline menu example" expando-section) were far worse before she helped.
Cool stuff. The readme is pretty lengthy, so it was a little hard to identify what core problem this tool is aiming to solve and how it tackles it differently from existing solutions.
Funnily, AI already knows what stereotypical AI sounds like, so when I tell Claude to write a README but "make it not sound like AI, no buzzwords, to the point, no repetition, but also don't overdo it, keep it natural," it does a very decent job.

That actually drastically improves any kind of writing by AI, even if just for my own consumption.
I'm not saying it is or isn't written by an LLM, but Yegge writes a lot, and usually well. It seems unlikely he'd outsource the front page to AI, even if he's a regular user of AI for coding and code docs.
Most of the money to be made is by licensing software to organisations that can't afford the risk of pirating (practically anything bigger than SMBs: enterprises, governments, armies, etc.). The moat of having everyone used to your platform is worth a lot more. So they just regulate enough that it won't seem like they don't give a shit at all.