It seems like the argument is roughly: we used to use a CMS because we had comms & marketing people who don't know git. But we plan to replace them all with ChatGPT or Claude, which does. So now we don't need a CMS.
(I didn't click through to the original post because it seems like another boring "will AI replace humans?" debate, but that's the sense I got from the repeated mention of "agents".)
Cursor replaced their CMS because Cursor is a 50-person team shipping content to one website. Cursor also has a "Designers are Developers" culture, so their entire team is well versed in git.
This setup is minimal and works for them for the moment, but the author argues (reasonably well, IMO) that this won't scale once they have dedicated marketing and comms teams.
It's not at all about Cursor using the chance to replace a department with AI; the department doesn't exist in their case.
> Lee's argument for moving to code is that agents can work with code.
So do you think this is a misrepresentation of Lee's argument? Again, I couldn't be bothered to read the original, so I'm relying on this interpretation of the original.
There's no sense in answering your questions when you actively refuse to read the article. You're more likely to misunderstand the arguments given your apparent bias toward AI-motivated downsizing, which I must reiterate is not covered in the article at all.
Alright, you badgered me into reading the original, and the linked post does not misinterpret it.
> Previously, we could @cursor and ask it to modify the code and content, but now we introduced a new CMS abstraction in between. Everything became a bit more clunky. We went back to clicking through UI menus versus asking agents to do things for us.
> With AI and coding agents, the cost of an abstraction has never been higher. I asked them: do we really need a CMS? Will people care if they have to use a chatbot to modify content versus a GUI?
> For many teams, the cost of the CMS abstraction is worth it. They need to have a portal where writers or marketers can log in, click a few buttons, and change the content.
> More importantly, the migration has already been worth it. The first day after, I merged a fix to the website from a cloud agent on my phone.
> The cost of abstractions with AI is very high.
The whole argument is about how it's easier to use agents to modify the website without a CMS in the way.
This is an AI company saying "if you buy our product you don't need a CMS" and a CMS company saying "nuh-uh, you still need a CMS".
The most interesting thing here is that the CMS company feels the need to respond to the AI company's argument publicly.
> This is an AI company saying "if you buy our product you don't need a CMS"
No, it isn't. The AI company was explicit about their use case not being a general one:
> "For many teams, the cost of the CMS abstraction is worth it. They need to have a portal where writers or marketers can log in, click a few buttons, and change the content. It’s been like this since the dawn of time (WordPress)."
> Alright you badgered me into reading the original
It's not "badgering" you to point out that your comments are pointless if they're just going to speculate about something you haven't read. But if you feel "badgered", you could just not comment next time, that way no-one will "badger" you.
I don't think that's the argument. The argument is that comms and marketing people don't know git, but now that they can use AI, they'll be able to work with tools they couldn't use before.
Basically, if they can ask for a change, preview it, request follow-ups if it's not what they wanted, and validate it when it's good, then they don't need a GUI.
And the reason is that those products are (rightly) regulated. Would there be beer marketed to kids if it were legal? Would it be fine if it were the parents' sole responsibility to ensure their kids weren't drinking beer, including at school, at friends' homes where the parents may have different rules, etc., absent a general social consensus that kids shouldn't have beer?
This is anecdotal evidence for the emerging consensus that social media is bad for you and especially for kids. There's a legitimate question of whether the people pushing these products know this and don't care, or actively suppress evidence.
Tobacco companies famously did this and it caused a lot of harm. It's about that more than just a chance for a cheap shot "hypocrisy" accusation.
I think social media has clear positive and negative aspects. That makes it closer to food than cigarettes in my mind.
We can all immediately conjure up images where food or social media has brought something positive into our lives.
News.yc is something I visit almost every day and it has added value to my life, including introducing me to a few people I’ve met in real life and to interesting tech.
Equally, we can all pretty readily conjure up images where excess food or social media has harmed people.
Indeed, it's still not exactly clear what the right place of social media in society is. Perhaps we could even get rid of some of its pernicious aspects without throwing the baby out with the bathwater.
Even food is not unregulated! And not because too much food is bad for you, but because bad food can harm you.
A disanalogy with food is that there are natural limits to how much food you can/want to eat at one time. Another is that food is necessary for life. Neither is true of social media.
You're absolutely right! Tell me more about how ironic it is that the post about having a unique voice is written in one-sentence-paragraph LinkedIn clickbait style.
The idea that there is some significant, load-bearing distinction in meaning between "ethical" and "moral" is something I've encountered a few times in my life.
In every case it has struck me as similar to, say, "split infinitives are ungrammatical": some people who pride themselves on being pedants like to drop it into any conversation where it might be relevant, believing it to be both important and true, when it is in fact neither.
I was hoping to point more towards "don't suppress a viewpoint, rather discuss it" and less toward semantics. I guess I should have made that clearer in my above comment.
Do you go through the haystack yourself first, find the needle, and then use that to validate your hypothesis that the AI is good at accomplishing that task (because it usually finds the same needle)? If not, how do you know they're good at the task?
My own experience using LLMs is that we frequently disagree about which points are crucial and which can be omitted from a summary.
It depends on how much time I have, and how important the task is. I've been surprised and I've been disappointed.
One particular time I was wrestling with a CI/CD issue. I could not for the life of me figure it out. The logs were cryptic and there were a lot of them. In desperation I pasted the 10 or so pages of raw logs into ChatGPT and asked it to see if it could spot the problem. It gave me three potential things to look at, and the first one was it.
By directing my attention it saved me a lot of time.
At the same time, I've seen it fail. I recently pasted about 10 meetings worth of conversation notes and asked it to summarize what one person said. It came back with garbage, mixed a bunch of things up, and in general did not come up with anything useful.
In some middle-of-the-road cases, what you said mirrors my experience: we disagree about what is notable and what is not. Still, this is a net positive. I take the stuff it gives me, discard the things I disagree with, and at least I have a partial summary. I generally check everything it spits out against the original and ask it to cite the original sources, so I don't end up with hallucinated facts. It takes less time than writing up a summary myself, and it's the kind of work I find more enjoyable than typing summaries.
Still, the hit-to-miss ratio is good enough and the time savings on the hits are impressive, so I continue to use it in various situations where I need a summary or I need it to direct my attention to something.
I really don't see how it can save you time if you have to summarize the same source for yourself every time in order to learn whether the AI did a good job in this particular case.
There is a difference between creating a summary and double-checking whether a summary is correct. It's kind of like the difference between writing code and doing a code review. It does save me time, but that might have to do with how I think and what I enjoy doing. Everyone is different.
Funny to me how many of the replies to this comment are assuming you mean all these "punk" startups will be possible because of AI, when your actual comment says nothing of the sort.
Not to mention that there's already a small market for software products that work just like existing products that were once good, only without all the AI getting in your way at every turn. You're just not going to be making huge enterprise sales with such a product (in 2025).
Unfortunately, "X is just a tool and is super useful when used properly, all things are both bad in excess and good in moderation, what you gonna do?" is exactly the type of conclusion that a chat bot is likely to reach. Doesn't really say anything, appears to express sophistication and wisdom by being more "nuanced" than an actual position, demands nothing of your audience, not likely to get downvotes on social media, etc.