Football is unique in that the way it’s presented makes it almost impossible to understand what’s going on. There are a million rules, which even die-hard fans don’t fully understand. And the broadcast doesn’t even attempt to explain, or even show, the offensive and defensive formations and plays being chosen.
It feels like what we’re shown on tv is a very narrow slice of what’s going on. We see the ball moving down the field but have no idea what the coach or quarterback is doing. Somehow it’s still an incredible watch though.
What would the average software engineer pay for an AI coding subscription, compared to not having one at all? Running a survey on that question would give some interesting results.
I may be a bit of an anomaly since I don't really do personal projects outside of work, but if I'm spending my own money, then $0. If the company is buying it for me, whatever they're willing to pay, but past a couple hundred a month I'd rather they just pay me more instead, or hire extra people.
I would pay at least $300/month just for hobby projects. The tools are absolutely amazing at the things I am worst at: getting a good overview of a new field/library/docs, writing boilerplate and first working examples, dealing with dependencies and configurations, etc. I would pay that even if they never improve and never help write any actual business logic or algorithms.
Simple queries like: "Find a good compression library that meets the following requirements: ..." and then "write a working example that takes this data, compresses it, and writes it to an output buffer" are worth the multiple hours I would otherwise need to spend on them.
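For a sense of what that second query buys, here is a minimal sketch of the kind of answer I'd expect back, assuming the library it settled on was Python's built-in zlib (the chunk size and buffer layout are just illustrative choices, not anything from the original query):

```python
import zlib

def compress_to_buffer(data: bytes, out: bytearray, level: int = 6) -> int:
    """Compress `data`, append the result to `out`, and return the bytes written."""
    compressor = zlib.compressobj(level)
    written = 0
    # Feed the input in chunks so large payloads aren't compressed in one shot.
    for i in range(0, len(data), 64 * 1024):
        chunk = compressor.compress(data[i:i + 64 * 1024])
        out.extend(chunk)
        written += len(chunk)
    tail = compressor.flush()
    out.extend(tail)
    return written + len(tail)

if __name__ == "__main__":
    buf = bytearray()
    n = compress_to_buffer(b"example payload " * 1000, buf)
    print(f"wrote {n} compressed bytes")
```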
If I wanted to ship commercial software again I would pay much more.
I pay $20 for ChatGPT, I ask it to criticize my code and ideas. Sometimes it's useful, sometimes it says bullshit.
For a few months I used Gemini Pro. There was a period when it was better than OpenAI's models, but they did something and now it's worse (even though it answers faster), so I cancelled my Google One subscription.
I tried Claude Code over a few weekends. It can definitely do tiny projects quickly, but I work in an industry where I need to understand every line of code and basically own my projects, so it's not useful for me at all. Also, doing anything remotely complex involves so many twists that I find the net benefit negative. And one of the normal side effects of doing something yourself is learning; here I feel like my skills devolve.
I also occasionally use Cerebras for quick queries, it's ultrafast.
I also do a lot of ML so use Vast.ai, Simplepod, Runpod and others - sometimes I rent GPUs for a weekend, sometimes for a couple of months, I'm very happy with the results.
If Kubernetes didn't in any way reduce labor, then the 95% of large corporations that adopted it must all be idiots? I find that kinda hard to believe. It seems more likely that Kubernetes has been adopted alongside increased scale, such that sysadmin jobs have just moved up to new levels of complexity.
It seems like in the early 2000s every tiny company needed a sysadmin to manage the physical hardware, manage the DB, and maintain custom deployment scripts. That particular job is just gone now.
Kubernetes enabled capabilities small companies couldn't have dreamed of before.
I can implement zero-downtime upgrades easily with Kubernetes. No more late-day upgrades and late-night debug sessions because something went wrong; I can commit at any time of the day and be sure the upgrade will work (a sketch of what this looks like in practice follows below).
My infrastructure is self-healing. No more crashed app servers.
Some engineering tasks are standardized and outsourced to the hosting provider by using managed services. I don't need to manage operating system updates or some component updates (including Kubernetes itself).
My infrastructure can be easily scaled horizontally. Both up and down.
I can commit changes to git to apply them or I can easily revert them. I know the whole history perfectly well.
Before, I would have needed to reinvent half of Kubernetes to enable all of that. I guess big companies just did that; I never had the resources. So my deployments were not good: they didn't scale, they crashed, they required frequent manual intervention, and downtime was frequent. Kubernetes and other modern approaches let small companies enjoy things they couldn't do before, at the expense of a slightly steeper devops learning curve.
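To make the zero-downtime point above concrete, here is a minimal sketch using the official `kubernetes` Python client to roll a Deployment to a new image and let the built-in rolling update keep old pods serving until the new ones pass their readiness checks. The deployment name, namespace, container name, and image tag are made-up placeholders; in a GitOps setup the same change would come from a commit rather than a script.

```python
from kubernetes import client, config

def rolling_upgrade(deployment: str, namespace: str, container: str, image: str) -> None:
    """Patch the Deployment's image; Kubernetes then replaces pods gradually."""
    config.load_kube_config()  # or config.load_incluster_config() when running in-cluster
    apps = client.AppsV1Api()
    patch = {
        "spec": {
            "template": {
                "spec": {
                    "containers": [{"name": container, "image": image}]
                }
            }
        }
    }
    apps.patch_namespaced_deployment(name=deployment, namespace=namespace, body=patch)

if __name__ == "__main__":
    # Hypothetical names and image tag, purely for illustration.
    rolling_upgrade("web", "production", "web", "registry.example.com/web:v2")
```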
You’re absolutely right that sysadmin jobs moved up to new levels of complexity rather than disappeared. That’s exactly my point.
Kubernetes didn’t democratise operations, it created a new tier of specialists. But what I find interesting is that a lot of that adoption wasn’t driven by necessity. Studies show 60% of hiring managers admit technology trends influence their job postings, whilst 82% of developers believe using trending tech makes them more attractive to employers. This creates a vicious cycle: companies adopt Kubernetes partly because they’re afraid they won’t be able to hire without it, developers learn Kubernetes to stay employable, which reinforces the hiring pressure.
I’ve watched small companies with a few hundred users spin up full K8s clusters when they could run on a handful of VMs. Not because they needed the scale, but because “serious startups use Kubernetes.” Then they spend six months debugging networking instead of shipping features. The abstraction didn’t eliminate expertise, it forced them to learn both Kubernetes and the underlying systems when things inevitably break.
The early 2000s sysadmin managing physical hardware is gone. They’ve been replaced by SREs who need to understand networking, storage, scheduling, plus the Kubernetes control plane, YAML semantics, and operator patterns. We didn’t reduce the expertise required, we added layers on top of it. Which is fine for companies operating at genuine scale, but most of that 95% aren’t Netflix.
All this is driven by numbers. The bigger you are, the more money they give you to burn. No one is really working on solving problems; it's 99% managing complexity driven by shifting goalposts. No one wants to really build to solve a problem, it's a giant financial circle jerk, everybody wants to sell and rinse and repeat, the line must go up. No one says stop, because at 400 mph hitting the brakes will get you killed.
People really look through rose-colored glasses when they talk about the late '90s, the early 2000s, or whenever their "back then" is, and how everything was simpler.
Everything was for sure simpler, but the requirements and expectations were also much, much lower. Tech and complexity moved forward, with the goalposts moving forward too.
Just one example on reliability: I remember popular websites with many thousands, if not millions, of users would put up an "under maintenance" page whenever a major upgrade came through, and sometimes close up shop for hours. If said maintenance went bad, come back tomorrow, because they weren't coming up.
Proper HA, backups, and monitoring were luxuries for many, and the kind of self-healing, dynamically autoscaled, "cattle not pets" infrastructure that Kubernetes now trivializes was sci-fi for most. Today people consider all of this, and a lot more, table stakes.
It's easy to shit on cloud and kubernetes and yearn for the simpler Linux-on-a-box days, yet unless expectations somehow revert back 20-30 years, that isn't coming back.
> Everything was for sure simpler, but also the requirements and expectations were much, much lower.
This. In the early 2000s, almost every day after school (3 PM ET), Facebook.com was basically unusable. The request would either hang for minutes before responding at 1/10th of the broadband speed of the time, or it would just time out. And that was completely normal. Also...
- MySpace literally let you inject HTML, CSS, and (unofficially) JavaScript into your profile's freeform text fields
- Between 8-11 PM ("prime time" TV) you could pretty much expect to get randomly disconnected when using dial-up Internet. And then you'd need to repeat the arduous sign-in dance, waiting for that signature screech that tells you you're connected.
- Every day after school the Internet was basically unusable from any school computer. I remember just trying to hit Google using a computer in the library turning into a 2-5 minute ordeal.
But also, and perhaps most importantly, let's not forget: MySpace had personality. Was it tacky? Yes. Was it safe? Well, I don't think a modern web browser would even attempt to render it. But you can't replace the anticipation of clicking on someone's profile and not knowing whether you'll be immediately deafened by loud, blaring background music with no visible way to stop it.
I worked at an ISP in 1999 and between 8-11 PM we would simply disconnect the longest connected user once the phone banks were full. Obviously we oversubscribed.
I wonder how universal these stages are. All I can say is that when I worked at a 15-person company, it was extremely clear to me that we needed more structure than "everyone reports to the CEO". We struggled to prioritize between different projects, milestones weren't clearly defined or owned, at times there would be long debates on product direction without a clear decision-maker, etc., etc.
Not to say the article is all wrong. I think its advice to consider elevating a few engineers into informal tech leads is a great answer. We went with the path of hiring one dedicated "manager" of all engineers, and that worked pretty well too.
It varies from team to team and founder to founder. I've seen early-stage startups where most ICs were able to self-manage, but others where some form of structure was needed. At the stage you mentioned, it's natural for founders to end up hiring an Engineering Lead.
> consider elevating a few engineers into informal tech leads
It is potentially risky - I've seen plenty of talented engineers flounder because they were thrust into an ill-suited management role too soon - but I think if someone is motivated and eased into the role, they tend to be superior to an outside hire.
I wonder what motivates Apple to release features like RDMA, which are purely useful for server clusters, while ignoring basic QoL stuff like remote management or rack-mount hardware. It’s difficult to see it as a cohesive strategy.
Makes one wonder what Apple uses for their own servers. I guess maybe they have some internal M-series server product they just haven’t bothered to release to the public, and features like this are downstream of that?
> I guess maybe they have some internal M-series server product they just haven’t bothered to release to the public, and features like this are downstream of that?
Or do they have some real server-grade product coming down the line, and are releasing this ahead of it so that 3rd party software supports it on launch day?
I worked on some of the internal server hardware. Yes, they do have their own boards. Apple used to be all-in on Linux, but the newer chips are far and away more power-efficient, and power is one of the biggest costs (if not the biggest) of outfitting a datacenter, at least over time.
These machines are very much internal - you can cram a lot of M-series (to use the public nomenclature) chips onto a rack-sized PCB. I was never under the impression they were destined for anything other than Apple datacenters though...
As I mentioned above, it seems to me there are a couple of features that appeared on the customer-facing designs that were inspired by what the datacenter people wanted on their own boards.
Apple's OS builds are a lot more flexible than most people give them credit for. That's why essentially the same OS scales from a watch to a Mac Pro. You can mix and match the ingredients of the OS for a given device pretty much at will, as long as the dependencies are satisfied. And since you own the OS, dependencies are often configurable.
That they sell to the public? No way. They’ve clearly given up on server stuff and it makes sense for them.
That they use INTERNALLY for their servers? I could certainly see this being useful for that.
Mostly I think this is just to get money from the AI boom. They already had TB5, it’s not like this was costing them additional hardware. Just some time that probably paid off on their internal model training anyway.
And if the rumors are right -- that hardware SVP John Ternus is next in line for CEO -- I could see a world where the company doubles down on their specialized hardware vs. services.
They’ve done a dip-in-a-toe thing many times, then gave up.
If I was in charge of a business, and I’m an Apple fan, I wouldn’t touch them. I’d have no faith they’re in it for the long term. I think that would be a common view.
The Mac Studio, in some ways, is in a class of its own for LLM inference. I think this is Apple leaning into that. They didn't add RDMA for general server clustering usefulness. They added it so you can put 4 Studios together in an LLM inferencing cluster exactly as demonstrated in the article.
I honestly forgot they still made the Mac Pro. Amazing that they have these ready to ship on their website. But at a 50% premium over similar but faster Mac Studio models, what is the point? You can't usefully put GPUs in them as far as I know. You'd need a specific PCIe use case for it to make sense.
The M2 Ultra has 32 off-chip PCIe lanes, 8 of which are dedicated to the SSDs. That leaves only 24 lanes for the 7 slots - 8 times fewer than you'd get from an EPYC, which is the kind of thing a normal user would put in a rack if they didn't need to use macOS.
The annoying thing is there's no ability to control power (or see system metrics) outside the chassis. With servers and desktop PCs, you can usually tap into power pins and such.
AWS is just used for storage, because it's cheaper than Apple maintaining it itself. Apple does have a storage datacenter at their campus at least (I've walked around one; it's many, many racks of SSDs), but almost all the public stuff is on AWS (wrapped up in encryption) AFAIK.
Apple datacenters are mainly compute, other than the storage you need to run the compute efficiently.
I assume a company like Apple either has custom server boards with tons of unified memory on M-series silicon and all the I/O they could want (boards that are ugly and thus not productized), or just uses standard expensive Nvidia stuff like everyone else.
It’s quite interesting how "boring" (traditionally enterprise?) their backend looks in the occasional peeks you get publicly. So much Apache stuff & XML.
All facts in this post. FB management always had such a shockingly different tone than other big tech companies. It felt like a bunch of friends who’d been there from the start and were in a bit over their heads with way too much confidence.
I have a higher opinion of Zuck than this, though. He nailed a couple of really important big-picture calls - mobile, ads, Instagram - and built a really effective organization.
The metaverse always felt like the beginning of the end to me though. The whole company kinda lived or died by Zuck’s judgement and that was where it just went off the rails, I guess boz was just whispering in his ear too much.
His idea seems to be to detect the approximate timing of a flip or roll with sensors and then strap in and wait for it to happen. I have some serious concerns though lol. I mean if the ball rolled off a cliff on the iceberg and fell into the water I’m pretty sure it would be like trying to survive a crash at terminal velocity, and I doubt the racing chair would handle it.
It's really funny how much better the AI is at writing Python and JavaScript than it is at C/C++. For one thing, it proves the point that those languages really are just way harder to write. For another, it's funny that the AI makes the exact same mistakes a human would in C++. I don't know if it's that the AI was trained on human mistakes, or just that these languages have such strong wells of footguns that even an alien intelligence gets trapped in them.
So in essence I have to disagree with the author's suggestion to vibe code in C instead of Python. I think the Python usability features that were made for humans actually help the AI in the exact same ways.
There are all kinds of other ways that vibe coding should change one's design, though. It's way easier now to roll your own version of some UI or utility library instead of importing one to save time. It's way easier now to drop down into C++ for a critical section and have the AI handle the annoying data marshalling. Things like that are the real unlock, in my opinion.
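As a rough sketch of what "drop down for a critical section" looks like from the Python side, this is the kind of marshalling glue I'd hand to the AI. The shared library (libhotpath.so) and its dot_product function are hypothetical; they stand in for whatever C++ you compiled behind a C interface.

```python
import ctypes

# Hypothetical C++ hot path, compiled with `extern "C"` into libhotpath.so:
#   double dot_product(const double* a, const double* b, size_t n);
lib = ctypes.CDLL("./libhotpath.so")
lib.dot_product.argtypes = [
    ctypes.POINTER(ctypes.c_double),
    ctypes.POINTER(ctypes.c_double),
    ctypes.c_size_t,
]
lib.dot_product.restype = ctypes.c_double

def dot(a: list[float], b: list[float]) -> float:
    """Marshal two Python lists into C double arrays and call the native routine."""
    n = len(a)
    ArrayType = ctypes.c_double * n
    return lib.dot_product(ArrayType(*a), ArrayType(*b), n)

if __name__ == "__main__":
    print(dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # 32.0, assuming the library exists
```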
More examples/better models and fewer footguns. In programming, the fewer (assumed correct) abstractions, the more room for error. Humans learned this a while ago, which is why your average programmer doesn't remember a lick of ASM, or have to. One of the reasons I don't trust vibe coding in lower-level languages is that I don't have multiple tools with which to cross-check the AI output. Even the best AI models routinely produce code that does not compile, much less account for all side effects. Often, the output outright curtails functionality. It casually makes tradeoffs that a human (usually) would not make. In C, AI use is a dangerous proposition.
> I don't know if it's that the AI was trained on human mistakes, or just that these languages have such strong wells of footguns that even an alien intelligence gets trapped in them.
The first one. Most of the C code you can find out there is either one-liners or shit; there are fewer big projects for the LLMs to train on, compared to Python and TypeScript.
And once we get to the embedded space, the LLMs are trained on manufacturer-written/autogenerated code, which is usually full of inaccuracies (mismatched comments), bugs, and bad practices.
> It's really funny how much better the AI is at writing Python and JavaScript than it is at C/C++. For one thing, it proves the point that those languages really are just way harder to write.
I have not found this to be the case. I mean, yeah, they're really good with Python, and yeah, that's a lot easier, but I recently had one (IIRC it was the pre-release GPT5.1) code me up a simulator for a kind of microcoded state machine in C++, and it did amazingly well - almost in one shot. It can single-step through the microcode, examine IOs, and let you set input values, etc. I was quite impressed. (I had asked it to look at the C code for a compiler that targets this microcoded state machine, in addition to some Verilog that implements the machine, so it could figure out what the simulator should be doing.) I didn't have high expectations going in, but I was very pleasantly surprised to have a working simulator with single-stepping capabilities within an afternoon, all in what seems to be pretty well-written C++.
There are a lot of strong claims that the paper could make, but does not. It never says that Congressional leaders outperform index funds. It just says, very specifically, that leaders outperform other members of Congress. The paper also does not include any clear charts of the actual returns the congressional leaders were getting.
Having done some reading on this myself, I don't think it's the case that Congress as a whole outperforms SPY. [1]