It's also, for example, the studies finding that when companies adopt AI employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
$20/month in return for measurable reductions in quality of life is not an amazing deal. It's "Heads I win, tails you lose."
Or maybe, if you're thinking of it as an enabler for a side hustle or some other project with a low probability of a high payoff, it can slightly more optimistically be regarded as a moderately expensive lottery ticket.
That's not pessimism; it's just a realistic understanding of how the tech industry actually works, informed by decades' worth of experience.
> It's also, for example, the studies finding that when companies adopt AI employees' jobs get worse. More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
Can you share those studies? I'm pretty skeptical of this effect. I find that AI has made my job easier and less stressful.
In general, I think your attitude isn't realism; it's just general pessimism about the world ("everything new is bad") that is basically unfounded.
>More multitasking, more overtime, more burnout, more skills you're expected to learn (on your own time if necessary), more interpersonal conflict among colleagues. And this is not being offset by anything tangible like an increase in pay.
Similar things happened with the adoption of computers in the workplace. Perhaps there's a case for banning all digital technology and hiring typists and other assistants to perform the work using typewriters and mechanical calculators? There would certainly be less multitasking when you have 8 hours' worth of documents to retype and file/mail. Perhaps there would be less overtime when your boss can see you have a high workload by the state of the papers piled upon your desk. Or maybe we can solve these problems in a different way.
I have found that it works well as an open-endedly dynamic process when you are doing the kind of work that the people who came up with Scrum did as their bread and butter: limited-term contract jobs that were small enough to be handled by a single pizza-sized team and whose design challenges mostly don’t stray too far outside the Cynefin clear domain.
The less any of those applies, the more costly it is to figure it out as you go along, because accounting for design changes can become something of a game of crack the whip. Iterative design is still important under such circumstances, but it may need to be a more thoughtful form of iteration that’s actively mindful about which kinds of design decisions should be front-loaded and which ones can be delayed.
You definitely need limits around it. Especially as a consulting team. It's not for open ended projects, and if you use it for open ended projects as a consultant you're in for a world of hurt. On the consultant side, hard scope limits are a must.
And I completely agree that estimating requirement proximity is a critical skill. I also think it's a much easier task than estimating time.
But also, based on what I have heard of their headcount, they are not necessarily saving any money by vibecoding it - it seems like their productivity per programmer is still well within the historical range.
That isn’t necessarily a hit against them - they make an LLM coding tool and they should absolutely be dogfooding it as hard as they can. They need to be the ones to figure out how to achieve this sought-after productivity boost. But so far it seems to me like AI coding is more similar to past trends in industry practice (OOP, Scrum, TDD, whatever) than it is different in the only way that’s ever been particularly noteworthy to me: it massively changes where people spend their time, without necessarily living up to the hype about how much gets done in that time.
Just based on skimming the transcript, it sounds like it wasn't a D&D campaign; it was actually a roguelike CRPG that he vibecoded in Claude Code.
Still, he mentions some gross things, like how he "got bullied" by the LLM into adding a game mechanic he didn't want. It kept adding it, without being asked, and he finally just got tired of taking it back out again. The mechanic in question was a leveling system, so I imagine the LLM kept adding it because that's such a standard-issue element of dungeon crawler games. Which speaks to my main source of pessimism about AI in games: LLMs have a tendency to want to do standard, middle-of-the-road things, and will tend to fight you every step of the way when you try to involve them in an attempt to do something new and different.
But I imagine you'd run into a similar thing with D&D campaigns. Which, if true, raises the question: why would I need an LLM to generate Dragonlance and Forgotten Realms style quests when back issues of Dungeon magazine already give me more of that kind of material than I could reasonably get through in a lifetime, and probably in a much more polished form?
In earlier text adventures (e.g., Infocom games), some portion of those constraints was due to the authors failing to anticipate legitimate ways that users would try to phrase things and account for them in the game. But that's not nearly such a problem in anything made since the late '90s, especially if you stick to XYZZY Award winners.
The more essential reason for that constraint is that it's just good storytelling. The author of a work of IF has an idea they want to explore. That main idea could be narrative (Photopia or Anchorhead), or it could be a gameplay mechanic (Savoir-Faire or Counterfeit Monkey). But in any case, if your goal is to appreciate the creator's vision, those constraints are critical because they telegraph to you, the player, what you should and should not be exploring.
This isn't an idea that's specific to text adventures, either. The creators of Outer Wilds deliberately made areas flat and boring when there wasn't anything there for the player to do to advance the story, specifically because they didn't want players wasting time on exploration that would ultimately prove to be pointless. This is also why open world games that do go for a more uniformly detailed world also need to hand-hold the player and tell them where they need to go every step of the way. Without that, players would tend to get lost, lose their sense of progress, and ultimately end up bored.
I think that, because of this dynamic, using AI to flesh out the unimportant bits of the game would be a cardinal game design sin. Making bloat cheap and easy does not make it good. It just makes more of it.
I also live in Chicago. The closest bus stop to my house is 2 blocks away, and the 2nd closest stop on that same line is 3 blocks away - just one block further in the direction I’m going.
I simply don’t believe that eliminating that closest stop would worsen my commute. When I’m leaving home, I would walk a block further, but probably 80+% of the time it would not increase the time I spend out in the elements, because I’d just replace time standing at the bus stop with time walking to the next one. The only time it would hurt me is on the rare occasion that the bus passes me while I’m walking that extra block. (Pessimistically assuming 2 minutes to walk one block, with buses coming every 10 minutes on average, is how I get 80%; there’s a quick sketch of that arithmetic below.)

But I bet doing that all up and down the route would make the bus much more predictable. That closest stop is within the distance that cars back up from the traffic light at the next intersection when there’s traffic, and when the bus stops at my intersection it can often get pinned in the stop for a while when motorists aren’t in the mood to let it re-enter traffic. Multiply that phenomenon by, say, 20 extra stops and you get some pretty unreliable service for people trying to get to work in the morning.

I bet most of us would happily walk an extra block if it means we no longer have to leave for work half an hour early. Two minutes of extra walking on either end adds up to 4 minutes of “wasted” time (and I’m not sure I count walking as wasted time anyway; physical activity is good for me), which is a lot less than the 30 minutes wasted padding my commute to account for less reliable service.
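Here’s that sketch as a minimal simulation (purely illustrative; the 2-minute walk, 10-minute average headway, and uniformly random bus arrivals are my assumptions, not measured data):

```python
import random

# Assumptions (mine, not measurements): one block takes 2 minutes to walk,
# and within each 10-minute headway the bus arrives uniformly at random.
WALK_MIN = 2.0
HEADWAY_MIN = 10.0
TRIALS = 100_000

passed_while_walking = 0
for _ in range(TRIALS):
    bus_arrival = random.uniform(0, HEADWAY_MIN)  # minutes after I leave home
    # Removing the closest stop only costs me time if the bus reaches my
    # old stop while I'm still walking the extra block to the next one.
    if bus_arrival < WALK_MIN:
        passed_while_walking += 1

print(f"bus passes me mid-walk: {passed_while_walking / TRIALS:.1%}")
# ~20% of runs; the other ~80% I'd just have been standing at the stop anyway
```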
And then when I’m coming home I get off at that stop that’s a block further away anyway, because there’s a light at that intersection but not at the one where the closest stop lies. I can easily spend more time waiting for a gap in traffic large enough to cross a busy street during the evening rush than it takes to walk that extra block.
I’ve been a consistent split keyboard user for a quarter century now. My current daily driver is a Redox, which uses a columnar layout. I got into them when I first started having problems with tendinitis. I feel like they help, but I’m not sure what the science says about it.
Anyway, I’ve always hated that diagram because it’s so obviously hyperbolic. I also use standard keyboards on a daily basis, and the bending to make the hands perpendicular to the keyboard just does not happen. Comfortably placing your fingers on the home row requires angling your hands a bit because the fingers are all different lengths. Are there some posture differences? Sure. But from what I’ve seen they’re really quite minor.
What I would guess makes more of a difference is tenting, which is admittedly only possible with a split design. But also, not all split keyboards do tent.
Also, and this one might be specific to my particular problem, moving keys the thumb strikes to a position that it can reach with less stretching has helped a lot. (I suspect that the space bar in particular might have been the source of most of my woes.) And that’s another variable that’s highly correlated with - but still not the same as - the keyboard being split.
Try things and see for yourself. I know that’s not super satisfying advice, but everyone has a different experience with these things so there are no easy answers.
Start small. Don’t feel pressured to dive straight into the $300 keyboards. I have a fancy custom mechanical keyboard myself, but that’s because a few years back I decided it would be fun to get into using a more hackable keyboard. For a very long time I was more than content with the (sadly now discontinued) Microsoft Sculpt keyboard, which was one of the least expensive options.
For what it’s worth, ObjC is not Apple’s brainchild. It just came along for the ride when they chose NeXTSTEP as the basis for Mac OS X.
I haven’t used it in a couple decades, but I do remember it fondly. I also suspect I’d hate it nowadays. Its roots are in a language that seemed revolutionary in the 80s and 90s - Smalltalk - and the melding of it with C also seemed revolutionary at the time. But the very same features that made it great then probably (just speculating - again I haven’t used it in a couple decades) aren’t so great now because a different evolutionary tree leapfrogged ahead of it. So most investment went into developing different solutions to the same problems, and ObjC, like Smalltalk, ends up being a weird anachronism that doesn’t play so nicely with modern tooling.
I've never written whole applications in ObjC but have had to dabble with it as part of Ardour (ardour.org) implementation details for macOS.
I think it's a great language! As long as you can tolerate dynamic dispatch, you really do get the best of C/C++ combined with its run-time manipulable object type system. I have no reason to use it for more code than I have to, but I never grimace if I know I'm going to have to deal with it. Method swizzling is such a neat trick!
It is, and that’s part of what I loved about it. But it’s also the kind of trick that can quickly become a source of chaos on a project with many contributors and a lot of contributor churn, like we tend to get nowadays. Because - and this was the real point of Dijkstra’s famous paper; GOTO was just the most salient concrete example at the time - control flow mechanisms tend to be inscrutable in proportion to their power.
And, much like what happened to GOTO 40 years ago, language designers have invented less powerful language features that are perfectly acceptable 90% solutions. e.g. nowadays I’d generally pick higher order functions or the strategy pattern over method swizzling because they’re more amenable to static analysis and easier to trace with typical IDE tooling.
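To make that concrete, here’s a minimal sketch in Python rather than ObjC, since Python’s monkey-patching is a close cousin of method swizzling (the Logger class and all the names here are made up for illustration):

```python
# Swizzling-style approach: replace a method on the class at runtime.
class Logger:
    def log(self, msg):
        print(msg)

_original_log = Logger.log

def _audited_log(self, msg):
    # Every Logger everywhere now behaves differently, and no static
    # analysis can easily tell you where the change came from.
    _original_log(self, "[audit] " + msg)

Logger.log = _audited_log
Logger().log("hello")  # prints "[audit] hello"

# Higher-order-function / strategy alternative: the extra behavior is an
# explicit parameter, so an IDE can trace exactly where it is injected.
def make_audited(log_fn):
    def audited(msg):
        log_fn("[audit] " + msg)
    return audited

audited_print = make_audited(print)
audited_print("hello")  # prints "[audit] hello"
```

The first version changes behavior for every caller of Logger.log, sight unseen; the second keeps the modified behavior local to whoever was handed the wrapped function.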
I don't really want to defend method swizzling (it's grotesque from some entirely reasonable perspectives). However, it does work on external/3rd party code (e.g. audio plugins) even when you don't have control over their source code. I'm not sure you can pull that off with "better" approaches ...
Many of the built-in types in Objective-C have names beginning with “NS”, like “NSString”. The NS stands for NeXTSTEP. I always found it insane that, so many years later, every iPhone on Earth was running software written in a language released in the 80s. It’s definitely a weird language, but really quite pleasant once you get used to it, especially compared to other languages from the same time period. It’s truly remarkable they made something with such staying power.
>It’s truly remarkable they made something with such staying power
What has had the staying power is the API because that API is for an operating system that has had that staying power. As you hint, the macOS of today is simply the evolution of NeXTSTEP (released in 1989). And iOS is just a light version of it.
But 1989 is not all that remarkable. The Linux API (POSIX) was introduced in 1988, but work on it started in 1984, and it was based on an API that emerged in the 70s. And the Windows API goes back to 1985. Apple's is the newest API of the three.
As far as languages go, the Ladybird team is abandoning Swift to stick with C++, whose development goes back to 1979. And of course C++ is just an evolution of C, which goes back to 1972 and which almost all of Linux is still written in.
And what is Ladybird even? It is an HTML interpreter. HTML was introduced in 1993. Guess what operating system HTML and the first web browser were created on. That is right...NeXTSTEP.
In some ways ObjC’s and the NeXTSTEP API’s staying power is more impressive because they survived the failure of their relatively small patron organization. POSIX and C++ were developed at and supported by tech titans - the 1970s and 1980s equivalents of FAANG. Meanwhile, back at the turn of the century we had all witnessed the demise of NeXT and many of us were anticipating the demise of Apple, and there was no particularly strong reason to believe that a union of the two would fare any better, let alone grow to become one of the A’s in FAANG.
I actually suspect that ObjC and the NeXT APIs played a big part in that success. I know they’ve fallen out of favor now, and for reasons I have to assume are good. But back in the early 2000s, the difference in how quickly I could develop a good GUI for OS X compared to what I was used to on Windows and GNOME was life changing. It attracted a bunch of developers to the platform, not just me, which spurred an accumulation of applications with noticeably better UX that, in turn, helped fuel Apple’s consumer sentiment revival.
Good take. Even back in the 1990s, OpenStep was thought to be the best way to develop a Windows app. But NeXT charged per-seat licenses, so it didn't get much use outside of Wall Street or other places where Jobs would personally show up. And of course something like the iPhone is easier when they already had a UI framework, an IDE, etc.
Assuming you mean C (C++ is an 80s child), that’s trivially true because devices with an ObjC SDK are a strict subset of devices that are running on C.
Yes, that is why I don't find it "insane" like the grandparent does, like yeah, devices run old languages because those languages work well for their intended purpose.
You should feel that C’s longevity is insane. How many languages have come and gone in the meantime? C is truly an impressive language that profoundly moved humanity forward. If that’s not insane (used colloquially) to you, then what is?
NeXT was more or less an Apple spinoff that was later acquired by Apple. Objective-C was created because using standards is contrary to the company culture. And with Swift they are painting themselves into a corner.
> Objective-C was created because using standards is contrary to the company culture
Objective-C was actually created by a company called Stepstone that wanted what they saw as the productivity benefits of Smalltalk (OOP) with the performance and portability of C. Originally, Objective-C was seen as a C "pre-compiler".
One of the companies that licensed Objective-C was NeXT. They also saw pervasive OOP as a more productive way to build GUI applications. That was the core value proposition of NeXT.
NeXT ended up basically taking over Objective-C, and then it became a core part of Apple when Apple bought NeXT to create the next generation of macOS (the one we have now).
So, Objective-C was actually born attempting to "use standards" (C instead of Smalltalk) and really has nothing to do with Apple culture. Of course, Apple and NeXT were brought into the world by Steve Jobs.
> Objective-C was created because using standards is contrary to the company culture.
What language would you have suggested for that mission and that era? Self or Smalltalk and give up on performance on 25-MHz-class processors? C or Pascal and give up an excellent object system with dynamic dispatch?
C's a great language in 1985, and a great starting point. But development of UI software is one of those areas where object oriented software really shines. What if we could get all the advantages of C as a procedural language, but graft on top an extremely lightweight object system with a spec of < 20 pages to take advantage of these new 1980s-era developments in software engineering, while keeping 100% of the maturity and performance of the C ecosystem? We could call it Objective-C.
And, hear me out here - perhaps for the sake of morale it makes sense to leave a smidge of the part of the job that actually attracts people to this profession in the first place on their plates. Otherwise we may find that, after the novelty wears off, we’re left with a net productivity dropoff because there’s not as much left to keep people motivated to do a good job of the remaining work.