How often are you actually doing this though?
I think I probably work on something greenfield about once a decade. The hard part is always going down a rabbit hole in established code bases. I can do the boilerplate myself in a few days; the time saved doesn't add up to even one hairy issue a year.
> The hard part is always going down a rabbit hole in established code bases.
Actually, I found that this is exactly where they shine (I wouldn't trust them with greenfield implementation myself). Exploring existing code is so much easier when you can ask them how something works. You can even ask them to find problems - you can't trust them to be correct, of course, but at least you get some good brainstorming going. And, incredibly, they often do find actual problems in the code. Pretty impressive for language models.
Nowadays? 4+ times a week. I want to learn as much as I can now that I essentially have 24/7 mentors that can remember everything I've told them.
Sure, I could write it all by hand; but even as a decent typist, I'll never match a tenth of the speed of Claude Code or opencode just GOING. Maybe there's a better way to learn, but whatever it is, it's not obvious to me.
I actually felt like I learned the most when I stopped going to Google and StackOverflow for solutions and instead moved to docs. It's far less direct, but the information is much richer. All that auxiliary information compounds. I want to skip it, feeling rushed to get an answer, but I've always been the better for taking the "scenic route". I'd then play around and learn how to push functions and abuse them. Boy, there's no learning like learning how to abuse code.
Fwiw, I do use LLMs, but they don't write code for me. They are fantastic rubber ducky machines. Far better than my cat, which is better than an actual rubber duck. They help with docs too, filling in the gaps when you don't quite understand them. But don't let them do the hard work or the boring work. The boring work is where you usually learn the most. It's also the time when you don't notice that learning is happening.
Close to 5 years. I read docs too, and I love the immersion and the full grasp of concepts you get going your route, but most days there just aren't enough hours for it.
> The boring work is where you usually learn the most. It's also the time when you don't notice that learning is happening.
That was always how I did it before mid-2025. And I do still do boring work when I truly want to master something, but doing that too much just means (for me) not finishing anything.
5 years isn't that long. I've been doing 3x that and I'm constantly learning new things. Not just new language features, but new things about languages I've been using that whole time. New ways to problem-solve. New algorithms. New tools.
Not finishing things can be okay, but also not. An important skill to learn is judging what's good enough, and writing good-enough code that's easy to upgrade later. It's important to write code to be flexible for this reason. It's also important to realize it's okay to completely throw code away.

This is also why comments are so important. Don't just write what functions do; also write how you envision the design, even if you can't get to it now. Then when you or anyone else comes back (after lunch, next week, next year, whenever) there are good hints about all of that, and getting up to speed fast becomes easy. If anything, this helps agents even more. Commenting is a vastly underappreciated skill, and it's only becoming more valuable.
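To make that concrete, here's a minimal sketch (the function and its domain are made up) of a comment that records design intent alongside the "what":

    def merge_profiles(primary: dict, secondary: dict) -> dict:
        """Merge two user profiles, preferring values from `primary`.

        Design intent: this is a stopgap for account linking. Profiles
        should eventually be event-sourced so merges are replayable;
        until then, keep this function pure (no I/O) so it can be lifted
        into that design unchanged. If you add a field whose merge rule
        isn't "primary wins", document it here.
        """
        # What it does: shallow merge; primary wins on key conflicts.
        return {**secondary, **primary}

The second paragraph of that docstring is the part that saves the next reader, human or agent, from an archaeology dig.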
In every category I can think of where China is near-first, there is some international manufacturer with a better product.
There are several areas with much higher volumes or outstandingly better value, though. Things like automotive lidar, construction assemblies (like double-glazed window units), and consumer electronics like quadcopters.
I recently bought a handheld spectrophotometer for work (color assessment). The product from the leading US company (X-Rite) is ~US$15k in my market. I bought the Chinese equivalent for US$3k. Maybe if I needed guaranteed nine 9s of colour accuracy the US product would be worth it, but for 95% of users in the market, the Chinese product is more than fine.
I have a vague theory that China's massive home market of poorer people keeps the innovation going. There's always an upside for making something 1% cheaper and simpler as more people can buy it.
That gets mocked by rich people in rich countries in the short term, but then it leads to disruptive innovation from below: cheaper, simpler items growing and eating the market.
I think you are on to something. In the US, I feel the focus is more and more on catering to maybe the top 20% who can afford to pay a lot more for things. There are fewer and fewer low-end cars. Concerts and sports events are super expensive. New apartments are usually in the higher price range. No starter homes anymore. Instead of innovating, we just increase the price of assets.
The big lidars are for ground truth collection. They get used in projects ranging from autonomous development all the way down to budget adaptive cruise control or parking sensor benchmarking.
Isn't that risk balanced by a healthy reward of controlling their verticals and possible secret sauce?
And their chips give "1600 sparse INT8 TOPS" vs the Orin's "more than 1,000 INT8 TOPS" -- so comparable enough? And going forward they can tailor it to exactly what they want?
Orin is Nvidia's last generation; the current gen is Thor at 1k TOPS per chip. Rivian's announcement specifies TOPS at the module level; the actual chip is more like 800, probably doubled across two chips. Throw two Thors on a similar board and you're looking at 2000 sparse INT8 TOPS.
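Back-of-envelope in Python, using the numbers above (marketing figures, and the two-chips-per-module split is an assumption):

    # Rivian quotes 1600 sparse INT8 TOPS at the module level.
    rivian_module_tops = 1600
    chips_per_module = 2                                      # assumption
    rivian_chip_tops = rivian_module_tops / chips_per_module  # ~800 per chip

    # Nvidia Thor: ~1000 sparse INT8 TOPS per chip (current gen).
    thor_chip_tops = 1000
    thor_module_tops = chips_per_module * thor_chip_tops      # 2000 per module

    print(rivian_chip_tops, thor_module_tops)  # 800.0 2000

So compared like-for-like at the module level, two Thors would be about 25% ahead of Rivian's quoted figure.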
I've been involved with similar efforts on both sides before. Making your own hardware is not a clear cut win even if you hit timelines and performance. I wish them luck (not least because I might need a new job someday), but this is an incredibly difficult space.
Mostly it's that it costs hundreds of millions to develop a chip; you rely on volume to recover the cost.
NVIDIA also tailor their chips to customers. It's a more scalable platform than their marketing hints at... Not to mention that they also iterate fairly quickly.
So far, anyway, being on a specialised architecture is a disadvantage; on a common architecture it's much easier to pick up the advances that come from research and from competitors. Unless you really think that you are ahead of the competition and can sell some fairly inflexible solution for a while.
And today only fools pay the $10k one-time cost. Tesla even priced the monthly amount to encourage you to go monthly. There are lots of reasons, including that they're not going to be able to upgrade people who got cars with the previous hardware; hence endless lawsuits trying to get a promised but never-delivered upgrade from hardware 3 to 4.
I don't think the throughput of a general-purpose device will make for a competitive offering, so being local is a joke. All the fun stuff is running on servers at the moment.
From there, AI integration is enough of a different paradigm that the existing Apple ecosystem is not a meaningful advantage.
Best case, Apple is among the fast copiers of whoever is actually innovative, but I don't see anything interesting coming from Apple or Apple devs anytime soon.
People said the same things about mobile gaming [1] and mainframes. Technology keeps pushing forward. Neural coprocessors will get more efficient. Small LLMs will get smarter. New use cases will emerge that don't need 160-IQ super-intellects (most use cases even today do not).
The problem for other companies is not necessarily that data-center GPUs aren't technically better; it's that the financials might never make sense, much like the financials behind Stadia never did, or at least would need Google levels of scale to bring in advertising and ultra-enterprise revenue.
> All the fun stuff is running on servers at the moment.
With "Apple Intelligence" it looks like Apple is setting themselves up (again) to be the gatekeeper for these kind of services, "allow" their users to participate and earn a revenue share for this, all while collecting data on what types of tasks are actually in high-demand, ready to in-source something whenever it makes economic sense for them...
Outside of the fun stuff, there is potential to make chat just another UI technology coupled to a specific API. Surely smaller models could do that, particularly as they improve. If that were good enough, what would be the benefit for an app developer of using an extra API? Particularly if Apple can offer an experience that's familiar across apps.
Also, why would you want it draining your battery or heating your room when a data center is only 20 milliseconds away and the payload is nothing more than a few kilobytes of text? It makes no sense for the large majority of users, whose preferences downweight privacy and the ability to tinker.
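Rough numbers (the 20 ms round trip and "a few kilobytes" are from above; the link speed is an assumption):

    # Back-of-envelope: network cost of shipping a chat request out.
    rtt_s = 0.020               # 20 ms round trip to a nearby data center
    payload_bytes = 4 * 1024    # "a few kilobytes" of text, say 4 KB
    link_bps = 100e6            # assume a 100 Mbit/s connection

    transfer_s = payload_bytes * 8 / link_bps  # ~0.3 ms
    total_s = rtt_s + transfer_s               # ~20.3 ms, dominated by RTT

    print(f"{total_s * 1000:.1f} ms")          # 20.3 ms

The network overhead is roughly 20 ms total; token-generation time dwarfs it either way.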
It depends; it feels like in some categories the premium between a material that's very suitable and some ersatz lookalike is massive and depressing.
I love a good petrochemical, but sometimes it would be nice if the cheap-thing store wasn't so callously pushing veneers and pleathers that last just long enough to lose the receipt.
One state in India is particularly strong in its labour-centric protectionism. As far as I have seen, most families' earnings there come from one family member working in the "Gulf". The labour unions there are a big reason why industry hardly ever takes root. One example: https://x.com/Bharatiyan108/status/1948757576427901138
I don't foresee any Bluetooth need either for my desktop setup. But yeah I do see that many buyers would want that for headphones if nothing else, so it makes sense to include the chip.