
Wait, so let me get this straight.

Altman got Disney to pay OpenAI, via an investment, for Sora -- which was likely trained on and used to generate infringements of all kinds of their copyrighted material.

And then Disney sends Google a Cease & Desist for using its copyrighted material, not only restricting what people can do with Google's AI image generators, but which could potentially also force Google to retrain all their models without Disney content.

Very likely Disney will reach a licensing deal with Google, which would conveniently finance Disney's investment in OpenAI.

And all this on the heels of the coup where Altman simultaneously signed a deal with Samsung and SK Hynix to lock up 40% of the world's DRAM supply, effectively cornering a key component for AI training hardware.

As I've said before: All these others are playing Capitalism. Altman out there playing Game of Thrones.

/popcorn


Note I'm not saying one is better than the other, but my takes:

1. The problem solving is in figuring out what to prompt, which includes correctly defining the problem, identifying a potential solution, designing an architecture, decomposing it into smaller tasks, and so on.

Giving it a generic prompt like "build a fitness tracker" will result in a fully working product, but it will be bland, since it would be the average of everything in its training data and won't provide any new value. Instead, you probably want to build something that nobody else has, because that's where the value is. This will require you to get pretty deep into the problem domain, even if the code itself is abstracted away from you.

Personally, once the shape of the solution and the code is crystallized in my head typing it out is a chore. I'd rather get it out ASAP, get the dopamine hit from seeing it work, and move on to the next task. These days I spend most of my time exploring the problem domain rather than writing code.

2. Learning still exists but at a different level; in fact it will be the only thing we will eventually be doing. E.g. I'm doing stuff today that I had negligible prior background in when I began. Without AI, I would probably require an advanced course just to get up to speed. But now I'm learning by doing while solving new problems, which is a brand new way of learning! Only I'm learning the problem domain rather than the intricacies of code.

3. Statistically speaking, the people who hire us don't really care about the code, they just want business results. (See: the difficulty of funding tech debt cleanup projects!)

Personally, I still care about the code and review everything, whether written by me or the AI. But I can see how even that is rapidly becoming optional.

I will say this: AI is rapidly revolutionizing our field and we need to adapt just as quickly.


> The problem solving is in figuring out what to prompt, which includes correctly defining the problem, identifying a potential solution, designing an architecture, decomposing it into smaller tasks, and so on

Coding is just a formal specification, one that is suited to be automatically executed by a dumb machine. The nice trick is that the basic semantic units of a programming language are versatile enough to give you very powerful abstractions that can fit nicely with the solution you are designing.

> Personally, once the shape of the solution and the code is crystallized in my head typing it out is a chore

I truly believe that everyone who says that typing is a chore once they've got the shape of a solution gets frustrated by the number of bad assumptions they've made. That ranges from not having a good design in place to not learning the tools they're using and fighting them during the implementation (like using React in an imperative manner). You may have something as extensive as a network protocol RFC and still get hit by conflicts between the spec and what actually works.


> I truly believe that everyone who says that typing is a chore once they've got the shape of a solution gets frustrated by the number of bad assumptions they've made.

To a lot of people (clearly not including yourself), the most interesting part of software development is the problem solving part; the puzzle. Once you know _how_ to solve the puzzle, it's not all that interesting actually doing it.

That being said, you may be using the word "shape" in a much more vague sense than I am. When I know the shape of the solution, I know pretty much everything it takes to actually implement it. That also means I'm very bad at generating LOEs (level-of-effort estimates), because I need to dig into the code and try things out, to know what works... before I can be sure I have a viable solution plan.


I understand your point. But what you should be saying is that you have an idea of the correct solution. The only truly correct solution is code, or a formal proof that it is in fact correct. It's all wishes and dreams otherwise. If not, we wouldn't have all of those buffer overflows, off-by-one errors, and XSS vulnerabilities.


All we _ever_ have is an idea of the correct solution. There's no point at which we can ever say "this is the correct solution", at least not for any moderately sized software problem.

That being said, we can say

- Given the implementation options we've found, this solution/direction is what we think is the best

- We have enough information now that it is unlikely anything we find out is going to change the solution

- We know enough about the solution that it is extremely unlikely that there are any more real "problems/puzzles" to be solved

At that point, we can consider the solution "found", and actually implementing it is no longer part of solving it. Could the implemented solution wind up having to deal with an off-by-one error that we need to fix? Sure... but that's not "puzzle solving". And, for a lot of people, it's just not the interesting part.


I think you would be surprised by how much these AIs can "fill in the blanks" based on the surrounding code and high-level context! Here is an example I posted a few months ago (which is, coincidentally, related to the reply I just gave to the sibling comment): https://news.ycombinator.com/item?id=44892576

Look at the length of my prompt and the length of the code. And that's not even including the tests I had it generate. It made all the right assumptions, including specifying tunable optional parameters set to reasonable defaults and (redacted) integrating with some proprietary functions at the right places. It's like it read my mind!

Would you really think writing all that code by hand would have been comparable to writing the prompt?


I'm not surprised. It would be like being surprised by the fact that computers can generate a human portrait (which has been a thing since before LLMs), but people are still using 3D software because, while it takes more time, they have more control over the final result.


We still have complete control over the code, because after the AI generates it, it's right there to tweak as we want!

But the point is, there were no assumptions or tooling or bad designs that had to be fought. Just an informal, high-level prompt that generated the exact code I wanted in a fraction of the time. At least to me that was pretty surprising -- even if it had already become routine by then -- because I'd expect that level of wavelength-matching only between colleagues who had been working on the same team for a while.


> Coding is just a formal specification

If you really believe this, I'd never want to hire you. I mean, it's not wrong, it's just ... well, it's not even wrong.


I'd still hire them, in fact I see that level of understanding as a green flag.

Your response and depth of reasoning about why you wouldn't hire them is a red flag though. Not for a manager role and certainly not as an IC.


I provided zero depth of reasoning.

Coding is as much a method of investigating and learning about a problem as it is any sort of specification. It is as much play as it is description. Somebody who views code as nothing more than a formal specification that tells a computer what to do is inhibiting their ability to play imaginatively with the problem space, and in the work that I do, that is absolutely critical.


> zero reasoning

Yes.

> inhibiting play

Strongly disagree. The more abstraction layers you can see across, the bigger your toolbox and the more innovative your solutions to problems can be.


Honestly, I fundamentally disagree with this. Figuring out "what to prompt" is not problem-solving in a true sense imo. And if you're really going that deep into the problem domain, what is the point of having the code abstracted away?

My comment was based on you saying you don't care about the code and only what it does. But now you're saying you care about the code and review everything so I'm not sure what to make out of it. And again, I fundamentally disagree that reviewing code will become optional or rather should become optional. But that's my personal take.


> Figuring out "what to prompt" is not problem-solving in a true sense

This just sounds like "no true scotsman" to me. You have a problem and a toolkit. If you successfully solve the problem, and the solution is good enough, then you are a problem solver by any definition worth a damn.

The magic and the satisfaction of good prompting is getting to that "good enough", especially architecturally. But when you get good at it -- boy, you can code rings around other people or even entire teams. Tell me how that wouldn't be satisfying!


> My comment was based on you saying you don't care about the code and only what it does. But now you're saying you care about the code and review everything so I'm not sure what to make out of it.

I'm not the person you originally replied to, so my take is different, which explains your confusion :-)

However I do increasingly get the niggling sense I'm reviewing code out of habit rather than any specific benefit because I so rarely find something to change...

> And if you're really going too deep into the problem domain, what is the point of having the code abstracted?

Let's take my current work as an example: I'm doing stuff with computer vision (good old-fashioned OpenCV, because ML would be overkill for my case). So the problem domain is now images and perception and retrieval, which is what I am learning and exploring. The actual code itself does not matter as much as the high-level approach and the component algorithms and data structures -- none of which are individually novel, BTW, but I believe I'm the only one combining them this way.

As an example, I give a high-level prompt like "Write a method that accepts a list of bounding boxes, finds all overlapping ones, chooses the ones with substantial overlap, consolidates them into a single box, and returns all consolidated boxes. Write tests for this method." The AI runs off and generates dozens of lines of code -- including a tunable parameter to control "substantial overlap", set to a reasonable default -- the tests pass, and when I plug in the method, 99.9% of the time the code works as expected. And because this is vision-based I can immediately verify by sight whether the approach works!
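For a rough idea, here is a minimal sketch of the kind of method that prompt tends to produce -- reconstructed for illustration, not the actual generated code; the (x, y, w, h) box format, the IoU-based notion of overlap, the names, and the default threshold are all my assumptions:

    import itertools

    def iou(a, b):
        # Intersection-over-union of two (x, y, w, h) boxes.
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2 = min(a[0] + a[2], b[0] + b[2])
        iy2 = min(a[1] + a[3], b[1] + b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union else 0.0

    def merge(a, b):
        # Smallest box covering both (x, y, w, h) boxes.
        x1, y1 = min(a[0], b[0]), min(a[1], b[1])
        x2 = max(a[0] + a[2], b[0] + b[2])
        y2 = max(a[1] + a[3], b[1] + b[3])
        return (x1, y1, x2 - x1, y2 - y1)

    def consolidate_boxes(boxes, overlap_threshold=0.5):
        # Merge any pair of boxes whose IoU exceeds the (tunable) threshold,
        # and repeat until no overlapping pair remains.
        boxes = list(boxes)
        changed = True
        while changed:
            changed = False
            for i, j in itertools.combinations(range(len(boxes)), 2):
                if iou(boxes[i], boxes[j]) >= overlap_threshold:
                    boxes[i] = merge(boxes[i], boxes[j])
                    del boxes[j]
                    changed = True
                    break  # indices shifted, so rescan from the start
        return boxes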

To me, the valuable part was coming up with that whole approach based on bounding boxes, which led to that prompt. The actual code in itself is not interesting because it is not a difficult problem, just a cumbersome one to handcode.

To solve the overall problem I have to combine a large number of such sub-problems, so the leverage that AI gives me is enormous.


What people are wary of is not solving the problem on the first pass; they are wary of technical debt and unmaintainable code. The cost of change can be enormous. Software engineering is mostly about solving current problems while laying the foundation to adapt to future ones at the same time. Your approach's only focus is current problems, which is pretty much the same as people who copy-paste from Stack Overflow without understanding.


Technical debt and understanding is exactly why I still review the code.

But as I said, it's getting rare that I need to change anything the AI generates. That's partly because I decompose the problem into small, self-contained tasks that are largely orthogonal and easily tested -- mostly a functional programming style. There's very little that can go wrong because there is little ambiguity in the requirements, which is why a 3 line prompt can reliably turn into dozens of lines of working, tested code.

The main code I deal with manually is the glue that composes these units to solve the larger computer vision problem. Ironically, THAT is where the tech debt is, primarily because I'm experimenting with combinations of dozens of different techniques and tweaks to see what works best. If I knew what was going to work, I'd just prompt the AI to write it for me! ;-)
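Purely for illustration -- none of these function names, parameters, or steps are from my actual code, and it assumes OpenCV 4.x plus the consolidate_boxes sketch above -- the glue looks something like this, and the tech debt comes from endlessly swapping steps like these in and out:

    import cv2

    def detect_regions(image, canny_lo=50, canny_hi=150, overlap_threshold=0.5):
        # Each step is a small, independently testable unit; the "glue" is
        # just the particular combination currently being experimented with.
        gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        edges = cv2.Canny(gray, canny_lo, canny_hi)
        contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        boxes = [cv2.boundingRect(c) for c in contours]
        return consolidate_boxes(boxes, overlap_threshold)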


Hilarious! Kinda reinforces the idea that LLMs are like junior engineers with infinite energy.

But just telling an AI it's a principal engineer does not make it a principal engineer. Firstly, that is such a broad, vaguely defined term, and secondly, typically that level of engineering involves dealing with organizational and industry issues rather than just technical ones.

And so absent a clear definition, it will settle on the lowest common denominator of code quality, which would be test coverage -- likely because that is the most common topic in its training data -- and extrapolate from that.

The other thing is, of course, the RL'd sycophancy which compels it to do something, anything, to obey the prompt. I wonder what would happen if we tweaked the prompt just a little bit to say something like "Use your best judgement and feel free to change nothing."


Maybe I'm misinterpreting your point, but this makes it seem that your standard for "intelligence" is "inventing entirely new techniques"? If so, it's a bit extreme, because to a first approximation, all problem solving is combining and applying existing techniques in novel ways to new situations.

At the point that you are inventing entirely new techniques, you are usually doing groundbreaking work. Even groundbreaking work in one field is often inspired by techniques from other fields. In the limit, discovering truly new techniques often requires discovering new principles of reality to exploit, i.e. research.

As you can imagine, this is very difficult and hence rather uncommon, typically only accomplished by a handful of people in any given discipline, i.e., way above the standards of the general population.

I feel like if we are holding AI to those standards, we are talking about not just AGI, but artificial super-intelligence.


The unfortunate reality of engineering is that we don't get paid proportional to the value we create, even the superstars. That's how tech companies make so much money, after all.

If you're climbing the exec ladder your pay will scale a little bit better, but again, not 100x or even 10x. Even the current AI researcher craze is for an extremely small number of people.

For some data points, check out levels.fyi and compare the ratio of TCs for a mid-level engineer/manager versus the topmost level (Distinguished SWE, VP etc.) for any given company.


The whole premise of YCombinator is that it’s easier to teach good engineers business than to teach good business people engineering skills.

And thus help engineers get paid more in line with their “value”. Albeit with much higher variance.


I would agree with that premise, but at that point they are not engineers, they are founders! I guess in the end, to capture their full value engineers must escape the bonds of regular employment.

Which is not to say either one is better or worse! Regular employment does come with much lower risk, as it is amortized over the entire company, whereas startups are risky and stressful. Different strokes for different folks.

I do think AI could create a new paradigm though. With dropping employment and increasing full-stack business capabilities, I foresee a rise in solopreneurship, something I'm trying out myself.


I think you're parsing the original claim incorrectly. "Advanced software teams" does not mean teams who write advanced software, these are software teams that are advanced :-)


My understanding is prior work in NLP and symbolic AI was strongly influenced by Chomsky's philosophies, e.g. https://chomsky.info/20121101/ and https://norvig.com/chomsky.html


I feel like this bubble actually has two bubbles happening:

1. The infra build-out bubble: this is mostly the hyperscalers and Nvidia.

2. The AI company valuation bubble: this includes the hyperscalers, pure-play AI companies like OpenAI and Anthropic, and the swarm of startups that are either vaporware or just wrappers on top of the same set of APIs.

There will probably be a pop in (2), especially the random startups that got millions in VC funding just because they have a ".ai" in their domain name. This is also why OpenAI and Anthropic are getting into the infra game by trying to own their own datacenters; that may be the only moat they have.

However, when people talk about trillions, it's mostly (1) that they are thinking of. Given the acceleration of demand that is being reported, I think (1) will not really pop, maybe just deflate a bit when (2) pops.


OK, that is interesting. Separating infra from AI valuation. I can see what you mean, though, because stock prices are volatile and unpredictable, but a datacenter will remain in place even if its owner goes bankrupt.

However, I think the AI datacenter craze is definitely going to experience a shift. GPUs become obsolete really fast, especially now that we are moving into specialised neural chips. All those datacenters with thousands of GPUs will be outcompeted within a few years by datacenters with 1/4th the power demand and 1/10th the physical footprint due to improved efficiency. And if indeed the valuation collapses and investors pull out of these companies, where are these datacenters supposed to go? Would you buy a datacenter chock full of obsolete chips?


Right, the obsolescence rate of GPUs is one of the primary drivers of the depreciation-shenanigans aspect of the bubble.

However, I've come across a number of articles that paint a very different picture. E.g. this one is from someone in the GPU farm industry and is clearly going to be biased, but by the same token seems more knowledgeable. They claim that demand is so high that even 9-year-old generations still get booked like hotcakes: https://www.whitefiber.com/blog/understanding-gpu-lifecycle


> They claim that demand is so high that even 9-year-old generations still get booked like hotcakes

What does this prove? Demand is inflated in a bubble. If the AI company valuation bubble pops, demand for obsolete GPUs will evaporate.

The article you're linking here doesn't say what percentage of those 9-year-old GPUs already failed, nor does it say when they were first deployed, so it's hard to draw conclusions. In fact their math doesn't seem to consider failure at all, which is highly suspicious.

In another subthread, you pointed to the top comment here about a 5-year MTBF as supposedly contradicting the original article's thesis about depreciation. 5 years is obviously less than the 9 years here, so clearly something doesn't add up. (Besides, a 5-year MTBF is rather poor to begin with, and there isn't normally a correlation between depreciation and MTBF. So this is not a smoking gun which contradicts anything in Tim Bray's original article.)


> Demand is inflated in a bubble.

Is it? The dot-com fiber bubble, for instance, was famous for laying far more fiber than would be needed for the next decade, even as the immediate organic demand was tiny.

In this case however, each and every hyperscaler is bemoaning / low-key boasting that they have been capacity constrained for the past multiple quarters.

The other data point is the climbing rate of AI adoption as reported by non-AI affiliated sources, which also lines up with what AI companies report, like:

https://www.stlouisfed.org/on-the-economy/2025/nov/state-gen...

That article is a little crazy. Not only are 54% of Americans using AI, that's up 10 percentage points from last year... and usage at work may even be boosting national-level metrics!

> In fact their math doesn't seem to consider failure at all, which is highly suspicious.

That's a good point! If I had to guess, that may be because Burry et al. don't mention failure rates either, and seem to assume a ~2-year obsolescence based on releases of new generations of GPUs.

As such, everybody is responding to those claims. The article I linked was making the point that even 9-year-old generations are still in high demand, which also explains the 5-year vs. 9-year difference -- two entirely different generations and models, H100 vs. M4000.

And while MTBF is not directly related to depreciation, it's Bray who brings up failure rates in a discussion about depreciation. This is one reason I think he's just riffing off what he's heard rather than speaking from deep industry knowledge.

I've been trying to find any discussion that mentions concrete failure rates without luck. Which makes sense, since they're probably heavily-NDA'd numbers.


> Is it?

Yes, demand is absolutely inflated in a bubble. We're talking about GPUs, so look at hardware sales for the comparison, not utility infrastructure. Sun Microsystems' revenue during and after the dotcom bubble, for example. Or Cisco's, for a less extreme but still informative case.

> it's Bray who brings up failure rates in a discussion about depreciation

Yes, I understood his point to be that depreciation schedules for GPUs are overly optimistic (too long) while their MTBF is unusually low. Implying what is on the books as assets may be inflated compared to previous normal practices in tech.

In any case, at this point I agree with the other commenter who said you're just trying to confirm your existing opinion, so not really much sense in continuing this discussion.


I think the point is simply that "firing" indicates wrongdoing or lack of performance by the workers, whereas "layoffs" indicates the company unilaterally decided to remove them. It may sound nitpicky, but it is pertinent to your point that people are losing their livelihood through no fault of their own.


1. Being a VP in these companies does not imply they have an understanding of financing, accounting or data-center economics unless their purview covered or was very close to the teams procuring and running the infrastructure.

2. That level of seniority does, on the other hand, expose them to a lot of the shenanigans going on in those companies, which could credibly lead them to develop a "big tech bad" mindset.


So on the one side, we have a famous widely-respected 70-year-old software engineer with a lengthy wikipedia bio and history of industry-impactful accomplishments. His statements on depreciation here are aligned with things that have been discussed on HN numerous times from my recollection; here are a couple discussions that I found quickly: https://news.ycombinator.com/item?id=29005669 (search "depreciation"), https://news.ycombinator.com/item?id=45310858

On the other side, we have the sole comment ever by a pseudonymous HN user with single-digit karma.

Personally I'll trust the former over the latter.


How do his accomplishments (numerous as they may be) matter if they are only tangentially related to the topic he's discussing? The fact that his take aligns with many others' does not help if they are all outsiders ruminating on hearsay and innuendo about tightly guarded, non-public numbers. He may well simply be echoing the talking points he has heard.

I mean, per the top comment in this thread, he cites an article -- the only source with official, concrete numbers -- that seems to contradict his thesis about depreciation: https://news.ycombinator.com/item?id=46208221

I'm no expert on hardware depreciation, but the more I've dug into this, the more I'm convinced people are just echoing a narrative that they don't really understand. Somewhat like stochastic parrots, you could say ;-)

My only goal here is to get a real sense of the depreciation story. Partially out of financial interest, but more because I'm really curious about how this will impact the adoption of AI.


From what I've seen, folks at that level have massive amounts of insider knowledge, a huge network of well-informed connections at every big tech company, and a stock portfolio in the 8 to 9 figure range. Personally I would strongly doubt he's simply aligning with "outsiders ruminating on hearsay and innuendo", but I guess believe what you'd like!


> My only goal here is to get a real sense of the depreciation story

It doesn't look like your goal is to get a real sense, or at least your strategy is really poor, as you already have an opinion and want to confirm it.


It wouldn't be great for my financial interests, partial as they are, if my decisions were based on preconceived notions rather than evidence ¯\_(ツ)_/¯


Sure, you might not find someone here willing to bring you some evidence, as we are all busy, but are you confident we are not in a bubble?


I really do not want to impose on anyone, but I do want different perspectives so I appreciate all these discussions!

I think we are actually in two bubbles -- see sibling thread: https://news.ycombinator.com/item?id=46211400 -- 1) AI infra + hyperscalers, and 2) pure-play AI company / startup valuations.

My take is only the second one will pop and cause a temporary deflation in the first one, and the GPU depreciation story is going to influence when it happens, how painful it will be, and how long it will last.

However, I'm convinced this will be a temporary blip. By all the data points I can find -- academic studies, quarterly earnings, government reports, industry surveys, not to mention my daily experiences with AI -- the AI growth story looks very real and very unstoppable. To the extent I'm concerned for my kids' prospects when they say they're interested in software engineering!

