Based on my own experience as someone taking the journey from junior developer to CTO of a mid-sized company: No, you can't keep that mental model for long.
At first I would write code, which involved a ton of reading and _truly_ understanding code written by others.
Then I would increasingly spend my (technical) time on code reviews. At some point I lost a lot of my intuition about the system and proper reviews took a long time, so I ended up delegating all of that.
Finally, I would mainly talk to middle managers and join high level conversations. I'd still have a high level idea about how everything worked, but kinda lost my ability to challenge what our technical people told me. I made sure to carve out some time to try and stay on top, but I got really rusty.
This was over a time frame of perhaps two or three years. Since then, I've made changes, working at lower levels. I think I got my mojo back, but it took another one or two years of spending ~50% of my day programming.
Other people will be different, but that's how it was for me. To truly understand and memorise something, I need to struggle with it personally. And truly understanding things helps with a lot of higher level work.
But as with anything that takes a few years to materialise, you usually notice it quite a while after the damage is done. Long feedback cycles (like for business decisions, investments into code quality etc) are the root of all evil, IMHO.
As an amateur game programmer, I found nothing too advanced in there. A classic book series would be Game/Programming/GPU Gems, about as old as Max Payne. But frankly, you'd run into most of these concepts attempting to make a 3D game with Godot or something. That's the nice thing to take away from TFA in my opinion: They made a very nice looking game (for the time) with largely pretty simple techniques used cleverly.
A lot of what is shown here comes down to the people in the art asset and level pipeline.
The real story to tell would be what tooling was used to pre-bake the lighting into the textures, e.g. whether they used a separate rendering package or mostly painted by hand, or in what mix.
Also what guidelines they used to make sure the baked-in reflections would match the use and environmental lighting of objects in the scene, e.g. just general constraints, or how much customization there was for important unique arrangements. Was it done by the same person in a tight loop, or did it involve hand-offs, etc.?
The excellence of the result is down to a lot of tasteful choices in how to blend these techniques, achieved either from experience or iteration.
As programmers we tend to focus a lot on the raw rendering techniques, but there's a whole systematic practice around art direction and how to achieve and maintain quality in it that feels, I guess, softer and less deterministic, but is still worth talking about at length.
This especially struck me when the reviewer here recommended using multiple stacked texture planes and parallax mapping to improve things. I know a handful of games that have done this, and unless used exceedingly sparingly (e.g. a mesh fence over pipes or something, where the foreground isn't expected to have a lot of depth), in my experience it very quickly gives away the illusion and looks very hokey. Humans are good at telling that it's planes sliding over each other, and it doesn't correspond to their experience of depth perception. It also makes a scene a lot busier as the camera moves, firing "something is changing here!" perceptual signals all over the screen (note how all the lavish particle effects are about feedback instead), which is not the atmosphere Max Payne was trying to achieve.
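To make the "sliding planes" point concrete, here's a toy sketch (plain Python, made-up layer depths, not anything from a real engine) of how stacked-plane parallax typically works: each plane shifts by the camera offset divided by its depth, and those rigid per-layer shifts are exactly what the eye picks up on.

```python
# Toy parallax-layer sketch (hypothetical depths, illustrative only).
# Each plane scrolls at a rate inversely proportional to its depth;
# the eye quickly notices that the layers slide rigidly instead of
# occluding and deforming like real geometry would.

def layer_offset(camera_x: float, depth: float) -> float:
    """Horizontal screen offset for a plane at the given depth.

    depth = 1.0 moves 1:1 with the camera; larger depths move
    proportionally less, which is the parallax cue.
    """
    return camera_x / depth

camera_x = 120.0  # camera has panned 120 px to the right
for name, depth in [("fence", 1.0), ("pipes", 2.0), ("far wall", 8.0)]:
    print(f"{name:>8}: shifts {layer_offset(camera_x, depth):6.1f} px")
```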
In other words, sometimes it's about knowing what possible thing not to do, too. And a lot of magic happens when disciplines meet.
> e.g. if they used a separate rendering package or mostly painted by hand
IIRC Max Payne was one of the earlier games to rely heavily on photo-reference textures (instead of hand drawing or computer generating them). Keep in mind that in 2001 digital cameras were rare, expensive, and low-res, so people often just used film cameras and scanned in the physical photo with a flatbed scanner. Max Payne was far from the first, though; even 1998's Half-Life used some photo-ref textures.
The lighting in Max Payne's textures was probably mostly just the lighting from the original photo. Every texture had to be hand-manipulated to make it usable on 3D models, so changing the lighting would have added even more work and would have looked less realistic.
Macs fundamentally can't be personal computers, since they're entirely controlled by Apple. Any computer running nonfree software can't be a personal one.
So? You can replace the ROM chip (or flash it, if it's an EEPROM). The whole point of free software is that you don't have to limit yourself to what the manufacturer says you can do.
I think the core of the issue is that Mozilla is thinking big. They're not happy to service a niche well (which is what the majority of comments on Mozilla-related posts are generally asking them to do), they want to get back to their glory days, capture the mainstream.
And that is tough. Chrome won because it was, at the time, a superior product, AND because it had an insane marketing push. I remember how it was just everywhere. Every other installer pushed Chrome on you, as did all the Google properties; it was all over the (tech) news, shaping new standards aggressively, etc. Not something Mozilla can match.
But they just won't give up. I don't know if I should applaud that or not, but I think it's probably the core of the disconnect between Mozilla and the tech community. They desperately want to break into the mainstream again, their most vocal supporters want them to get a reality check on their ambitions.
If I were running Mozilla, I'd probably go for the niche. It's less glamorous, but servicing a niche is relatively easy: all you have to do is listen to users and focus on stuff they want and/or need. You generally get enough loyalty to be able to move a bit behind the curve, see what works for others first, then do your own version once it's clear how it'll be valuable to the user base. I'd give this strategy the highest chance of long-term survival and impact.
Mainstream is way tougher. You kinda need to make all kinds of people with different needs happy enough, and get ahead of where those wants and needs are going.
One could argue they could do both: Serve a niche well with Firefox and try to reach the mainstream with other products. I think to some degree they've tried it, with mixed results.
Today's mainstream players didn't get mainstream by striving to be mainstream. They got there by serving a niche well and then expanding the niche. Trying to go mainstream without a niche moat will make you lag behind the establishment endlessly.
I'm not an Apple fan (rather an Apple hater, if you will), but they are a perfect example of this. First, have a top-quality niche product, then go into the big waters with the vision you got from the niche - and then people will actually be willing to give up bells and whistles for a product that is good enough.
Mozilla has a well-established niche with a vision, but they can't monetize without giving up that vision (and apparently they consider opening up to small direct donations, or maybe even direct bug/feature crowdsourcing, not worth it). So they keep jumping on every sidetrack. And keep losing even the niche they have.
Quite a fascinating adventure, even if it's not continuous.
Good teaching moment for why estimates of big endeavours tend to be off, too. He appears to have slightly overestimated his average walking speed and greatly underestimated breaks (only some of which were by choice from what I gather).
The total journey appears to be 58,000 km (36,000 miles).
Expectation: 8 years, which translates to a daily average of almost 20 km (~12.5 miles). That's about 4-6 hours of walking time at my speed. Every. Single. Day. In sickness or in health, on country roads or through frozen wastelands. Seems optimistic even without anticipating any delays?
Reality: After 8 years, he had actually finished about half the distance, which I already find impressive. As of October, he has 2,213 km (1,375 miles) left. That means he traveled 55,787 km (34,664 miles) in around 27 years. That puts him at a daily average of almost 6 km (~3.7 miles), so probably 1-2 hours of daily walking time. That's actually not bad considering all the delays, but quite a bit less than anticipated.
New estimate: He expects to be home "by 2026", let's say January. Based on that premise, his new estimate is that he will walk 2,213 km in ~4 months. That's a bit more than 17 km (~10.5 miles) per day. Relatively close to his original, comparatively uninformed estimate, funnily enough.
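For the curious, a quick back-of-the-envelope check of those averages (a throwaway sketch; the day counts are my own rough assumptions, not from the article):

```python
# Rough sanity check of the daily averages above.
# Day counts are approximate assumptions.

KM_PER_MILE = 1.60934

total_km = 58_000
remaining_km = 2_213
walked_km = total_km - remaining_km           # 55,787 km

planned = total_km / (8 * 365)                # ~19.9 km/day planned
actual = walked_km / (27 * 365)               # ~5.7 km/day over ~27 years
to_finish = remaining_km / (4 * 31)           # "~4 months" -> ~17.8 km/day

for label, km in [("planned", planned), ("actual", actual),
                  ("to finish", to_finish)]:
    print(f"{label:>9}: {km:5.1f} km/day (~{km / KM_PER_MILE:.1f} miles)")
```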
All that said, I don't think I'd have the willpower to see this through, especially considering all the setbacks. Mighty impressive.
In my experience, "font" is the colloquial term referring to either. Programmers get to demand precision, for journalists it's a bit tougher. The de facto meaning of terms does, unfortunately, evolve in sometimes arbitrary ways. And it's tough to fight.
That would be considered a derivative work of the C code, therefore copyright protected, I believe.
Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If you're prodding an LLM to give you a variety of results instead, probably not.
But significantly editing LLM generated code _should_ make it your copyright again, I believe. Hard to say when this hasn't really been tested in the courts yet, to my knowledge.
The most interesting question, to me, is: who cares? If we reach a point where highly valuable software is largely vibe coded, what do I get out of a lack of copyright protection? I could likely write down the behaviour of the system and generate a fairly similar one. And how would I even be able to tell, without insider knowledge, what percentage of a code base is generated?
There are some interesting abuses of copyright law that would become more vulnerable. I was once involved in a case where the court decided that hiding a website's "disable your ad blocker or leave" popup was actually a case of "circumventing effective copyright protection". In this day and age, they might have had to produce proof that it was, indeed, copyright protected.
"Can you replay all of your prompts exactly the way you wrote them and get the same behaviour out of the LLM generated code? In that case, the situation might be similar. If that's not the case, probably not." Yes and no. It's possible in theory, but in practice it requires control over the seed, which you typically don't have in the AI coding tools. At least if you're using local models, you can control the seed and have it be deterministic.
That said, you don't necessarily always have 100% deterministic build when compiling code either.
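For what it's worth, here's roughly what pinning the sampling down looks like against a local model. This is a sketch under assumptions: a local Ollama server with a model named "llama3" pulled, using its /api/generate endpoint with a fixed seed and zero temperature (determinism can still depend on the backend and hardware).

```python
# Sketch: reproducible generation against a local Ollama server.
# Assumes Ollama is running on localhost with the "llama3" model pulled.
import json
import urllib.request

def generate(prompt: str) -> str:
    payload = {
        "model": "llama3",
        "prompt": prompt,
        "stream": False,
        # Fixed seed + zero temperature pins the sampling.
        "options": {"seed": 42, "temperature": 0},
    }
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Two identical calls should now return identical output.
a = generate("Write a Python function that reverses a string.")
b = generate("Write a Python function that reverses a string.")
print(a == b)  # expected: True
```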
That would be interesting. I don't believe it's legally relevant whether you get 100% the same bytes every time a derivative work is created the same way. Take filters applied to copyright-protected photos: the output might not be the exact same bytes every time you run the filter, but it looks the same, and it's clearly a derivative work.
So in my understanding (not as a lawyer, but someone who's had to deal with legal issues around software a lot), if you _save_ all the inputs that will lead to the LLM creating pretty much the same system with the same behaviour, you could probably argue that it's a derivative work of your input (which is creative work done by a human), and therefore copyright protected.
If you don't keep your input, it's harder to argue because you can't prove your authorship.
It probably comes down to the details. If your prompt is "make me some kind of blog", that's probably too trivial and unspecific to benefit from copyright protection. If you specify requirements to the degree where they resemble code in natural language (minus boilerplate), different story, I think.
(I meant to include more concrete logic in my post above, but it appears I'm not too good with the edit function, I garbled it :P)
Google, Meta and Microsoft would have to compete on demand, i.e. users of the chat product. Not saying they won't manage, but I don't think the competition is about ad tech infrastructure as much as it is about eyeballs.
It might take Microsoft's Bing share, but Google and Meta pioneered the application of slot-machine variable-reward mechanics to Facebook, Instagram and YouTube, so it would take a lot more than competing on demand to challenge them.