> the same thing happened to the Mac chess app with the release of the M1
I fired up Chess shortly after getting an M1 and got destroyed a bunch of times. I thought that I was just extremely out of practice and quit playing for years. I guess it's better to find out late rather than never.
As a child I lived near Gorey and accumulated a collection of little picture books signed by him. I never knew what to make of them at the time...they weren't anything like Calvin & Hobbes or The Far Side.
Until reading this I'd pretty much forgotten about them. I'll have to go pull them out to see if I understand them better as an adult.
New Englander here. Gotten is normal vocabulary. If it's not used in British English, then it's probably a feature of North American English, since most North American linguistic differences are snapshots of common features of 16th-17th century British English that somehow ossified over here.
I've always wondered why voxel engines tend to produce output that looks so blocky. I didn't realize it was a performance issue.
Still, games like "C&C: Red Alert" used voxels, but with normal mapping that resulted in a much less blocky appearance. Are normal maps also a performance bottleneck?
I originally chose to go with axis-aligned blocks and hard axis-aligned normals because I liked the aesthetic. I've since slightly course-corrected; each voxel has bent normals which follow the surface. How much the normals are bent is artist-configurable. This has the effect of smoothing out the look of the surface when viewed from a distance, but still gives the distinct blocky look up close.
In terms of performance, there is a cost to having fully 3D normals per voxel, but it's certainly manageable. There's a lot of other, more expensive, stuff going on.
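Conceptually it's just a blend between the hard face normal and a smoothed surface normal, roughly along these lines (a simplified sketch, not our actual shader code):

    # Hypothetical sketch of per-voxel "bent normals" (illustration only):
    # blend the hard axis-aligned face normal toward a smoothed surface normal.
    import numpy as np

    def bent_normal(face_normal, surface_normal, bend=0.3):
        # bend = 0.0 -> pure blocky, axis-aligned shading
        # bend = 1.0 -> fully follows the smoothed surface
        n = ((1.0 - bend) * np.asarray(face_normal, dtype=float)
             + bend * np.asarray(surface_normal, dtype=float))
        return n / np.linalg.norm(n)

    # e.g. the top face of a voxel on a 45-degree slope, bent 30% toward the slope
    print(bent_normal([0.0, 1.0, 0.0], [0.707, 0.707, 0.0]))

At low bend values the blocky faces still read clearly up close, while at a distance the blended normals let the lighting follow the overall surface.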
Before Minecraft, basically all voxel engines used some form of non-axis-aligned normals to hide the sharp blocks. Those engines did this either through explicit normal mapping, or at the very least, by deriving intermediate angles from the Marching Cubes algorithm. Nowadays, the blocky look has become stylish, and I don't think it really even occurs to people that they could try to make the voxels smooth.
Voxels have been around since the 1980s. The smoothness came from that beautiful CRT and its inability to display crisp images. Normals weren’t really used until the early '90s, when they were used heavily by games like Comanche by NovaLogic.
The reason Minecraft voxels are blocks is that Notch (Markus Persson) famously said he was “Not good at art”. He didn’t implement the triangulation and kept them unit blocks. Games that came before Minecraft with voxels AND triangulation include Red Faction, Delta Force, and Outcast, just to name a few.
The point is, voxels aren’t anything special, no more than a texel, a vertex, a splat, a normal, or a UV. It’s just a representation of 3D space (occupied/not occupied) and can just as easily be used for culling as for rendering. The Minecraft style became popular because it reminded people of pixels, it reminded people of LEGO, and Minecraft was so popular.
It depends on how the voxels relate to the gameplay.
Regardless of the original intent, in Minecraft the voxel grid itself is a very important aspect of the core gameplay loop. Smoothing the voxel visual representation disguises the boundaries between individual logical voxels and makes certain gameplay elements more difficult or frustrating for the player. When the visuals closely (or exactly) match the underlying voxel grid, it's easy for the player to see which specific voxel is holding back some lava or if they're standing on the voxel they're about to break.
In Minecraft you can, for example, clearly count how many voxels wide something is from a distance, because the voxels are visually obvious.
In Red Faction, you're never concerned with placing or breaking very specific voxels in very specific locations, so it's not an issue.
So your point is, Minecraft uses voxels on a unit scale and Red Faction uses voxels differently, so Minecraft wins?
I get the appeal of Minecraft but Notch didn’t invent this stuff as much as you would love to believe. He simply used it in a way that made it easy to understand. To the point where people like you are explaining it to me like I have never played it. I have. I was one of the first testers.
Almost all of Minecraft is ripped off from other games. World creation: Dwarf Fortress. Mining: Dig Dug. The only original thing was the Creeper.
This seems like a needlessly antagonistic response? GP was only pointing out that the voxel shape is fundamentally important to Minecraft. It's not just a matter of Notch's artistic talent, as you said.
Anyway I don't think anybody is saying Notch invented this stuff or Minecraft was the first to do certain things. But it's probably worth pointing out that, ripped off or no, those other games haven't become remotely close to the popularity of Minecraft, so Notch clearly did something right... maybe the Creepers are why?
I don't think this should be understated. LEGO are easy and fun to build with and don't require a lot of artistic talent. The same goes for block-based games like Minecraft.
I never played Top Gun, but I did grow up playing "Turn and Burn: No Fly Zone" for the SNES. All these years later, it's still amazing to me how much the graphics improved from one console generation to the next. I don't remember any other console transition being so consequential from a graphics perspective.
Super Mario 64 was an N64 launch title. Resident Evil 4 was a late GameCube title. In my mind that's probably the biggest gap in graphical fidelity between console generations. But I can see how going from NES games like Super Mario Bros to SNES games like Star Fox would be a close contender.
PS1 -> PS2 -> PS3 or Xbox -> 360 feel more iterative because they started after the 3D era had already begun. We haven't had a new dominant paradigm for gaming since then (besides mobile gaming).
The premise is extremely flawed. If users are able to generate their own apps instead of having to buy them, it shrinks the TAM for those apps. If a meatpacker makes its own CRM, it's not going to put it on an app store or try to sell it!
Building software and publishing software are fundamentally two different activities. If AI tilts the build vs. buy equation too far into the build column, we should see a collapse in the published software market.
The canary will be a collapse in the outsourced development / consulting market, since they'd theoretically be undercut by internal teams with AI first -- they're expensive and there's no economy of scale when they're building custom software for you.
Right but now you're talking about 5 or 20 or 100 or 1000 companies building CRM software. They're basically doing mostly the same work over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and over and (I would like you to know that I typed every single one of these "and over"s with my very own fingers) and over and over and over and over and over and over and over and over and over and over and I think only the AI companies really benefit from that.
I feel silly explaining this as if it's a new thing, but there's a concept in social organization called "specialization" in which societies advance because some people decide to focus on growing food while some people focus on defending against threats and other people focus on building better tools, etc. A society which has a rich social contract which facilitates a high degree of specialization is usually more virile than a subsistence economy in which every individual has all the responsibilities: food gathering, defense, toolmaking, and more.
I wonder if people are forgetting this when they herald the arrival of a new era in which everyone is the maker of their own tools...
The thing is, empowering individuals to do specialized activities by way of a tool (instead of having to specialize themselves) is also a hallmark of progress? Like, I don’t need a “professional” to wash my clothes, and I don’t need to wash my clothes myself. I use a washing machine.
I don’t need to hire a programmer. I don’t need to be a programmer. I can use a tool to program for me.
(We sure as hell aren’t there yet, but that’s a possibility).
> (We sure as hell aren’t there yet, but that’s a possibility)
What makes you think so?
Most of the stuff I've read, my personal experience with the models, and my understanding of how these things work all point to the same conclusion:
AI is great at summarization and classification, but totally unreliable with generation.
That basic unreliability seems to be fundamental to LLMs, I haven't seen much improvement in the big models, and a lot of the researchers I've read are theorizing that we're pretty close to maxing out what scaling training and inference will do.
This seems really vague. What does "totally unreliable" mean?
If you mean that a completely non-technical user can't vibe code a complex app and have it be performant, secure, defect-free, etc, then I agree with you. For now. Maybe for a long time, we'll see.
But right now, today, I'm a professional software engineer with two decades of experience and I use Cursor and Opus to reliably generate code that's on par with the quality of what I can write, at least 10x faster than I can write it. I use it to build new features, explore the codebase, refactor existing features, write documentation, help with server management and devops, debug tricky bugs, etc. It's not perfect, but it's better than most engineers I've worked with in my career. It's like pair programming with a savant who knows everything, some of which is a little out of date, who has intermediate level taste. With a tiny bit of steering, we're an incredibly productive duo.
I know the tech is here to stay, and the best parts of it are where it provides accessibility and tears down barriers to entry.
My work is to make sure that you don't need to reach for AI just because human typing speed is limited.
I love to think in terms of instruments versus assistants: an assistant is unpredictable but easy to use. It tries to guess what you want. An instrument is predictable but relatively harder to use. It has a skill curve and perhaps a skill cap. The purpose of an instrument is to directly amplify the expressive power of its user or player through predictable, delicately calibrated responses.
My experience has been much worse. Random functions with no purpose, awful architecture with no theory of mind, thousands of lines of comprehension debt, bugs that are bizarre and difficult to track down and reason about...
This coupled with the occasional time when it "gets it right".
Those moments make me feel like I saved time, but when I truly critically look at my productivity, I see a net decline overall, and I feel myself getting dumber and losing my ability to come up with creative solutions.
I have used Claude to write a lot of code. I am, however, already a programmer, one with ~25 years of experience. I’ve also led organizations of 2-200 people.
So while I don’t think the world I described exists today — one where non-programmers, with neither programming nor programmer-management experience, use these tools to build software — I don’t a priori disbelieve its possibility.
Your washing machine can only deal with certain classes of clothing. It will completely destroy others, and has no way to determine what clothing has been put into it. Meanwhile, the average untrained-but-conscientious human will, at worst, damage a small portion of an item of clothing before spotting the problem and acting to mitigate it. (If the clothing is "absolutely must never come into contact with water" levels of dry-clean only, they might still trash the whole item, but they aren't likely to make the same mistake twice.)
Programming is far more the latter kind of task than the former. Data-processing or system control tasks in the "solve ordinary, well-specified problem" category are solved by executing software, not by programming.
Using an AI is still like hiring someone to do programming work for you. It's going to cost money. Why would you waste money? We have sewing machines, but you don't make all your own clothes, do you?
If the cost of the raw materials and worker were less than the price tag at the store, sure, I would probably opt to make my own clothes. They would fit me perfectly, and I can get the right shade of blue instead of bluish.
In the case of AI, Claude costs $100 or $200/mo for really good coding tasks. This is much less expensive than hiring someone to do the same thing for me.
Both. I would note that "real production code" is not necessarily a high bar. For example it does not rule out gross negligence. Most of the companies that outsource their thinking and working to Claude will die of it.
I have a different point of view. Claude code is extremely good at creating and maintaining solid, everyday code including Ansible playbooks (used in production), creating custom dev/ops scripts for managing servers (again, used in production), creating Grafana dashboards (again, production), comparing database performance between nodes, etc. Just because a person did not hand-write this code does not make it any less production ready. In fact, Claude reviewed our current Ansible code base and already highlighted a few errors (the files written by hand). Plus, we get the benefit of having Claude write and execute test plans for each version we create. Well worth the $100/mo we pay.
And to your note that real production code is not necessarily a high bar, what is "real production code"? Does it need to be 10,000 lines of complex C/Rust code spread across a vast directory structure that requires human-level thinking to be production ready? What about smaller code bases that do one thing really well?
Honestly, I think many coders here on HN dismiss the smaller, more focused projects when in reality they are equally important as the large, "real" production projects. Are these considered non-production because the code was not written by hand?
All it sounds like to me is that Ansible is production-ready, Grafana is production ready, the compilers and runtimes you're using are production-ready.
Each of those things is a mountain of complexity compared to the molehill of writing a single script. If you're standing on top of a molehill on top of a mountain, it's not the molehill that's got your head in the clouds.
I see so many people quote that damnable Heinlein quote about specialization being for insects as if it's some deep insight worth making the cornerstone of your philosophy, when in fact a) it's the opinion of a character in the book, and b) it is hopelessly wrong about how human beings actually became as advanced as we are.
We're much better off taking the Unix philosophy (many small groups of people each getting really really good at doing very niche things, all working together) to build a society. It's probably still flawed, but at least it's aimed in the right direction.
It's not only the AI companies that benefit; the companies themselves benefit too, from getting software that actually meets (more of) their needs rather than whatever some dev imagined they might need.
That's right. In fact, I expect more outsourcing to happen, due to risk delegation and complexity management. AI would only make humans lazier and more risk-averse. The complexity of regulations, government reach, and security risks would only increase. Risk can't be distributed to AI employees (agents). A supervisor of AI agent populations can't be held responsible for all the bugs and complexity in an AI-generated product.
A market doesn't have to shrink all that much before there's a collapse. Generally it's quite gradual, and then very sudden. There's a tipping point where a market can no longer sustain a public company and its structural overhead while revenue is declining. Investors don't want to invest in shrinking markets because it's a guaranteed way to lose money. This leads to share price collapse and the sudden, rapid destruction of market incumbents.
There might be a good reason for the lack of stories. FoundationDB runs critical infrastructure I work on, but I never actually have to think about it.
I've never spent less time thinking about a data store that I use daily.