Hacker News | FieryMechanic's comments

Go has been around for quite a while now. It isn't going anywhere.


Rust has been around for over 10 years now. In the last five years the language hasn't changed much, and it has gotten better and better.


I didn't like Rust one bit and gave up learning it. Go on the other hand is quite nice.


The issue is that every other week there is a rewrite of something in Rust. I just do an eyeroll whenever I see that yet another thing is being rewritten in Rust.

I've tried compiling large projects in Rust in a VM (8GB) and I've run out of memory whereas I am sure a C/C++ large project of a similar size wouldn't run out of memory. A lot of this tooling I had to compile myself because it wasn't available for my Linux distro (Debian 12 at the time).

A lot of the tooling reminds me of NPM, and after spending a huge amount of my time fighting with NPM, I actually prefer the way C/C++/CMake handles stuff.

I also don't like the language. I do personal stuff in C++ and I found Rust really irritating when learning the language (the return rules are weird) and just gave up with it.


> whereas I am sure a C/C++ large project of a similar size wouldn't run out of memory

This is a common issue on large C++ projects for users with limited resources. You typically have to tinker with build settings, use a different linker, etc. to get the project building without OOMing.

> A lot of the tooling reminds me of NPM

My feeling is that you'd have the same criticism about a lot of other modern ecosystems because they allow you to pull dependencies easily. With things like vcpkg, that's also possible in C++ but even without it: nothing stops the same behavior of importing "random" code from the internet.


A lot of the modern tooling feels like a Rube Goldberg machine when it goes wrong. If you are forced (like I am) by byzantine corporate rules to use older versions of said tools, it is extremely painful. Don't get me started on stuff like Babel, Jest, ts-jest and all that gubbins. AI has been a godsend because I can ask Claude what I want to do.

I use a vendor directory and git submodules in my C++ project, and I've got a build that works cross-platform. The biggest issue I ran into was that MinGW and GCC implement the filesystem library differently (I don't know why). It wasn't too difficult to fix.


To be fair, you haven't explained why it's an issue to see projects being rewritten in Rust, other than that it's a bit annoying to you?

For me, I had a very good experience rewriting a project in Rust (from Python). It was just an improvement in every regard (the project was easier to build and distribute, it was a good opportunity to rearchitect it, and the code ended up 20k times faster.) So, I have the opposite feeling when I see titles like these.

I also feel the opposite about the tooling. For me, cmake, npm, pip, maven, etc. all live in this realm where any invocation could become a time purgatory. The big thing that makes me like Rust is that I've never had that experience with cargo. (In fact, pip was the primary motivation to move away from Python to Rust. Everything else was just nice.)

I'm not saying this to convince you to feel otherwise, I just wanted to offer how I felt about the same topics. I grew up with C and Java, but Rust clicked after I read SICP and followed along in Racket. I could probably fill a page of small grievances about Rust's syntax if I had to.


Rewriting stuff is largely a waste of time unless the underlying design/product is flawed. You are going to have to solve the same challenges as before, but this time in Rust.

Anyone who has been on a "rewrite" knows that often the end result will look like the previous implementation, but in <new thing>.

So what I see is a lot of development effort to re-solve problems that have already been solved. I think Ubuntu did this with coreutils recently (I don't keep up with the Linux dramas, as there is a new one every week and tbh it isn't interesting a lot of the time). They ended up causing a bunch of issues that I believe were already fixed years ago.

There are issues with things in Linux land that have been issues for years and haven't been resolved, and I feel that development effort would probably have been better spent there. I don't pay Canonical's employees though, so I guess I don't get to decide.


> A lot of the tooling reminds me of NPM, and after spending a huge amount of my time fighting with NPM, I actually prefer the way C/C++/CMake handles stuff.

I guess you and I live different lives, because I have spent far more time messing with ancient C/C++/CMake/Automake/Autoconf/configure, "just have this particular .so file in this particular spot", "copy some guy's entire machine because this project only builds there", learning an entirely different language just to write build files (CMake sucks and the 5 other alternatives are just lipstick on a pig), etc. etc.

I am of the opinion that half of Rust's success is based on the fact that C/C++'s tooling is annoying and ancient. People want to write code, not mess with build envs.


The issue you are describing isn't a problem with C/C++ tooling. It is to do with how the developer dealt with dependencies, i.e. poorly. BTW, this happens in any language if they do that. I've had to deal with some awful C# code bases that would only compile on one guy's machine.

I am a hobbyist C/C++ developer and have an intermediate-sized code base that builds on Linux, Windows (MinGW and MSVC) and macOS, and I didn't do anything particularly special. What I did do is set up CI early on in my project, so I had to fix any issues as I went along.


Sure. That doesn't matter. Of course you can keep writing your C/C++ or using CMake, nobody is going to stop that. But other people's projects are not going to stop adopting new tech stacks because of how you feel about it.


> That doesn't matter. Of course you can keep writing your C/C++ or using CMake, nobody is going to stop that.

What it is going to cause is having to learn a bunch of new tooling which I somehow have to get behaving on my Debian box, because a particular tool that I will need to compile from source will have a bunch of Rust dependencies.

I've already run into this BTW, where I wanted to compile something in Rust and it needed a third party task runner called "just". So I then needed to work out how to install/compile "just" to follow the build instructions.

Why they needed yet another task runner, when I am pretty sure make would have been fine, is beyond me.

> But other people's projects are not going to stop adopting new tech stacks because of how you feel about it.

I don't expect them to. That doesn't mean I can't comment on the matter.


RE: memory, any self-respecting CI/CD system will allow you to compile any Rust project without out-of-memory halts.

RE: NPM, you have a right to a preference of course. I certainly don't miss the times 20 years ago when pulling in a library into a C++ project was a two-week project in itself. make and CMake work perfectly right up until they don't, and the last 2% costs you 98% of the time. Strategically speaking, using make/CMake is simply unjustified risk. But this is of course always written off as "skill issue!", which I gave up arguing against because the arguers apparently have never hit the dark corners. I have. I am better off with Just and Cargo. And Elixir's mix / hex.

What you like as a language (or syntax) and what you roll your eyes at are obviously not technological or merit-based arguments. "Weird" I can't quite place as an argument either.

Use what you like. We all do the same. The original article lists good arguments in favor of Rust. Seems like a good case of "use the right tool for the job" to me.


> RE: NPM, you have a right to a preference of course. I certainly don't miss the times 20 years ago when pulling in a library into a C++ project was a two-week project in itself. make and CMake work perfectly right up until they don't, and the last 2% costs you 98% of the time. Strategically speaking, using make/CMake is simply unjustified risk. But this is of course always written off as "skill issue!", which I gave up arguing against because the arguers apparently have never hit the dark corners. I have. I am better off with Just and Cargo. And Elixir's mix / hex.

I've lost countless hours getting the Rube Goldberg machine of npm, Jest, TypeScript, ts-jest and other things to work together. In contrast, when I was learning OpenGL/Vulkan and general 3D programming, I decided to bite the bullet and just do C++ from the start, as that was what all the books/examples were in. I had been told by countless people how hard it all was and how terrible the build systems were. I don't agree; I think the JS ecosystem is far worse than make and CMake. Now, I am already an experienced programmer who already knew C# and Java, so maybe that helped, as they have many of the same concepts as C++.

Now, I did buy and read a book on CMake, and I did read the C++11 book by Bjarne Stroustrup (I found it second hand on eBay, I think).

> What you like as a language (or syntax) and what you roll your eyes at are obviously not technological or merit-based arguments.

They aren't. What I am trying to convey is that it feels like a lot of things are done because it is the new shiny thing and it is good for resume/CV padding.

> "Weird" I can't quite place as an argument either.

The return keyword is optional IIRC in some circumstances. I think that is weird. I think I stopped there because I just wasn't enjoying learning it, and there are zero Rust jobs in my area.

> Use what you like. We all do the same.

The issue is that I think it (Rust) is going to worm its way everywhere and I will be forced to deal with it.


RE: make/CMake, consider yourself in a somewhat privileged bubble, and IMO let's leave it at that. Every single time I had to deal with make or CMake, I regretted it: every time I used them in anger and to more of their capacity, with barely any exceptions. The only safe usages of make were like 50-line files max. And even there I had to manually eyeball the .PHONY targets to make sure I knew which sub-commands I could invoke without depending on file timestamps, because of course somebody decided it's a good idea to liberally mix those with the others. Sigh. Don't ask.

I don't _hate_ them or anything. They are super solid tools and I have derived a lot of value out of them in a previous life. But they leave a lot of room for overly clever humans to abuse them and make life hard for the next guy. Which is exactly how I was exposed to them, dozens of times.

The downfall of all flexible tools or PLs is that people start using them in those obscure and terrible ways. Not the fault of the tools or the PLs.

If you can control 100% of the surface where you are exposed to make/CMake then that puts you in a fairly rare position and you are right to make full use of it! Go for it. Deep work and deep skills are sorely needed in our area. I am rooting for you. :)

> The issue is that I think it (Rust) is going to worm its way everywhere and I will be forced to deal with it.

Help me understand the actual technical criticism buried in the "worm its way everywhere", please. And in the "it's good for CV/Resume padding" statement as well.

It's a strange thing to say and it smells like a personal vendetta which is weirdly common on HN about Rust and to this day I have no idea why even though I have asked many people directly.

Rust has objective technical merits and many smart devs have documented those in their blogs -- journeys on rewrites or green-field projects, databases, network tools (like OP), and others. Big companies do studies and show fewer memory-safety bugs over the course of months or years of tests. The Linux kernel devs (not unanimously) recently agreed that Rust should no longer have experimental status there -- and people are starting to write Linux drivers in Rust, and they work.

I am honestly not sure what would satisfy the people who seem to hate Rust so passionately. I guess it announcing full disband and a public apology that it ever existed? Yes this is a bit of a sarcastic question but really, I can't seem to find a place on the internet where people peacefully discuss this particular topic. (I have seen civil exchanges here on HN of course, and I love them. But most of the civil detractors ultimately simply admit they don't have a use for Rust. Again, that is very fair and valid but it is not an actual criticism towards any tech.)


> I don't _hate_ them or anything. They are super solid tools and I have derived a lot of value out of them in a previous life. But they leave a lot of room for overly clever humans to abuse them and make life hard for the next guy. Which is exactly how I was exposed to them, dozens of times.

I don't understand why many people on here say "humans" instead of "people". It sounds like you are talking as if you are a grey alien on a spaceship somewhere.

What you are complaining about is abuse of tools/language features. This can happen in any language and/or tools.

That doesn't mean the tool itself is bad.

> Help me understand the actual technical criticism buried in the "worm its way everywhere", please. And in the "it's good for CV/Resume padding" statement as well.

They are self-explanatory. I don't like it when someone plays this stupid game of pretending not to understand common idioms.

> It's a strange thing to say and it smells like a personal vendetta which is weirdly common on HN about Rust and to this day I have no idea why even though I have asked many people directly.

It isn't. What typically happens is that a tool, let's call it "Y", gets used everywhere to the point where you cannot use "X" without "Y".

Ruby used to get used for CV padding. I used to work in a Windows/.NET shop and someone wrote a whole service using Ruby on Rails on a SUSE Linux box. That person left and got a job doing Rails shortly after.

> Rust has objective technical merits and many smart devs have documented those in their blogs -- journeys on rewrites or green-field projects, databases, network tools (like OP), and others. Big companies do studies and show fewer memory-safety bugs over the course of months or years of tests.

Just because <large company> does something and says it's true doesn't mean it is, or that it's suitable for everyone.

I have worked at many <large corps> as a contractor and found that the reality presented to the outside world is very different from what is actually happening in the building.

e.g.

I was working at a large org that rewrote significant portions of their code-base in a new language instead of simply migrating their existing code-base to a new runtime.

I made plenty of money as a contractor, but it was a waste of resources and the org lost money for 3 years as a result.

BTW they never fully transitioned over to the new code-base.

Company blogs and press releases will say it was a success. I know for a fact it wasn't.

> The Linux kernel devs (not unanimously) recently agreed that Rust should no longer have experimental status there -- and people are starting to write Linux drivers in Rust, and they work.

So I will need an additional toolchain to build Linux drivers. This is what is meant by "worming its way in". I have done an LFS build, and it takes a long time to get everything built as it is.

> I am honestly not sure what would satisfy the people who seem to hate Rust so passionately. I guess it announcing full disband and a public apology that it ever existed? Yes this is a bit of a sarcastic question but really, I can't seem to find a place on the internet where people peacefully discuss this particular topic.

You are making assumptions that I hate Rust. I don't. I just don't care for it.

What I do hate is hype and this constant cycle of the IT industry deciding that everything has to be rewritten again in <new thing> because it is trendy. I have personally been through it many times now, both as an end user and as a developer making the transition to the <new thing>.


You are kind of ranting about general trends here, not to mention misrepresenting what I said, which I cannot see as arguing in good faith (I never said that big companies using something makes it good; I said that they did studies -- that's absolutely not the same as the many other types of cargo culting that they do, which I'll agree is never credible).

I am not pretending not to understand anything, by the way; I was trying to find objective technical disagreements and still can't find any in your reply. I am seeing a bit of curmudgeon-ing in several places though, so I am bowing out.


> You are kind of ranting about general trends here, not to mention misrepresenting what I said, which I cannot see as arguing in good faith (I never said that big companies using something makes it good; I said that they did studies -- that's absolutely not the same as the many other types of cargo culting that they do, which I'll agree is never credible).

I didn't misrepresent anything you said. You misunderstood what I said. I said companies will claim all sorts of things and the reality behind the scenes is very different. Having a study is one form of making claims. What works at one company may not work at another.

> I am not pretending not to understand anything, by the way;

Yes you were. What I said was plainly obvious.

CV driven development is a well known and understood phenomenon.

Technology stacks being subverted is a well known and understood phenomenon.

It is very annoying when people pretend not to understand basic idioms. It is a dishonest tactic employed by people online; I've been online since the late 90s and have seen it done many times. You are not the first and won't be the last.

> I was trying to find objective technical disagreements and still can't find any in your reply.

I gave them to you. Several, in fact.

Part of engineering is understanding that resources aren't infinite and you have to make trade-offs. So how resources (money, time, manpower, compute) are used is a technical matter. I make calls all the time on whether something is worth doing based on the amount of time I have.

This is often discussed in many blogs, podcasts and books about software engineering.

What you want to do is narrow the discussion down to the sort of claim "well, they found X more bugs using Y technique", ignoring the fact that they may have had to spend a huge amount of manpower to rebuild everything and created many more bugs in the process.

> I am seeing a bit of curmudgeon-ing in several places though, so I am bowing out.

Yes I am disillusioned with the industry after working in it for 20 years. That doesn't invalidate what I say.


The way most scrapers work (I've written plenty of them) is that you basically get the page, collect all the links, and just drill down.
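A minimal sketch of that get-the-page-and-drill-down approach, using only Python's standard-library `html.parser` (the page content and class name here are my own illustrations, not from any real scraper):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A toy page; a real scraper would fetch this over HTTP,
# then fetch each discovered link and recurse.
page = '<html><body><a href="/about">About</a> <a href="/contact">Contact</a></body></html>'
extractor = LinkExtractor()
extractor.feed(page)
print(extractor.links)  # ['/about', '/contact']
```

The drill-down part is just a loop (or queue) over `extractor.links`, with a visited-set to avoid cycles.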


So the easiest strategy to hamper them if you know you're serving a page to an AI bot is simply to take all the hyperlinks off the page...?

That doesn't even sound all that bad if you happen to catch a human. You could even tell them pretty explicitly, with a banner, that they were browsing the site in no-links mode for AI bots. Put one link to an FAQ page in the banner, since that at least is easily cached.


When I used to build these scrapers for people, I would usually pretend to be a browser. This normally meant changing the UA and making the headers look like a real browser's. Obviously this would fail against more advanced bot detection techniques.

Failing that, I would use Chrome/PhantomJS or similar to browse the page in a real headless browser.
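The header-spoofing part can be sketched with nothing but `urllib` from the standard library; the header values below are illustrative stand-ins for a real browser fingerprint:

```python
import urllib.request

# Header values that resemble a typical desktop browser; all illustrative.
BROWSER_HEADERS = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-GB,en;q=0.9",
}

req = urllib.request.Request("https://example.com/", headers=BROWSER_HEADERS)
# urllib stores header names capitalized internally, hence "User-agent" here.
print(req.get_header("User-agent"))
# urllib.request.urlopen(req) would then fetch the page with these headers.
```

As the comment notes, this only defeats naive UA checks; TLS fingerprinting and behavioral detection see straight through it.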


I guess my point is since it's a subtle interference that leaves the explicitly requested code/content fully intact you could just do it as a blanket measure for all non-authenticated users. The real benefit is that you don't need to hide that you're doing it or why...


You could add a feature kind of like "unlocked article sharing" where you can generate a token that lives in a cache so that if I'm logged in and I want to send you a link to a public page and I want the links to display for you, then I'd send you a sharing link that included a token good for, say, 50 page views with full hyperlink rendering. After that it just degrades to a page without hyperlinks again and you need someone with an account to generate you a new token (or to make an account yourself).

Surely someone would write a scraper to get around this, but it couldn't be a completely-plain https scraper, which in theory should help a lot.
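A rough in-memory sketch of that token cache (class and method names are my own invention, not from any real system; a production version would live in Redis or similar):

```python
import secrets

class ShareTokenStore:
    """In-memory cache of share tokens, each good for a fixed number of views."""
    def __init__(self, views_per_token=50):
        self.views_per_token = views_per_token
        self._tokens = {}  # token -> remaining full-render views

    def issue(self):
        """Called when a logged-in user generates a sharing link."""
        token = secrets.token_urlsafe(16)
        self._tokens[token] = self.views_per_token
        return token

    def render_links(self, token):
        """True if hyperlinks should render for this request, consuming one view."""
        remaining = self._tokens.get(token, 0)
        if remaining <= 0:
            return False  # unknown or exhausted token: degrade to no-links page
        self._tokens[token] = remaining - 1
        return True

store = ShareTokenStore(views_per_token=2)
t = store.issue()
print(store.render_links(t), store.render_links(t), store.render_links(t))
# two views succeed, the third degrades to the no-links page
```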


I would build a little stoplight status dot into the page header. Red if you're fully untrusted. Yellow if you're semi-trusted by a token, and it shows you the status of the token, e.g. the number of requests remaining on it. Green if you're logged in or on a trusted subnet or something. The status widget would link to all the relevant docs about the trust system. No attempt would be made to hide the workings of the trust system.


And obviously, you need things fast, so you parallelize a bunch!


I was collecting UK bank account sort codes (buying a database at the time cost a huge amount of money). I had spent a bunch of time using asyncio to speed up scraping and wondered why it was going so slow; it turned out I had left Fiddler profiling in the background.
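The asyncio pattern being described might look roughly like this, with `asyncio.sleep` standing in for the real HTTP request and the concurrency cap chosen arbitrarily:

```python
import asyncio

async def fetch(url, sem):
    # The semaphore caps how many "requests" are in flight at once.
    async with sem:
        await asyncio.sleep(0.01)  # stand-in for the real network call
        return url, 200

async def crawl(urls, limit=10):
    sem = asyncio.Semaphore(limit)
    # gather preserves input order in its results
    return await asyncio.gather(*(fetch(u, sem) for u in urls))

urls = [f"https://example.com/page/{i}" for i in range(5)]
results = asyncio.run(crawl(urls))
print(len(results))  # 5
```

The semaphore matters: without it, `gather` fires every request at once, which is exactly the sort of thing a proxy like Fiddler (or the target site) will punish.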


As someone with 700 hours in the game, I've played it on both Windows and Linux.

A lot of issues are down to the fact that the game seems to corrupt itself. If I have issues (usually performance related), I do a Steam integrity check and have zero issues afterwards. BTW, I've had to do this on several games now, so this isn't unique to Helldivers. My hardware is good, BTW; I've checked in various utils and the drives are "ok" as far as I can tell.

> - To their PC not reboot and BSOD (was a case few months ago)

This was hyped up by a few big YouTubers. The BSODs were because their PCs were broken. One literally had a burn mark on their processor (a known issue with some board/processor combos), and the BSODs went away when they replaced the processor. This tells me that there was something wrong with their PC and any game would have caused a BSOD.

So I am extremely sceptical of any claims of BSODs because of a game. What almost is always the case is that the OS or the hardware is at issue and playing a game will trigger the issue.

If you are experiencing BSODs, I would make sure your hardware and OS are actually good, because they are probably not. BTW, I haven't had a BSOD in Windows for about a decade because I don't buy crap hardware.

> - Be able to actually finish a mission (game still crashes a lot just after extraction, it's still rare for the full team to survive 3 missions in a row)

False. A few months ago I played it for an entire day and the game was fine. Last week I played a good portion of Saturday night. I'm in several large Helldivers-focused Discord servers and I've not heard a lot of people complaining about it. Maybe 6 months or a year ago this was the case, but not now.

> Be able to use weapon customisation (the game crashed, when you navigated to the page with custom paints)

This happened for about a week for some people, and I personally didn't experience it.

> To not issue stim/reload/weapon change multiple times, for them just to work (it's still normal to press stim 6 times in some cases, before it activates, without any real reason)

I've not experienced this, nor heard anyone complain about it, and I am in like 4 different Helldivers-focused Discord servers.

> Not be attacked by enemies that faze through buildings

This can be annoying, but it happens like once in a while. It isn't the end of the world.


> So I am extremely sceptical of any claims of BSODs because of a game.

Generally speaking, I am too. That is unless there is kernel-level anticheat. In that case I believe it's fair to disregard all other epistemological processes and blame BSODs on the game out of principle


I had them and I keep observing this strange tendency to wipe that particular issue out of existence


> In that case I believe it's fair to disregard all other epistemological processes and blame BSODs on the game out of principle

I am sorry, but that is asinine and unscientific. You should blame BSODs on what is causing them. I don't like kernel anti-cheat, but I will blame the actual cause of the issues, not assign blame to things I don't approve of.

I am a long-time Linux user, and many of the people complaining about BSODs on Windows had broken the OS in one way or another. Some were running weird stuff like 3rd-party shell extensions that modify core DLLs, or they had installed every piece of POS shovelware/shareware crap. It isn't Microsoft's fault if you run an unsupported configuration of the OS.

Similarly, the YouTubers who were most vocal about Helldivers problems did basically no proper investigation other than saying "look, it crashed", when it was quite clearly their broken hardware that was the issue. As previously stated, their CPU had a burn mark on one of the pins; some AM5 boards had faults that caused this, IIRC. So everything indicated hardware failure as the cause of the BSOD. They still blamed the game, probably because it got them more watch time.

During the same time period when people were complaining about BSODs, I didn't experience one. I was running the same build of the game as them, playing on the same difficulty, and sometimes recording it via OBS (just like they were). What I didn't have was an AM5 motherboard; I have an older AM4 motherboard, which doesn't have these problems.


> that is asinine and unscientific

Well, yes. I did say something to that effect. Blaming BSODs on invasive anti-cheat out of principle is a political position, not a scientific one.

> During the same time period when people were complaining about BSODs, I didn't experience one. I was running the same build of the game as them, playing on the same difficulty, and sometimes recording it via OBS (just like they were). What I didn't have was an AM5 motherboard; I have an older AM4 motherboard, which doesn't have these problems.

I understand what you're saying here, but anyone who does a substantial amount of systems programming could tell you that hardware-dependent behavior is evidence for a hardware problem, but does not necessarily rule out a software bug that only manifests on certain hardware. For example, newer hardware could expose a data race because one path is much faster. Alternatively, a subroutine implemented with new instructions could be incorrect.

Regardless, I don't doubt that this issue with Helldivers 2 was caused by (or at least surfaced by) certain hardware, but that does not change that given such an issue, I would presume the culprit is kernel anticheat until presented strong evidence to the contrary.


> Well, yes. I did say something to that effect. Blaming BSODs on invasive anti-cheat out of principle is a political position, not a scientific one.

When there are actual valid concerns about the anti-cheat, they will be ignored because of people who assigned blame to it when unwarranted. This is why making statements based on your ideology can be problematic.

> I understand what you're saying here, but anyone who does a substantial amount of systems programming could tell you that hardware-dependent behavior is evidence for a hardware problem, but does not necessarily rule out a software bug that only manifests on certain hardware. For example, newer hardware could expose a data race because one path is much faster. Alternatively, a subroutine implemented with new instructions could be incorrect.

People were claiming it was causing hardware damage, which is extremely unlikely since Intel, AMD and most hardware manufacturers have mechanisms that prevent this. This isn't some sort of opaque race condition.

> I would presume the culprit is kernel anticheat until presented strong evidence to the contrary.

You should know that making assumptions without evidence will often lead you astray.

I don't like kernel anti-cheat and would prefer it not exist, but making stupid statements based on ideology instead of evidence just makes you look silly.


> - To their PC not reboot and BSOD (was a case few months ago)

I was just about to replace my GPU (a 4090 at that!); I had them 3 times a session. I sank a lot of hours into debugging (replaced cables, switched PSUs between desktops) and just gave up. After a few weeks, lo and behold, a patch comes out and it all disappears.

A lot of people just repeat hearsay about the game


> > - Be able to actually finish a mission (game still crashes a lot just after extraction, it's still rare for the full team to survive 3 missions in a row)

> False. A few months ago I played it for an entire day and the game was fine. Last week I played a good portion of Saturday night. I'm in several large Helldivers-focused Discord servers and I've not heard a lot of people complaining about it. Maybe 6 months or a year ago this was the case, but not now.

I specifically mean that exact time, right after the Pelican starts to fly. I keep seeing "<player> left" or "disconnected". Some come back, and I have a habit of asking "Crash?"; they respond with "yeah".


If that is happening, they need to do a Steam Integrity check. I understand the game is buggy, but it isn't that buggy.


It's basically an Internet fable at this point that there's "a game that physically damages your hardware".

The answer to every such claim is just: no. But it's clickbait gold for the brain-damage outrage YouTuber brigade.

Accidentally using a ton of resources might reveal weaknesses, but it is absolutely not any software vendor's problem that 100% load might reveal your thermal paste application sucked or that Nvidia is skimping on cable load balancing.


Trust me, I'm a software developer with more than two decades of experience, and I've been dabbling in hardware since the Amiga 500 era. "I have that specific set of skills" that allows me to narrow down a class of issues pretty well -- just a lot of component switching in a binary divide-and-conquer fashion across hardware.

The issue is 1) actually exaggerated in the community, but not without actual substance, 2) getting disregarded exactly because of the exaggerations. It was a very real thing.

I also happen to have a multi-GPU workstation that works flawlessly too.


This was pretty much my take as well. I have an older CPU, motherboard and GPU combo from before the newer GPU power cables that obviously weren't tested properly, and I have no stability problems.

These guys are running an intensive game on the highest difficulty while streaming, and they probably have a bunch of browser windows and other software running in the background. Any weakness in the system is going to be revealed.

I had performance issues during that time and had to restart the game every 5 matches. But it takes like a minute to restart the game.


They made a decision based on existing data. This isn't as unreasonable as you are pretending, especially as PC hardware can be quite diverse.

You would be surprised what some people are playing games on. E.g. I know people who still use Windows 7 on an AMD Bulldozer rig. Atypical for sure, but not unheard of.


i believe it. hell, i've been in F500 companies and virtually all of them had some legacy XP / Server 2000 / ancient Solaris box in there.

old stuff is common, and doubly so for a lot of the world, which ain't rich and ain't rockin new hardware


My PC now is 6 years old and I have no intention of upgrading it soon. My laptop is like 8 years old and it is fine for what I use it for. My monitors are like 10-12 years old (they are early 4k monitors) and they are still good enough. I am primarily using Linux now and the machine will probably last me to 2030 if not longer.

It's a stretch to pretend this was an outrageous decision when the data and the commonly assumed wisdom were that a lot of people were still using HDDs.

They've since rectified this particular issue, and there seems to be more criticism of the company now than before it was fixed.


A lot of people in the comments here don't seem to understand that it is a relatively small game company with an outdated engine. I am a lot more forgiving of smaller organisations when they make mistakes.

The game has semi-regular patches where they seem to fix some things and break others.

The game has a lot of hidden mechanics that aren't obvious from the tutorial, e.g. many weapons have different fire modes and fire rates, and stealth is an option. The game has a decent community and people are friendly for the most part. It also has the "feature" that you can play for about 20-40 minutes, put it down again for a bit and come back.


The bad tutorial at least has some narrative justification. It's just a filter for people who are already useful as shock troops with minimal training.


Not only does the bad tutorial have an in-universe justification; the ways in which it is bad are actually significant to the worldbuilding in multiple ways.

The missing information also encourages positive interactions among the community - newer players are expected to be missing lots of key information, so teaching them is a natural and encouraged element of gameplay.

I stopped playing the game awhile ago, but the tutorial always struck me as really clever.


I also think the tutorial would be tedious if it went through too many of the mechanics. They show you the basics; the rest you pick up through trial and error.


aye. give me the 3 minute tutorial, not the 37 minute tutorial.

i want to play the game, like now, and i'll read the forums after i figure out that i'm missing something important


The tutorial is just fine - here's a gun, here's how you shoot it, here's how you call reinforcements, now go kill some bugs!


Considering it still costs $40 for a 2-year-old game, I think they are way beyond the excuse of a small team on a low budget trying to make cool stuff. They have received shit tons of money and are way too late in trying to optimise the game. When it came out it ran so pisspoor I shelved it for a long time. Trying it recently, it's only marginally better. It's really poorly optimised, and blaming old tech is nonsense.

People make much smoother and more complex experiences in old engines.

As a dev you need to know your engine, not cross its limits at the cost of user experience and then blame your tools...

The whole story about more data making load times better is utter rubbish. It's a sign of pisspoor resource management and usage. For the game they have, they should have realised a 130GB install is unacceptable. It's not like they have very elaborate environments; there are a lot of similar textures and structures everywhere. It's not some huge unique world like The Witcher or similar games...

There is an astronomical amount of information available for free on how to optimise game engines, loads of books, articles, courses.

How much money do you think they have made so far?

"Arrowhead Game Studios' revenue saw a massive surge due to Helldivers 2, reporting around $100 million in turnover and $76 million in profit for the year leading up to mid-2025, significantly increasing its valuation and attracting a 15.75% investment from Tencent"

$76 million in profit but they can't figure out how to optimise a game engine. Get out.


It costs $40 for a 2-year-old game because the market is bearing $40 for a 2-year-old game.

If anything, it's a testament to how good a job they've done making the game.


The most recent Battlefield released at $80. Arc Raiders released at $40 with a $20 deluxe edition upgrade. I think $40 for a game like Helldivers 2 is totally fair. It's a fun game, worth at least 4 to 8 hours of playtime.


> worth at least 4 to 8 hours of playtime.

Is that supposed to be praise?


It's a comment about cost per hour of entertainment, e.g. if in the general sense you're spending $5-$10 per hour of entertainment, you're doing at least OK. I understand that a lot of books and video games can far exceed this, but it's just a general metric and a bit of a low bar to clear. (I have a LOT more hours in the game, so from my perspective my $40 has paid off quite well.)


Ah sorry, I thought "at least" would carry this statement. I've played Helldivers for more than 250 hours personally.

For some reason, though, I tend to compare everything to movie theater tickets. In my head (though it's not true anymore), a movie ticket costs $8 and gives me 1 hour of entertainment. Thus anything that gives me more than 1 hour per $8 is a good deal.

$40 / 4 => $10/hr

$40 / 8 => $5/hr

Thus, I think Helldivers is a good deal for entertainment even if you only play it for under 10 hours.
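The arithmetic above can be sketched in a couple of lines (a hypothetical helper, just illustrating the dollars-per-hour comparison from this thread):

```python
def cost_per_hour(price_dollars: float, hours_played: float) -> float:
    """Dollars spent per hour of entertainment."""
    return price_dollars / hours_played

# The figures from the comment above:
print(cost_per_hour(40, 4))  # 10.0 -> $10/hr at 4 hours
print(cost_per_hour(40, 8))  # 5.0  -> $5/hr at 8 hours
```

Anything below the $8/hr movie-ticket baseline counts as a good deal by this metric.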


Thanks, I get what you meant now. I've never liked this comparison because I don't find movie tickets a particularly good deal, but that might just be my upbringing.


I'm not sure what that is supposed to mean - it's an online co-op game, not a story-driven one like the 4-6 hour FPS games that were the norm at one point.

It's the kind of game some people spend thousands of hours in, well worth the $40 to them.


It's also wrong. With 10 hours of Helldivers 2 you haven't seen much of the game at all.

I played it a bit after release and have 230 hours. I liked the game and it was worth my money.


Yeah, I meant "at least" 4-8 hours. Even if you get bored and give up after that, you've gotten your money's worth, in my opinion.

I have almost 270 hours in Helldivers 2 myself. Like any multiplayer game, it can expand to fill whatever amount of time you want to dedicate to it, and there's always something new to learn or try.


I would say that until you are about level 60 there are a bunch of mechanics you won't understand.

> Like any multiplayer game, it can expand to fill whatever amount of time you want to dedicate to it, and there's always something new to learn or try.

Generally at this point I do runs where I go full gas, full stun or full fire builds.


Compared to the bigger gaming studios they are small. In fact they are not that much larger than the company I work for (not a game studio).

The fact that it is un-optimised can be forgiven because the game has plenty of other positives, so people like myself are willing to overlook that.

I've got a few hundred hours in the game (I play for maybe an hour in the evening) and for £35 it was well worth the money.


What does the age of the game in years have to do with anything?

A fun game is a fun game.


Like many things, they shine when used appropriately.


From my experience of building LFS, it appears to need to be built as part of the toolchain.

https://linuxfromscratch.org/lfs/view/development/chapter07/...


I could skip around on tapes relatively well on a Walkman; you just had to remember the counters or roughly how long to rewind/fast-forward. It wasn't that inconvenient, just not as quick as an MP3 player, a CD or Winamp.


This is a go-kart.


BREAKING: Nintendo sues Honda for infringing Mario Kart patent.


UPDATE: They settled out of court for an undisclosed number of giant yellow stars.

