Hacker News | ZeroConcerns's comments

Interesting choice of tourism destination, but quite cool (no pun intended...) regardless.

One of the most annoying things about working with anything metal at those temperatures, is that your tools will pretty much instantly become stuck to whatever it is you're trying to manipulate, making a propane burner an indispensable addition to your toolbox.


Ewan McGregor and his friend Charley Boorman were also in the area, 20-odd years ago, when they did a motorbike trip from London to New York, "The Long Way Round" (crossing Europe, Mongolia, Russia, Canada and the USA): https://youtu.be/6kajsHTy3hA

Man, looking at the map it feels like one of the last wild places on earth. I was wondering if this shipyard is on the Arctic coast, but not really. If it were, it'd be relevant in the near future. As it is, it's connected by a river to the Arctic Ocean, so it's probably booming with business.


> The Long Way Round

It was a neat series, but the start where they whinge about not getting free bikes from their brand of choice was so incredibly entitled and such a turn off.


That's right. I wonder if the decision-makers at KTM regretted that afterwards.

Addendum: consider that the GS has been a bestseller ever since. It feels like every other motorcycle enthusiast in Germany rides one; it has been the best-selling motorcycle almost every year since then. In Italy, many also seem to prefer riding GS bikes over Guzzi/Ducati/Aprilia.


Similar: during the pandemic, Ewan and Charley did an electric bike ride from Argentina to the US, and the support crew were in Rivians along with the Rivian CEO or head of engineering or something, as an extended QA run before full production. It was my introduction to the brand, and it impressed me enough that I think it's the only option I'd look at for an EV.

What I remember from that show is that they plug the Rivian in to charge, and 12 hours later the charge level has increased by some minuscule amount. At some point they have to bring in a gas-powered support truck with a gas-powered generator to charge the various electric vehicles. If anything, it was a commercial against EVs at the time.

It seems that EVs didn't make much sense in the environment of that trip (going through all of South America, where fast chargers were rare at the time)


Oh yeah. It’s very commonly accepted in ADV circles that the GS is THE bike of choice because of Long Way Round.

It could, and probably should have been KTM. The GS is stupidly big and heavy.


They offered KTM a 10-hour advertisement series, which would go on to become a classic for motorcycling enthusiasts worldwide. KTM's response was "eh, no, you could never pull that off, and it will make us look bad". It had nothing to do with the cost of the bikes.

I don’t think it was entitlement, but enthusiasm about brands (hobbyists tend to get that way).

He was coming off the high of being the "star" of the new Star Wars movies -- a main character in the story, but not The main character. I recall watching these on physical DVDs via Netflix in ~2008 and wondering why he seemed (what we now casually call) entitled; I'd been watching the series for 3-4 episodes before it clicked that he was one of the actors from Star Wars, despite my being a long-time Star Wars fan. He was definitely entitled: the blow-up was centered around KTM not being interested in what a Star Wars actor was doing and not taking him seriously. I distinctly recall seeing him cry, or almost cry, on camera.

That said, ignoring that drama, the rest of the series was quite good. When they published "The Long Way Down", from Scotland to South Africa, I jumped on that and watched it as well. Someone else pointed out they did an EV thing from Argentina to... Alaska? with Rivian; I might go look at that too.


And it really wasn't Ewan that was put out about the KTM rejection--he wanted to ride the BMWs. It was Charley Boorman who was pissed; Charley had dreamed of the KTMs for years.

You’re probably right. It seems far more likely that it was left in because they were trying to show brand enthusiasm.

But given that McGregor has millions in the bank, he could have bought 3 KTMs and not even noticed the cost. Instead, they insisted that they would only ride bikes that someone gave them for free. Because the poor millionaire Hollywood actor "I was in a Star Wars movie!" couldn't possibly pay for his pet project out of his own pocket. Oh how unfair, those evil oppressors at KTM!

Interesting observation, and I can relate - today I measured ice thickness with a classic stainless caliper, and -3 Celsius was enough for it to immediately stick to the ice, even though it was barely wet.

Working at such temperatures must be a real hazard to skin; anything metal will stick to it immediately.


Let the metal cool down to the temperature of the ice and try again. The problem isn’t generally that the ice is sticky per se; the problem is that the surface of the ice will melt if something warm enough touches it and then will freeze again and stick.

No propane burners. Propane freezes solid at minus 60°, and you need heaters to get any flow long before it gets that cold - to the point that you can set propane out in a bucket (which I have some experience using) to super-cool transmission shafts, so that they shrink and press-fit bearings slip right on. So yes, they have propane, but they use it in other, less well-known ways.

Propane freezes long before -60C.

The recent cold snap in the Yukon rendered smaller tanks useless just past -35C, with bigger ones not doing much past -40C.

We don’t take it on winter adventures for that reason.


I am not understanding this.

Propane does not freeze anywhere near -60C. Wikipedia [1] says it freezes (liquid to solid) below -187C and boils (liquid to gas) above -42C.

Propane is probably unusable as a fuel below -42C because there is no vapor leaving the tank [not within my experience]. That is different from the propane being a solid.

[1] https://en.wikipedia.org/wiki/Propane Melting point −187.7 °C Boiling point −42.25 to −42.04 °C


Maybe butane?

Butane stops vaporizing at -1C (31F), isobutane at about -10C (14F). Propane's boiling point is even better, at about -40C/-40F, but it self-cools and doesn't develop the required pressure to run a torch.

I know this because my otherwise dependable camp stove is a 3-season affair. For winter camping, you basically need a white gas system (liquid fueled, manually pressurized or gravity fed).
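To put rough numbers on those cutoffs, here's a quick sketch (the boiling points are the standard 1-atm figures; the safety margin for canister self-cooling is my own rough allowance, not a published spec):

```python
# Approximate boiling points (degrees C) at 1 atm. A canister stove needs
# the ambient temperature to sit comfortably above the fuel's boiling
# point so the liquid keeps vaporizing and building pressure.
BOILING_POINT_C = {
    "butane": -0.5,
    "isobutane": -11.7,
    "propane": -42.1,
}

def usable_fuels(ambient_c, margin_c=5.0):
    """Return fuels that should still vaporize at the given temperature.

    The margin accounts for evaporative self-cooling: a working stove
    chills its own fuel well below ambient.
    """
    return sorted(
        fuel for fuel, bp in BOILING_POINT_C.items()
        if ambient_c >= bp + margin_c
    )

print(usable_fuels(-5))    # butane is already out
print(usable_fuels(-20))   # only propane left
```

Which lines up with the 3-season-stove experience: anywhere near freezing, plain butane is done, and deep winter takes propane out too.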

I suppose I'd reach for an acetylene torch in a cold workshop.


You're right. I misinterpreted my little butane torch's apparent high pressure in relation to my big propane torch.

Canned ethane or ethyne ("acetylene") then.


Yeah, actual title is "H-Neurons: On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs"

But regardless of title this is all highly dubious...


That's not media manipulation (social or otherwise). Even if you don't subscribe to the "if it bleeds, it leads" mantra (first commonly used in... the 1890s!), it's not that hard to understand why reporting on uncommon events is more popular than repeating the same baseline truths every day?


I sense a branding opportunity here... We already have "elevator music" to designate generic production-music-slop, so what will the next step be?

"OpenAI anything" seems a bit too much, yet "Claude somesuch" or "Grok horrifying" don't work either.

HN Branding Experts, what do you say?


Of course it’s vibe music!


So, yeah, I guess there's much confusion about what a 'managed database' actually is? Because for me, the table stakes are:

- Backups: the provider will push a full generic disaster-recovery backup of my database to an off-provider location at least daily, without the need for a maintenance window

- Optimization: index maintenance and storage optimization are performed automatically and transparently

- Multi-datacenter failover: my database will remain available even if part(s) of my provider are down, with a minimal data-loss window (like 30 seconds, 5 minutes, or 15 minutes, depending on SLA and thus plan expenditure)

- Point-in-time backups: performed at an SLA-defined granularity and with a similar retention window, allowing me to access snapshots via a custom DSN, without affecting production access or performance in any way

- Slow-query analysis: notifying me of relevant performance bottlenecks before they bring down production

- Storage analysis: my plan allows for #GB of fast storage and #TB of slow storage; let me know when I'm forecast to run out of either in the next 3 billing cycles or so

Because, well, if anyone provides all of that for a monthly fee, the whole "self-hosting" argument goes out of the window quickly, right? And I say that as someone who absolutely adores self-hosting...
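For the storage-analysis point above, the forecasting side can be as dumb as a linear extrapolation. A minimal sketch (function and parameter names are my own invention, not any provider's API):

```python
# Given per-billing-cycle usage samples, extrapolate linearly and warn
# if the plan's quota is forecast to be exhausted within the horizon.

def cycles_until_full(usage_gb, quota_gb):
    """Estimate billing cycles until quota is exhausted.

    usage_gb: per-cycle usage samples, oldest first.
    Returns None if usage is flat or shrinking.
    """
    if len(usage_gb) < 2:
        return None
    growth = (usage_gb[-1] - usage_gb[0]) / (len(usage_gb) - 1)
    if growth <= 0:
        return None
    return (quota_gb - usage_gb[-1]) / growth

def should_warn(usage_gb, quota_gb, horizon_cycles=3):
    remaining = cycles_until_full(usage_gb, quota_gb)
    return remaining is not None and remaining <= horizon_cycles

# 10 GB/cycle growth against a 500 GB plan: ~2 cycles of runway left.
print(should_warn([450, 460, 470, 480], 500))  # True
print(should_warn([100, 110, 120, 130], 500))  # False
```

A real provider would obviously use something smarter than a straight line, but even this catches the "you'll be full in two cycles" case.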


It's even worse when you start finding you're staffing specialized skills. You have the Postgres person, and they're not quite busy enough, but nobody else wants to do what they do. But then you have an issue while they're on vacation, and that's a problem. Now I have a critical service but with a bus factor problem. So now I staff two people who are now not very busy at all. One is a bit ambitious and is tired of being bored. So he's decided we need to implement something new in our Postgres to solve a problem we don't really have. Uh oh, it doesn't work so well, the two spend the next six months trying to work out the kinks with mixed success.


Slack is a necessary component in well functioning systems.


And rental/SaaS models often provide an extremely cost effective alternative to needing to have a lot of slack.

Corollary: rental/SaaS models provide that property in large part because their providers have lots of slack.


Of course! It should be included in the math when comparing in-housing Postgres vs using a managed service.


This would be a strange scenario, because why would you keep these people employed? If someone doesn't want to do the job required, including servicing Postgres, then they wouldn't be with me any longer; I'd find someone who does.


No doubt. Reading this thread leads me to believe that almost no one wants to take responsibility for anything anymore, even hiring the right people. Why even hire someone who isn't going to take responsibility for their work and be part of a team? If an org is worried about the "bus factor" they are probably not hiring the right people and/or the org management has poor team building skills.


Exactly, I just don't understand the grandparent's point, why have a "Postgres person" at all? I hire an engineer who should be able to do it all, no wonder there's been a proliferation of full stack engineers over specialized ones.

And especially having worked in startups, I was expected to do many different things, from fixing infrastructure code one day to writing frontend code the next. If you're in a bigger company, maybe it's understandable to be specialized, but especially if you're at a company with only a few people, you must be willing to do the job, whatever it is.


Because, working now at what used to be startup size, not having an X Person leads to really bad technical-debt problems, as the person handling X was not really skilled enough to be doing so, but it gave the illusion of success. Those technical-debt problems are causing us massive issues now and costing the business real money.


IMO, the reason to self-host your database is latency.

Yes, I'd say backups and analysis are table stakes for hiring it out, and multi-datacenter failover is a relevant nice-to-have. But the reason to do it yourself is that it's literally impossible to get anything as good as you can build yourself on somebody else's computer.


Yup, often orders of magnitude better.


If you set it up right, you can automate all this by self-hosting as well. There is really nothing special about automating backups or multi-region failover.


But then you have to regularly and manually check that these mechanisms work


One thing I learned working in the industry, you have to check them when you're using AWS too.


Really? You're saying RDS backups can't be trusted?


Trusted in what sense, that they'll always work perfectly 100% of the time? No, therefore one must still check them from time to time, and it's really no different when self hosting, again, if you do it correctly.


What are some common ways that RDS backups fail to be restored?


Why are you asking me this? Are you trying to test whether I've actually used RDS before? I'm sure a quick search will find you the answer to your question.


No backup strategy can be blindly trusted. You must verify it, and also test that restores actually work.
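To make the "verify it" part concrete, here's a minimal sketch of one half of it: storing a checksum-plus-row-counts manifest next to each dump, and refusing to trust the backup unless both the bytes and a test restore's row counts match. The manifest format is made up for illustration; the other half, actually restoring into a scratch instance, is deliberately left out.

```python
import hashlib
import json
from pathlib import Path

def _manifest_path(dump_path: Path) -> Path:
    # Keep the manifest right next to the dump it describes.
    return Path(str(dump_path) + ".manifest.json")

def write_manifest(dump_path: Path, row_counts: dict) -> Path:
    # Record a checksum of the dump bytes plus expected table row counts
    # at backup time.
    digest = hashlib.sha256(dump_path.read_bytes()).hexdigest()
    out = _manifest_path(dump_path)
    out.write_text(json.dumps({"sha256": digest, "row_counts": row_counts}))
    return out

def verify_backup(dump_path: Path, restored_counts: dict) -> bool:
    """True only if the dump bytes are intact AND a test restore
    produced the row counts recorded at backup time."""
    manifest = json.loads(_manifest_path(dump_path).read_text())
    digest = hashlib.sha256(dump_path.read_bytes()).hexdigest()
    return (digest == manifest["sha256"]
            and restored_counts == manifest["row_counts"])
```

This applies equally to RDS snapshots and to self-hosted pg_dump files: either way, a backup you've never restored is a hope, not a backup.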


Self-host things the boss won't call at 3 AM about: logs, traces, exceptions, internal apps, analytics. Don't self-host the database or major services.


Depending on your industry, logs can be very serious business.


Yugabyte open source covers a lot of this


Which providers do all of that?


I don't know which don't?

The default I've used on Amazon and GCP both do (RDS, Cloud SQL)


GCP Alloy DB


There should be no data loss window with a hosted database


From what I remember, if AWS loses your data they basically give you some credits and that's it.


That requires synchronous replication, which reduces availability and performance.
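A toy model of that tradeoff (all numbers invented for illustration): with synchronous replication, a commit can't return until enough standbys have acknowledged, roughly what Postgres's synchronous_standby_names controls, so the slowest required ack sets the latency floor, and losing standbys can block writes outright.

```python
# Toy model of commit latency under async vs. synchronous replication.
# num_sync standbys must acknowledge before a synchronous commit
# returns; with num_sync=0 the commit only waits for the local fsync.

def commit_latency_ms(local_ms, standby_acks_ms, num_sync=0):
    if num_sync == 0:
        return local_ms  # fast, but acknowledged data can still be lost
    if len(standby_acks_ms) < num_sync:
        # Not enough live standbys: writes block -- reduced availability.
        raise RuntimeError("cannot satisfy synchronous commit")
    # Commit completes once the fastest num_sync standbys have ack'd.
    fastest_required = sorted(standby_acks_ms)[:num_sync]
    return max(local_ms, fastest_required[-1])  # zero data-loss window

print(commit_latency_ms(2, [15, 40]))               # 2  (async)
print(commit_latency_ms(2, [15, 40], num_sync=1))   # 15
print(commit_latency_ms(2, [15, 40], num_sync=2))   # 40
```

So "no data loss window" is buyable, but the price is paid on every single commit, and in availability when standbys go away.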


Why is that?


Well, Valve got seriously concerned about the Windows Store, like, a decade ago, since that could have reduced the stranglehold of Steam on the gaming marketplace.

Turns out that the usual Microsoft incompetence-and-ADHD have kind-of eliminated that threat all by itself.

Also: turns out that, if you put enough effort into it, Linux is actually a quite-usable gaming platform.

Still: are consumers better off today than in the PS2 era? I sort-of doubt it, but, yeah, alternate universes and everything...


I believe Valve's concerns went (or maybe go?) beyond just the Windows Store, and into "We believe Microsoft may become unable to ship a good Operating System in the future".

In a 2013 interview with Gabe Newell: "Windows 8 was like this giant sadness. It just hurts everybody in the PC business. Rather than everybody being all excited to go buy a new PC, buying new software to run on it, we’ve had a 20+ percent decline in PC sales — it’s like 'holy cow that’s not what the new generation of the operating system is supposed to do.' There’s supposed to be a 40 percent uptake, not a 20 percent decline, so that’s what really scares me. When I started using it I was like 'oh my god...' I find [Windows 8] unusable." [0]

The Windows Store probably was a part of it, sure, but looking back at that quote in 2025, after having your SSD broken, your recovery unusable and your Explorer laggy? It's quite bittersweet.

[0] https://archive.is/eBP6q#selection-3645.0-3645.729


Outside of XBox, Minecraft, and journalists trying it out, I don't think I've heard of anyone using the Microsoft store.

The Wikipedia page has quite the description of the view from within Microsoft:

> Phil Spencer, head of Microsoft's gaming division, has also opined that Microsoft Store "sucks". As a result, Office was removed as an installable app from the store, and made to redirect to its website.

https://en.wikipedia.org/wiki/Microsoft_Store


I actually use the Microsoft Store before looking elsewhere for software. It’s basically a package manager with minimal jank. It’s there on a new install and it works. It sucks that they don’t let you add other sources, though.

Having an app from an exe installer sucks because you have to update it manually, or, it uses resources while you’re using it to check for updates. With the windows store I can update everything at once and don’t need a million individual update checks on startup.



Do you need a Microsoft account to use the store?


Can you use Windows without a Microsoft account?


Yes. At least in 11 Pro installs, you can just say you'll be joining to a domain, create a local admin account and never actually join it. Then to create other users you can do it via the command line, or probably through the GUI after telling it you don't want one a couple of times


Yes? I use my Windows 11 with a local account, no microsoft account involved.


Because games are on the XBox app store, not Microsoft store.


I tried to use Microsoft's Game Pass and the Xbox store on a Windows machine with multiple users.

It was astoundingly unusable for sharing Microsoft's own game within my own household with my own family members. Completely broken user experience.

It's not hard to believe that Steam was able to thrive because Microsoft has just done an amazingly bad job with this. I've been in software dev for 20 years and it still baffles me that companies with tens of thousands of engineers can produce such shitty software experiences.


It's not the engineers at fault here but the C-suite: those who are out of touch or stuck in their own world, believing in their own delusional vision, perhaps based on nothing more than having a toddler who's four.


I'd say engineers are at fault for bugs and performance issues, as well as poor UX (not counting what's made to sell you something or collect your data)


It can be the engineers' issue, sure. Hire the wrong crew and you're sunk. However, and I might be biased, what do you do when the higher-ups don't give you the time and space to fix the bugs and performance issues? No one writes genius code from day one of a project.

I've had to aggressively pitch to execs who totally ignored the fact that $app was vulnerable. It would have resulted in fines, and if we had optimized it, it could have been used to make more money by offering X feature. I was denied because it was "a waste of time and cost" and "didn't provide anything for the company".

After finally persuading them, and getting the classic response of "Oh! Why didn't you say so?", I was fired three weeks later for making the company waste money. This wasn't a small company in an industrial park.

Ever since, I've turned down jobs that smell of toxicity. You can sort of sense a company's stink the moment you enter reception.


> "Also: turns out that, if you put enough effort into it, Linux is actually a quite-usable gaming platform."

Valve is the one putting in the effort and paying for it at their own expense. If they ever lose interest in paying for it, like GabeN retiring and Ebenezer Scrooge replacing him, then it's game over for Linux gaming (literally).


> paying for it at their own expense

valve would recoup the cost from a bigger customer base, as well as paying it as insurance against windows/microsoft targeting them as an existential threat.

It's cheap for what they're getting. And iirc, it being open source means the foundation could be built upon by others if they do decide to call it quits.


That would make very little sense business-wise. Steam “consoles” are a big break not just for Linux but also for Valve. What could easily happen, though, is their locking down the consoles once they get profitable.


"Sense business wise" seems to vary quite a bit nowadays, at least every other day there's a headline of a company on here doing something almost exclusively for short-term value at the detriment to long-term health.


> Well, Valve got seriously concerned about the Windows Store, like, a decade ago, since that could have reduced the stranglehold of Steam on the gaming marketplace

Microsoft telegraphed its intention to kill Steam. The plan was a hermetically sealed ecosystem where only cryptographically signed code could run on Windows computers, from UEFI boot to application launch. This meant users would only run software Microsoft let them, and there was no room for the Steam store in Microsoft's vision of the future then.


Not just usable - it performs better than Windows, too.

https://arstechnica.com/gaming/2025/06/games-run-faster-on-s...


It really depends on your GPU drivers.

If you're using nvidia like 75% of Steam's hardware survey reports, it's a mixed bag and 1% lows are fucking abysmal compared to windows.

But try getting nvidia to care about Linux beyond CUDA. They'd rather just stop selling GPUs to normal people before they do that.


They're in the gradual process of open-sourcing their driver stack by moving the bits they want to keep proprietary into the firmware and hardware, much like AMD did many years ago.

It takes a long time to become mature, but it's a good strategy. NVIDIA GPUs will probably have pretty usable open-source community drivers in 5 years or so.


Why do you link that article but not this one?

https://arstechnica.com/gaming/2015/11/ars-benchmarks-show-s...


Because the one he linked is from this year; the one you linked is from 10 years ago.



And yet, not much has changed in that decade, right? Well, other than the Steam Deck, which is a well-defined set of hardware for a specific purpose, and which is the main driver for Linux game compatibility...

And that's great! But for a random owner of random hardware, the experience is, well... same as it ever was?


The experience on random hardware in 2025 is nowhere close to what is was in 2015. Have you tried it recently? In 2025 I can install pretty much any game from Steam on my Linux desktop with an nvidia gpu and it just works. The experience is identical to Windows.

The 2015 experience was nothing like this, you'd be lucky to get a game running crash-free after lots of manual setup and tweaking. Getting similar performance as Windows was just impossible.


> But for a random owner of random hardware, the experience is, well... same as it ever was?

Far from it... the only area you tend to see much issue with a current Linux distro is a few wifi/bt and ethernet chips that don't have good Linux support. Most hardware works just fine. I've installed Pop on a number of laptops and desktops this past year and only had a couple issues (wifi/bt, and ethernet) in those cases it's either installing a proprietary driver or swapping the card with one that works.

Steam has been pretty great this past year as well, especially since Kernel 6.16, it's just been solid AF. I know people with similar experience with Fedora variants.

I think the Steam Deck's success with Proton, and what that means for Linux all around, is probably responsible for at least half of those who have tried/converted to Linux the past couple of years. By some metrics as much as 3-5% in some markets, which, while small, is still a massive number of people: 3-5 million regular users of desktop Linux in the US alone. That's massive potential. And with the groundwork that has been laid for Flatpak and Proton, there's definitely some opportunity for early movers in more productivity software groups, not just open source.


The difference from 2015 to 2025 is enormous.

Gaming on linux in 2015 was a giant pita and most recent games didn't work properly or didn't work at all through wine.

In 2025 I just buy games on steam blindly because I know they'll work, except for a handful of multiplayer titles that use unsupported kernel level anticheat.


>And yet, not much has changed in that decade, right?

the performance difference between SteamOS and Windows did

>Well, other than the Steam Deck, which is a well-defined set of hardware for a specific purpose, and which is the main driver for Linux game compatibility...

>And that's great! But for a random owner of random hardware, the experience is, well... same as it ever was?

the 2025 ars technica benchmark was performed on a Legion Go S, not on a steam deck


I'm all for MS bashing and laughing at their incompetence, but was there really any threat there? I don't know anyone on PC who was interested in buying a game anywhere other than Steam in 2015.


It was specifically the release of Windows RT (Windows 8 on ARM) in 2012 that had people nervous that Microsoft wanted to lock Windows down long-term in the manner of iOS and Apple. Windows RT only ran code signed by Microsoft and only installed programs from Microsoft's store. It failed, and Microsoft let off the gas locking down Windows, but that moment was probably the specific impetus for Gabe Newell to set Valve on a decade long course of building support for Steam and the games in its storefront on Linux. Windows being locked down to the degree of iOS was an existential risk to Valve as a company and Steam as a platform in 2012. It isn't anymore.

Windows RT also drew ire from people other than Newell at the time IIRC. It was widely perceived as a trial balloon for closing down Windows almost completely. The first Steam Machines a decade ago were Valve's answering trial balloon. Both failed, but Valve learned and Microsoft largely did not... They haven't locked down Windows 11 to the point of Windows RT, but they're abusing their users to the point of potentially sabotaging their own market dominance for consumer PCs.


Yes. We could have had Windows on Arm ten years earlier, but Microsoft tried to use the platform transition as an opportunity for lock-in. Fortunately this meant there were no apps and basically zero take-up of Windows RT.


This was also closely related to their initial plans with the Xbox One to essentially kill used games, which they were about a decade too early in rolling out.


I buy games on GoG when I can, Steam when I have to. I have nothing against Steam, but they do have a near monopoly position on PC. Unfortunately the non-GoG alternatives are from even worse actors.


People feared that MS would make installing things from outside the store harder, like what Apple is doing. It posed a serious potential threat, given that MS had complete control over Windows, DirectX and many other tools developers were using.


Sure, if they pulled Apple and locked everyone into only installing from Microsoft Store, Steam would have been in serious trouble.


They still can be, Microsoft is one of the biggest publishers, and they can lock everything from their studios into XBox app store or Gamepass, if they feel like it.


I never installed Steam, nor do I intend to.


How many games have you bought in 2015?


Enough, at physical computer stores selling those shiny little things called DVDs.

As for how many, that was 10 years ago; I can hardly remember everything I ate last week.


> Well, Valve got seriously concerned about the Windows Store, like, a decade ago...

Yeah, I briefly addressed that concern in the article as a comparison to Facebook; probably could've expanded on it, but it was already quite long and didn't feel like it fit naturally into the topic at hand


That wasn't meant as a criticism, more like some additional context. With how irrelevant the Microsoft Store is these days, I can't blame anyone for skipping over it...


Ah, okay, gotcha!


It is clearly not, as long as it depends on running Windows games, developed on Windows, running on Proton.

It is like arguing Windows is a quite-usable UNIX platform thanks to WSL 2.0.

The right way to push for Linux gaming is how Loki Entertainment was doing it.


I'm impressed they even managed to create a game subscription that works on both PC and Xbox. It felt too much like Xbox was made by a different company than Windows for a long time. Remember Games for Windows Live?


ADHD? What do you mean?


They probably mean that with some projects, Microsoft builds something half-assed and then moves on to something else. Instead of sticking with the project, evaluating it critically, and committing to fixing whatever sucks about it until it's high quality.

One could get that impression from the Windows Store/Microsoft Store. And also the state of the Settings UI for at least the past 13 years - Windows 8 moved a small fraction of Settings to Metro design, but 13 years later there are still some pieces of Windows 7 UI left.

Or the Edge browser fiasco - how can a company as large as Microsoft conclude "eh, I guess we just can't have a browser that works well enough for enough of the web to be competitive, let's just give up and do a Chrome branch"

Or the Kin phone: "we launched this 4 weeks ago and I guess it sucks, let's just pull the plug and never mention this again"

Or Windows features like HomeGroup, libraries, and Windows Home Server - they're around for a few years, then someone decides "we don't really care about this" and dumps them.


LLMs know nothing about Unpoly, and quite a bit about htmx. This requires you to actually learn Unpoly, because, well, even pointing your LLM-of-choice at the Unpoly docs (which are quite okay!) makes it regress-to-the-ugly-Javascript-workarounds-mean pretty much on try #1.

I'm not yet sure whether this is a good thing or not -- I'll let you know once my latest iteration of my web framework is finally working as I envisioned, he-said sort-of-jokingly, which should be Soon Now.

But yeah, either alternative still beats React by a country mile, since everything related to that descends into madness right away.


I don’t think there is anything in unpoly that a good llm couldn’t figure out with a look over the docs pretty quickly. It’s pretty simple and has some great functionality, especially if you are shooting for progressive enhancement.


Well, I actually use Unpoly, and I can assure you that LLMs don't get it, no matter how many pointers to the (excellent!) docs one includes.

Like, even just now, Claude Code with Opus 4-dot-latest, is absolutely convinced you need a bunch of fragile cascading Javascript listeners to dismiss a lower-level menu in case a dialog is opened, while the Unpoly docs, correctly and clearly, point out that 'shatter' exists for just that purpose.

And this is one of the use cases that I continue to highlight as the achilles heel of LLMs. I'm not holding it wrong: they're not reading it right.


Ah, then I defer to your experience. I hope that it improves: Unpoly is an excellent library.


I have the same problem. I guess we will just have to train our own SLM with a carefully selected (unpolluted) training corpus.


The 'recent graduates' quoted in this article all seem to be from (for lack of a better description) 'developing countries' hoping to get a (again, generalizing) 'high-paying FAANG job'.

My initial reaction would be that these people, unfortunately, got scammed, and that the scammers-promising-abundant-high-paying-jobs have now found a convenient scapegoat?

AI has done nothing so far to reduce the backlog of junior developer positions from where I can see, but, yeah, that's all in "Europoor" and "EU residency required" territory, so what do I know...


For the last few decades it's been offshoring that filled the management agenda in the way AI does today, so it doesn't seem surprising to me that the first gap would be in the places you might offshore a testing department to, etc.


Offshoring has the exact same benefits/problems that AI has (i.e: it's cheap, yet you have to specify everything in excruciating detail) and has not been a significant factor in junior hiring, like, ever, in my experience.


My experience is that it is not a reduction in work in the place being offshored, but it changes the shape of the labor market and certainly in the places being offshored to. Replace offshore with something cheaper and a lot of juniors in top offshore locations are the quickest to feel it. Local juniors might be worth hiring again if they need a lot of oversight once agents make them questionably productive.


Currently helping with hiring, and I can't help but reflect on how it has changed over the past couple of years. We are now filtering for much stronger candidates across all experience levels, but the junior side of the scale has been affected much more. Where previously we would take the top 5% of junior applicants that made it past the first phone screen, now it's below 2%.


> AI has done nothing so far to reduce the backlog of junior developer positions from where I can see

Job openings for graduates are significantly down in at least one developed nation: https://www.theguardian.com/money/2025/jun/25/uk-university-...


"This article was amended on 26 June 2025 to clarify that the link between AI and the decline in graduate jobs is something suggested by analysts, rather than documented by statistics"

Plus, that decline seems specious anyway (as in: it's just about visible when you only observe the top 5% of the chart), and the UK job market has always been very different from the EU they left behind.


Am I reading this article correctly: the job market was worse in 2017?

Was AI also responsible for that market? This seems a bit unsupported.


Consider what happened in the UK in 2016.


And, as usual, no mention of the massive shortsighted overhiring during the post-covid bull market.


Again, in my experience, that simply never happened, at least not with regard to junior positions.

During COVID we were struggling to retain good developers that just couldn't deal with the full-remote situation[1], and afterwards, there was a lull in recent graduates.

Again, this is from an EU perspective.

[1] While others absolutely thrived, and, yeah, we left them alone after the pandemic restrictions ended...


Huh. It sounds like your perspective isn't just EU focused but N=1, based solely on your company.

The post-pandemic tech hiring boom was well documented both at the time and retrospectively. Lots of resources on it available with a quick web search.


I never claimed a broad perspective. But I've yet to see a "post-pandemic hiring boom" anywhere in junior-level IT jobs in the EU, and a quick trip to Google with those exact words turned up nothing either.

So, please elaborate?


Anything that prices spammers out of abusing GitHub actions is a win in my book...


Maybe it's a lack of imagination on my part, but how do spammers abuse self-hosted runners?


Form submission spam. Unique/'untraceable' IPs...


How do they abuse self-hosted runners?


Malware in build scripts/dependencies. That's not exclusively credential/crypto-stealers, there's apparently also a healthy demand for various types of spam straight from corpo gateways...


Yes, but they're self-hosted


Well, if Apple were really clever, they'd have introduced an 'EU DMA CAPTCHA' by now, requiring any EU-adjacent resident to mark all the evil EU bureaucrats in a picture of a room before allowing them to resume their doomscrolling.

I mean, it absolutely worked for effectively sinking the GDPR, where pretty much everyone now equates that law with obnoxious 'cookie banners', to the point that these regulations are being relaxed, despite the law never having required those banners in any way, shape or form in the first place.

But, yeah, despite that, I'd say they'll get away with this as well...


> I mean, it absolutely worked for effectively sinking the DMCA, where pretty much everyone now equites that law with obnoxious 'cookie banners', to the point that these regulations are being relaxed.

I don't think DMCA has anything to do with that though I did wish everyone hated it. You probably meant GDPR.


Yes, indeed, thank you! Fixed now, but, well... abbreviation fatigue takes its toll...


Even with the fix that's still not really accurate - though it's a widespread misconception.

https://en.wikipedia.org/wiki/EPrivacy_Directive


The ePrivacy Directive doesn't require those obnoxious banners either


It doesn't require consent for cookies or similar data that are strictly necessary to do what the user has asked for - a token for logging in or the contents of a shopping cart are the two canonical examples.

It certainly does require informed consent in other situations though and the dreaded cookie banners were the industry's attempt to interpret that legal requirement.


No, it's now entirely accurate. Nothing in the GDPR requires 'cookie banners', and your Wikipedia link doesn't 'dispel' that 'misconception', but nice try...


My point is that it was never the GDPR that required any sort of "cookie banner" in the first place.

The cookie banner requirement is itself a widespread misconception because the actual rule is neither specific to cookies (it would also cover other locally stored data) nor universal (for example it doesn't require specific consent for locally storing necessary data like session/login mechanics or the contents of a shopping basket).

The requirements for consent that do exist originate in the ePrivacy Directive. That directive was supposed to be superseded by a later ePrivacy Regulation that would have been lex specialis to the GDPR - possibly the only actual link between any of the EU law around cookies and the GDPR - but in the end that regulation was never passed and it was formally abandoned earlier this year.

So for now rules about user consent for local data storage in the EU - and largely still in the UK - do exist but they derive from the ePrivacy Directive and they are widely misunderstood. And while there has been a lot of talk about changes to EU law that might improve the situation with the banners so far talk is all it has been.

