MattIPv4's comments | Hacker News

Again... Unicorns when trying to view files or PRs, errors trying to leave comments or review things if they do load.

Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9

Edit: Now acknowledging issues across GitHub as a whole, not just PRs.


Status page currently says the only issue is notification delays, but I have been getting a lot of Unicorn pages while trying to access PRs.

Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9

Edit: Now acknowledging issues across GitHub as a whole, not just PRs.


They added the following entry:

Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC

But I only saw it appear a few minutes ago; it wasn't there at 16:10 UTC.


And just now:

Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC


Yeah, I've been seeing a lot of 500 errors myself; latency seems to have spiked too: https://github.onlineornot.com/

I cannot approve PRs because the JSON API is returning HTML error pages. Something is really hosed over there.

Yep, trying to access commit details just returns the unicorn page for me.

git operations are down too.

https://github.com/google-gemini/gemini-cli/issues/16723 is even worse; GitHub shows `5195 remaining items` in the collapsed timeline.


Wow. If you look at all the issues, this seems pretty common:

https://github.com/google-gemini/gemini-cli/issues?q=is%3Ais...


Wow, that's a whole lot of yapping.


Leaked data includes:

  - Name, Discord username, email + other contact details.
  - Limited payment information, including type + last four of cards.
  - IP addresses.
  - Messages and attachments sent via the customer service system.
  - Notably, any government ID provided to Discord.


While I do now have a CS degree, I was hired into a big tech company before I had it, based on my work in open source. I had been (and still am) an active contributor to some large web projects for a few years before the company reached out to hire me (the projects were unrelated to what they do, FWIW).

I did then get my degree in my spare time while at that job, with my salary covering the costs of tuition, which was very nice. I see my degree as a backup, if for any reason an employer has a strict requirement for it or if I leave this field -- I expect my experience to be what gets me jobs as it stands today.


Out of curiosity, did you learn much by pursuing the degree?


No, the content felt incredibly outdated and I can’t say I gained any technical knowledge from it. The only skill it really taught me was how to write effective filler content to boost the word count on something.


PoE is around 15 W at 48 V, PoE+ is 30 W, and PoE++ is 60 or 100 W.


Likely due to a Cloudflare incident: https://www.cloudflarestatus.com/incidents/8m177km4m1c9


We process around 1 million events a day using a queue like this in Postgres, and have processed over 400 million events since the system it's used in went live. The only issue we've had was slow queries due to the table size, as we keep an archive of all the events processed, but scheduled vacuums every so often have kept that under control.


Partial indexes might help.


Also: an ordered index that matches the ordering clause in your job-grabbing query. This is useful if you have lots of pending jobs.


Exactly. A partial index should make things fly here.
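
Something along these lines (a minimal sketch using psycopg2; the `jobs` table and column names are assumptions, not the parent's actual schema):

  import psycopg2

  # Hypothetical schema: jobs(id, payload, created_at, taken_at)
  conn = psycopg2.connect("dbname=app")
  with conn, conn.cursor() as cur:
      # Partial index covering only pending jobs, sorted the same way as
      # the job-grabbing query's ORDER BY, so the planner can walk a small
      # index instead of scanning the full (archive-sized) table.
      cur.execute("""
          CREATE INDEX IF NOT EXISTS jobs_pending_created_idx
              ON jobs (created_at)
              WHERE taken_at IS NULL
      """)

Because the WHERE clause matches the pending filter and created_at matches the ordering, the index stays roughly the size of the backlog rather than the size of the archive.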


An active queue table, then archive jobs to a JobDone table? I do that. The queue table stays small, but the archive goes back many months.


In modern PG you can use a partitioned table for a similar effect.
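
Roughly like this (a sketch, assuming a boolean `done` column for illustration; PG 11+ will move rows between partitions automatically when that column is updated):

  import psycopg2

  conn = psycopg2.connect("dbname=app")
  with conn, conn.cursor() as cur:
      # One logical table, two physical partitions: the pending partition
      # stays small while the done partition accumulates the archive.
      cur.execute("""
          CREATE TABLE jobs (
              id         bigserial,
              payload    jsonb NOT NULL,
              created_at timestamptz NOT NULL DEFAULT now(),
              done       boolean NOT NULL DEFAULT false
          ) PARTITION BY LIST (done)
      """)
      cur.execute("CREATE TABLE jobs_pending PARTITION OF jobs FOR VALUES IN (false)")
      cur.execute("CREATE TABLE jobs_done PARTITION OF jobs FOR VALUES IN (true)")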


We just have a single table, with a column indicating whether the job has been taken by a worker. We could probably get a bit more performance out of it by splitting it into two tables, but it works as it is for now.
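
The grab query for that kind of single-table setup can look something like this (a sketch, not our actual code, and the column names are illustrative; FOR UPDATE SKIP LOCKED is just one common way to let concurrent workers claim jobs without blocking on each other):

  import psycopg2

  conn = psycopg2.connect("dbname=app")

  def grab_job():
      # Atomically claim the oldest untaken job; rows already locked by
      # another worker are skipped rather than waited on.
      with conn, conn.cursor() as cur:
          cur.execute("""
              UPDATE jobs
                 SET taken_at = now()
               WHERE id = (
                     SELECT id
                       FROM jobs
                      WHERE taken_at IS NULL
                      ORDER BY created_at
                      LIMIT 1
                        FOR UPDATE SKIP LOCKED)
              RETURNING id, payload
          """)
          return cur.fetchone()  # None when the queue is empty

This pairs nicely with the partial index mentioned upthread, since the subquery filters on the pending flag and orders by creation time.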


Why is GitHub the one under fire here? Users on GitHub are using GitHub Actions to build CI pipelines, and those pipelines happen to be pulling from GMP. It's not GitHub's problem that users are using its product in a legitimate manner; it seems to me it's GMP's problem that they can't handle traffic for artifacts from CI systems. It is noted the requests are identical, so would a modicum of caching in front of their origin not make this problem go away completely?


CI systems shouldn't have the ability to make network requests at all, honestly.


If all the CI systems in the world went down, it would cool the Earth by 0.001°C.


How would you suggest they install dependencies then?


At the very least, that's an issue that GMP or other projects shouldn't have to worry about. There are many options: you could manually cache things somewhere, or vendor the dependencies in the repo. Or, maybe, in a world that wasn't completely set on wasting every resource possible, there just wouldn't be pointless automatic builds on forks, and those builds wouldn't need to re-download the world and could instead just update incrementally. (Yes, there are nice consequences of always doing fresh builds, but there are also bad ones, as can be seen here, and unfortunately the downsides aren't felt by the initiator.)
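
The "cache it yourself" option can be as small as this (a sketch; the URL, path, and checksum below are placeholders, not real values):

  import hashlib
  import pathlib
  import urllib.request

  # Placeholders: pin the real tarball URL and its published checksum.
  URL = "https://example.org/mirror/gmp-6.3.0.tar.xz"
  SHA256 = "0" * 64
  CACHE = pathlib.Path(".deps-cache/gmp-6.3.0.tar.xz")

  def fetch_once():
      # Only touch the network when the artifact isn't already in the
      # (persisted) cache directory, and verify the checksum either way.
      if not CACHE.exists():
          CACHE.parent.mkdir(parents=True, exist_ok=True)
          urllib.request.urlretrieve(URL, CACHE)
      if hashlib.sha256(CACHE.read_bytes()).hexdigest() != SHA256:
          raise RuntimeError("checksum mismatch; refusing to use the tarball")
      return CACHE

The catch, of course, is that the cache directory has to actually persist between CI runs for this to save anyone any traffic.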


Why should this host (and presumably every similar host) take on the burden of this extra complexity?

Would a modicum of caching in GitHub Actions libraries not make this problem go away for all hosts in this category?


That's fair; I'd agree that caching at either end would fix this. It just strikes me as odd that GitHub, the middleman that's just providing CI runners, is the one under fire.


What GitHub is effectively doing is providing free DDoS hardware, and lots of it, as far as the receiving end is concerned. I don't think GitHub should particularly be "under fire" for this, but it's still not very nice to provide a service that, under legitimate use (never mind illegitimate use!), can send unreasonable amounts of traffic to arbitrary sites.

I think a quite reasonable expectation of GitHub would be an all-of-GitHub-wide rate limit that CI can use for requests to any given site, with jobs failing or delaying once GitHub has exceeded it, and with sites expected to explicitly opt in if they're fine with more than that rate. It would of course very much suck for GitHub CI users who want to pull from sites that haven't opted in, but at least GitHub would stop offering free DDoS services.
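
Mechanically, that could be as simple as a shared token bucket keyed by destination host (a sketch with purely illustrative numbers and names):

  import threading
  import time
  from urllib.parse import urlsplit

  DEFAULT_RATE = 10.0                          # req/s for sites that haven't opted in
  OPTED_IN = {"registry.example.com": 500.0}   # hypothetical opt-in list

  class HostBucket:
      def __init__(self, rate):
          self.rate, self.tokens, self.last = rate, rate, time.monotonic()
          self.lock = threading.Lock()

      def allow(self):
          with self.lock:
              now = time.monotonic()
              # Refill proportionally to elapsed time, capped at one second's burst.
              self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
              self.last = now
              if self.tokens >= 1:
                  self.tokens -= 1
                  return True
              return False   # caller delays or fails the job

  _buckets = {}

  def allow_request(url):
      host = urlsplit(url).hostname or ""
      bucket = _buckets.setdefault(host, HostBucket(OPTED_IN.get(host, DEFAULT_RATE)))
      return bucket.allow()

The real thing would need to be platform-wide rather than per-process, but the shape is the same: destinations that haven't opted in share a modest budget, and opted-in ones get whatever they asked for.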


For the same reason that ISPs tend to come under fire when their customers are using MTAs to deliver large volumes of e-mails to non-consenting recipients.

Are you saying it's not an ISP's problem that spammers are using their product in a legitimate manner, but instead it's up to the recipients to build their own spam fighting resources? Yes, that turned out wonderfully.


Is it the phone companies' fault that people make death threats over the phone? Do we say 'Phone company makes death threats' when that happens, the way the title here does about GitHub?


Death threats? WTF? What metaphor are you using that git clone requests are now suddenly death threats?

Anyway, I don't think your example aligns with the argument you think you're making: https://www.fcc.gov/enforcement/areas/unwanted-communication...

Yes, phone companies definitely can be liable for bulk targeting by their users.


> What metaphor are you using that git clone requests are now suddenly death threats?

The headline and title are saying GitHub DDoSed a crucial open source site, so the metaphor is definitely valid. I did not compare a DDoS to death threats; I took the claim that GitHub DDoSed the open source site and ran a thought experiment to see what would happen if something similar were said about the phone companies.


If it must go out to the internet, a MITM SSL proxy cache on GitHub's egress would help.

The problem is within GH's network; a '90s ISP would have blocked spammy users. GH should at least operate like an ISP if random stuff can execute and reach the 'net.

