
If a report can be generated by an LLM and nobody cares about inaccuracies, why was it ever produced in the first place?


People read the summary to get the actual action items instead of reading the whole case. Now the action plan is littered with random bullet points like “John Smith will add Mohammad to the email thread. Target date: Tuesday, July 20 2025. This will ensure all critical resources are engaged on the outage and reduce business impact.” or whatever, because it’s literally summarizing every email rather than understanding the core of the work that needs doing.


It's not that the reports weren't useful; it's that someone higher up has to justify the expensive enterprise contract they've foisted on everyone else with the vague promise of saving money.

The consumers of the incident report aren't the ones who had any say in adopting LLMs, so they're stuck with the extra uncertainty.


OP mentions it directly in the post: they were "heavily encouraged" and then "they met their AI metrics".

Now, this is wrong on so many levels, but that's a different discussion.


Perverse incentives.


Cargo culting.


It will be funny when one of those reports says certain steps will be taken to make sure the same incident doesn't occur again, nobody reads the report so nobody notices, and then when the same incident does occur one of the clients sues.



