Sure, you can ignore security if all you're doing is processing local text files. But code looking horrible to programmers isn't just about security bugs; it's about the whole span of correctness bugs. And scientists need to write code that is both correct and maintainable. The frequency with which they don't is part of why results so often can't be replicated, which wastes much of the money spent on academia.
The idea that science code doesn't need to be maintainable, or that scientists have some magic way of making code that looks wrong still work, isn't right either. It's not uncommon to find model "codes" that scientists have been hacking on for decades. The results became untrustworthy many years earlier, but the authors deny, obfuscate, or ignore concrete problems, and attack or even sue the people who point them out. Sadly, this often happens with the acquiescence of the media, who are supposed to be ferreting out cover-ups.
Scientists need to collectively get a grip on this situation. They will happily dismiss anyone outside their institutions as a non-expert conspiracy theorist, but when it comes to software they suddenly know everything and don't need to hire professionals. Paper-invalidating bugs are constantly being covered up, and the only reason the problem hasn't reached criticality yet is that many people don't want to hear about it. But the unreliability of academic output is now becoming a political problem and a divisive culture-war issue, when it really shouldn't be. A good first step toward solving the replication crisis would be for scientists to stop pretending it's OK to quickly knock together a program themselves instead of assigning a ticket to a trained, full-time SWE. Yes, it would cost more (a lot more), and that's OK. Generate fewer papers, but get them right!