> we should measure whether in fact it does more good than harm or not
The demonstrable harms include assisting suicide; there is no way to ethically continue the measurement, because continuing the measurements in their current form will, with certainty, result in further deaths.
Thank you! On top of that, it's hard to measure "potential suicides averted," and comparing that with "actual suicides caused or assisted" would be incommensurable anyway.
And working to set a threshold for how many deaths we'd consider acceptable? No thanks.
If you pull the lever, some people on this track will die (by suicide). If you don't pull the lever, some people will still die from suicide. By not pulling the lever, and simply banning discussion of suicide entirely, your company gets to avoid a huge PR disaster, and you get more money because line go up. If you pull the lever and let people talk about suicide on your platform, you may prevent some suicides, but you can never discuss that with the press, your company gets bad PR, and everyone will believe you're a murderer. Plus, line go down and you make less money while other companies make money off of selling AI therapy apps.
...but if you pull the lever and let people talk about suicide on your platform, your platform will actively contribute to some unknowable number of suicides.
There is, at this time, no way to determine how that number would compare to the number it would prevent.