> The “inclusiveness checker” highlights phrases or words the firm believes could offend someone based on their gender, age, ethnicity or sexual orientation.
I don’t think words can intrinsically have the property of “being offensive”; it can only be that particular words offend particular people. Assume Microsoft had self-reports from every human about which words offend them. How would they turn that data into The Bad Word List? Would they just count the number of people who find each word offensive and put the top N most-reported words on the list?
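As a thought experiment, that naive aggregation is trivial to sketch. Everything here is hypothetical: the reports, the cutoff N, and the idea that Microsoft does anything like this.

```python
from collections import Counter

# Hypothetical self-reports: each person lists the words that offend them.
reports = [
    ["foo", "bar"],
    ["bar"],
    ["bar", "baz"],
]

def bad_word_list(reports, n):
    """Count how many people report each word, keep the N most reported."""
    counts = Counter(word for person in reports for word in person)
    return [word for word, _ in counts.most_common(n)]

print(bad_word_list(reports, 2))  # ['bar', 'foo'] -- ties broken by first appearance
```

Even this toy version shows the problem: N is arbitrary, and a word that deeply offends a small group may never make the list.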
Would I use the algorithm if it told me which words offend the particular recipient of my email or Slack message? Usually when someone reproaches me for saying “something offensive”, they are defending some hypothetical other who, they say, would be offended, rather than reporting that they themselves are offended. So maybe I want the algorithm to tell me which words my email recipient thinks offend other people. Or I’ll just say whatever the fuck I want and deal with the consequences.
Coming soon to Teams: a bias score for contacts. Your score will be reduced for disabling features of Word. Further, when you ignore a suggestion your score will suffer a penalty for violating the "Verbal Morality Statute". Blocking users with low scores is optional, for now.
I'm not a native speaker. I need a linter, not just a spell checker. I need a hint when I mix UK and US spelling and expressions, or when I use overly formal grammar and outmoded expressions.
To me, inclusivity is just one more hint to add to the linter. You can choose to ignore it, as I do with other hints from Pyflakes or JSLint. The Airbnb JavaScript style rules made me a better programmer; perhaps a similar tool could make me a better writer.
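For illustration only, a mixed-UK/US-spelling hint could be a lint rule over two word lists (the word pairs and warning code here are made up):

```python
import re

# Hypothetical word lists; a real linter would ship much larger ones.
UK_WORDS = {"colour", "organise", "analyse"}
US_WORDS = {"color", "organize", "analyze"}

def mixed_spelling_hint(text):
    """Warn when a text mixes UK and US spellings, like a linter warning."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    uk, us = words & UK_WORDS, words & US_WORDS
    if uk and us:
        return f"W001 mixed UK/US spelling: {sorted(uk)} vs {sorted(us)}"
    return None

print(mixed_spelling_hint("Please organise the color palette."))
# W001 mixed UK/US spelling: ['organise'] vs ['color']
```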
Whether or not you enjoy this aspect of our modern age, it's fair for a modern tool to offer features suited to it. Deliberately excluding a useful feature is as much of a statement as including it.
Next, I'd like to see more tools that encourage plain, unambiguous writing (like Hemingway) and rid us of the notion that business writing should sound fancy and elaborate. If more communication had the tone of the NHS website, the world would be a better place.