> it's not true that it's designed to propagate misinformation
That was hyperbole on OP’s part.
The algorithm is designed to incentivise the creation and dissemination of attention-grabbing content without regard for truthfulness. That simply and directly incentivises the propagation of misinformation. The fact that politically disinterested troll groups are routinely incentivised to produce misinformation purely for clicks is Exhibit A for how deeply Facebook promotes these mechanisms.
Facebook and its employees turn a blind eye to these second-order effects because they are massively profitable. That’s as close to “designed to” as one can get without literally coding for it.
What Facebook's system does by design is propagate and amplify simplistic, emotionally potent narratives. These tend to be fictional, i.e. misinformation, because reality is nuanced and boring. Further, this has been exploited by bad actors to seed social discontent and influence democratic elections. Facebook knows exactly what is happening on its system and by whom.
The algorithm is driven by engagement, which is driven by misinformation, because that's what people click on more. So more misinformation gets written to feed that demand. Clearly a lot of money is involved.
I think it's possible. I know for certain there are techniques to identify "click-baity" text snippets. My partner is working on such a data model right now, and she has already had some reasonable success. Imagine what a company that hires several tens or even hundreds of data scientists can achieve.
From there on, it's just a matter of highlighting these successful matches constantly in your feeds.
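To make "identifying click-baity snippets" concrete, here's a minimal sketch of form-based scoring. The specific features (exclamation points, shouted words, trigger phrases) and weights are my own illustrative assumptions, not anyone's actual model; a real data model would learn these from labeled examples.

```python
# Illustrative surface-form clickbait scorer. Features and weights are
# hand-picked assumptions for demonstration, not a trained model.

TRIGGER_PHRASES = ["you won't believe", "shocking", "doctors hate", "what happened next"]

def clickbait_score(title: str) -> float:
    """Score a headline on surface form alone; higher = more click-baity."""
    score = 0.0
    score += title.count("!") * 1.0  # exclamation points
    # Words written entirely in capitals ("SHOUTED" words)
    score += sum(1.0 for w in title.split() if w.isupper() and len(w) > 1)
    lowered = title.lower()
    score += sum(2.0 for p in TRIGGER_PHRASES if p in lowered)  # known bait phrases
    return score

bait = clickbait_score("SHOCKING: You won't believe what happened next!!!")
plain = clickbait_score("Quarterly earnings report released")
assert bait > plain
```

With a scorer like this, "highlighting successful matches in your feeds" is just ranking items by score before serving them.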
If the approach were content-neutral - i.e. relying purely on the form of the information (e.g. misspellings in the title, many exclamation points, and such) to flag what is more likely to be fake - then you'd get an arms race: misinformation purveyors learn and subvert the algorithm, the misinformation identifier incorporates the new forms, and so on. In the meantime, purveyors of true information would also need to be aware of the algorithm so as not to be falsely labeled. Think of the SEO arms race for an example.
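The arms-race dynamic can be sketched in a few lines. Everything here is a toy assumption (the detection rules, the evasion move, the headline), purely to show the cycle: the detector keys on surface features, the purveyor rewrites to dodge them, and the detector must be patched.

```python
# Toy arms-race demo: form-based detector vs. an adapting purveyor.
# All rules are illustrative assumptions, not any real platform's system.

def detector_v1(title: str) -> bool:
    """Flag headlines with exclamation points or shouted (ALL-CAPS) words."""
    return "!" in title or any(w.isupper() and len(w) > 1 for w in title.split())

def evade(title: str) -> str:
    """Purveyor's counter-move: strip exactly the features v1 keys on."""
    return title.replace("!", ".").title()  # no '!', no ALL-CAPS words

bait = "SHOCKING cure doctors HATE!!!"
assert detector_v1(bait)            # v1 catches the original form...
laundered = evade(bait)
assert not detector_v1(laundered)   # ...but not the rewritten form

# So the detector must be updated to cover the new form, and the cycle repeats.
def detector_v2(title: str) -> bool:
    return detector_v1(title) or "cure doctors hate" in title.lower()

assert detector_v2(laundered)
```

Note that `detector_v2` now also risks flagging legitimate headlines that happen to match the new rule, which is exactly why honest publishers end up having to track the algorithm too.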
If instead the misinformation identifier uses the content of the information to label misinformation, then the identifier itself is as open to bias and opinion as anyone else's judgment.