There is an interesting ethics discussion here, because FB could, for example, tune their algorithms to reduce polarization: instead of showing users with extreme views content that validates those views, it could surface more moderate content and nudge them toward a more moderate, fact-based, scientifically sound, and politically correct view.
But would that be ethical? Who would get to decide what "the correct view" is?
Right now FB is doing harm precisely by being somewhat fair: it shows people what "they want to see", so it is users' own behavior, not a third party, that tells the algorithms what they want, even if what they want is wrong or toxic.