
Why Facebook’s political-ad ban is taking on the wrong problem

The bigger problem 

But the real power of online political advertising has been in sowing discord. Social networks run content-recommendation algorithms that are known to push people into echo chambers of narrow information and that have at times been gamed by powerful actors. Political messages delivered this way rarely persuade voters to switch positions; they are far more effective at fragmenting public opinion, reinforcing the beliefs of already-decided voters and often pushing them toward more extreme positions than before. That means the ads being banned, the ones from the campaigns, are not what is reshaping democracy; it is the recommendation algorithms themselves that increase polarization and erode the civility of the electorate. 

Sam Woolley, the project director for propaganda research at the Center for Media Engagement at the University of Texas, says that while he’s “glad that Facebook is making moves to get rid of political ads,” he wonders “to what extent the social-media firms are going to continue to take small steps when they really need to be addressing a problem that is ecosystem-wide.”  

“Political ads are just the tip of the iceberg,” he says. “Social media has horrendously exacerbated polarization and splintering because it has allowed people to become more siloed and less civil because they’re not engaging as much in face-to-face communications, because they’re behind a wall of anonymity and because they don’t really see consequences for the things they do.” These algorithms may seem mathematical and objective, but Woolley says the system is “incredibly subjective,” with many human decisions behind how and why particular content gets recommended. 

So Facebook’s ban ahead of November 3 won’t do much to change voter behavior. Indeed, since Facebook’s algorithms give more weight to posts