
Popping Information Bubbles


News, opinion, and even science are increasingly exchanged within echo chambers that can distort objective facts. Emerging liberal and conservative information ecosystems reshape reality to support their dominant narratives; in other words, propaganda.


This split reality affects more than politics; it also shapes public health, safety, and security. That became clear in 2020, as differing narratives about COVID-19 influenced behaviors like mask wearing, competing narratives about the presidential election undermined public trust in democracy, and opposing narratives about social unrest conjured dueling bugaboos such as Antifa versus the Proud Boys.


Information bubbles also affect cybersecurity. For example, Russian interference in the 2016 election was viewed differently in Red America than in Blue America. More recently, news about Russian hacking of US agencies and companies (SolarWinds attack) was similarly polarized. Our adversaries encourage these splits and exploit them to weaken America’s response.


Social networks aren’t the only cause of this problem, but platforms like Facebook and Twitter do foster so-called filter bubbles because their algorithms elevate content and friends the user already agrees with. Each time we tap the “Like” button, we teach the algorithm our preferences and biases, so the system quickly learns which echo chambers we belong to. The result is self-sorting. Newer platforms like Parler explicitly cater to a single ecosystem, reinforcing bubbles even more.


Of course, the problem predates social networks. Cable television, with its focus on narrowcasting, builds small but loyal audiences around niche channels. Fox News exploited this to create a conservative news outlet, but it was soon joined by the liberal MSNBC. More neutral news programs on the old broadcast networks have lost viewers because opinionated news is more entertaining and caters to viewer bias.


Whether online or on TV, it feels good to have authority figures confirm what we already wish to believe. We feel validated. Opposing viewpoints can feel threatening. But the only way out of America’s polarization predicament is to listen to opposing points of view, even if we don’t like them. We have to eat our vegetables along with our red meat.


Societal ills can’t be fixed through technology alone. But since this is a tech column, here are half a dozen technology changes that could reduce media polarization:


1. Automate fact checking and post moderation. Today social networks rely heavily on humans to report objectionable content, moderate posts, and label those with “disputed” facts. But humans can’t keep up with the volume or velocity of large networks. Much of this work could be automated by teaching an AI to watch human moderators and learn the relevant patterns. Such an AI wouldn’t replace humans, but it could act as a force multiplier, accelerating and amplifying their work to keep disinformation from going viral.
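As a rough sketch of how that could work, here is a Python example (using scikit-learn) that trains a simple classifier on past human moderation decisions and uses it to triage new posts. The sample posts, labels, and scoring cutoff are hypothetical placeholders, not a production design.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Posts that human moderators already reviewed (1 = flagged, 0 = approved)
posts = [
    "miracle cure the doctors don't want you to see",
    "city council budget meeting moved to tuesday",
    "leaked video proves the vote was rigged",
    "local team wins season opener",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer()
model = LogisticRegression().fit(vectorizer.fit_transform(posts), labels)

def triage(post: str) -> str:
    """Score an incoming post and route it before it can spread."""
    score = model.predict_proba(vectorizer.transform([post]))[0, 1]
    if score > 0.8:                       # hypothetical cutoff
        return "attach 'disputed' label and queue for human review"
    return "publish normally"
```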


2. Balance posts that promote a biased narrative, perhaps by focusing on posts that go viral within one information bubble but don’t cross over much to other bubbles. Pair them with posts that offer an opposing point of view, as a sort of Fairness Doctrine.
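One way to spot such posts is to measure how lopsided a post’s sharing is across bubbles. The sketch below assumes each sharing account has already been assigned a bubble label; the labels, counts, and 90% skew threshold are purely illustrative.

```python
from collections import Counter

def bubble_skew(share_bubbles: list[str]) -> tuple[str, float]:
    """Given the bubble label of each account that shared a post,
    return the dominant bubble and its fraction of all shares."""
    counts = Counter(share_bubbles)
    bubble, top = counts.most_common(1)[0]
    return bubble, top / len(share_bubbles)

shares = ["red"] * 9500 + ["blue"] * 300 + ["other"] * 200
bubble, skew = bubble_skew(shares)
if skew > 0.9:   # viral almost entirely inside one ecosystem
    print(f"pair with a counterpoint from outside the '{bubble}' bubble")
```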


3. Perform contact tracing. When biased or false information goes viral, automatically follow sharing chains back to the original source. This is analogous to contact tracing in an epidemic to identify “patient zero.” Once a source is known, monitor it to ensure it’s not a bot, foreign adversary, or unscrupulous propagandist.
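The tracing step itself is simple graph-walking if the platform records which post each reshare came from. A minimal sketch, assuming a hypothetical parent_of mapping from each reshare to its source post:

```python
def trace_to_source(post_id: str, parent_of: dict[str, str]) -> str:
    """Follow reshare links back until we reach a post with no parent."""
    seen = set()
    while post_id in parent_of and post_id not in seen:
        seen.add(post_id)              # guard against cycles in bad data
        post_id = parent_of[post_id]
    return post_id

# parent_of maps each reshare to the post it was copied from
parent_of = {"p4": "p3", "p3": "p2", "p2": "p1"}
print(trace_to_source("p4", parent_of))   # -> p1, the account to vet
```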


4. Mix opposing voices. If content in someone’s news feed is largely biased and mostly comes from sources in one bubble, sprinkle in content from other sources or people. We all need to hear from diverse perspectives, including some we don’t agree with.
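A feed-diversity rule could be as simple as the following sketch, which checks whether one bubble dominates a user’s feed and, if so, interleaves items from an outside pool. The 80% dominance threshold and one-in-four mixing rate are hypothetical policy knobs a platform would tune.

```python
def diversify(feed: list[dict], outside_pool: list[dict]) -> list[dict]:
    """Each item is a dict like {"text": ..., "bubble": ...}."""
    bubbles = [item["bubble"] for item in feed]
    dominant = max(set(bubbles), key=bubbles.count)
    if bubbles.count(dominant) / len(feed) < 0.8:
        return feed                            # already reasonably mixed
    mixed = []
    for i, item in enumerate(feed):
        mixed.append(item)
        if (i + 1) % 4 == 0 and outside_pool:  # every 4th slot...
            mixed.append(outside_pool.pop(0))  # ...add an outside voice
    return mixed
```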


5. Lock down APIs and thwart bots. The ability to inject automated posts into social networks should be restricted, throttled at higher volumes, and monitored for content. Those who abuse APIs should lose access to them. Bots should be banned.
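Throttling at higher volumes is a classic rate-limiting problem. Here is a minimal token-bucket sketch; the burst capacity and sustained rate are hypothetical policy choices, not values any platform actually uses.

```python
import time

class PostThrottle:
    """Token-bucket limiter for automated posting through an API key."""

    def __init__(self, capacity: int = 10, refill_per_sec: float = 0.1):
        self.capacity = capacity            # burst allowance
        self.tokens = float(capacity)
        self.rate = refill_per_sec          # sustained posts per second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # deny the post; repeated denials could flag the key
```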


6. Regulate persuasion tech. Bruce Schneier and Alicia Wanless wrote a good article about this. Persuasion technology analyzes a user’s behavior in order to personalize messages designed to persuade that user to adopt a belief or narrative. These technologies began as targeted advertising tools but evolved into propaganda weapons. Their use should be regulated, primarily by giving away less of users’ private personal information.
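On the “give away less” point, one concrete mechanism is data minimization: reduce a user profile to a coarse allowlist before any targeting system sees it. The field names in this sketch are hypothetical.

```python
ALLOWED_TARGETING_FIELDS = {"language", "country", "broad_age_band"}

def minimize(profile: dict) -> dict:
    """Drop sensitive attributes (politics, health, precise location)
    so persuasion tools can't micro-target them."""
    return {k: v for k, v in profile.items()
            if k in ALLOWED_TARGETING_FIELDS}

profile = {"language": "en", "country": "US", "broad_age_band": "35-44",
           "party_affiliation": "redacted", "gps_history": "redacted"}
print(minimize(profile))   # only the coarse, allowlisted fields survive
```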


These are just technology tweaks. The most important thing we need is education: teaching people from a young age how to think critically, detect misinformation, and recognize their own biases.


Popping information bubbles won’t magically end our political or cultural disagreements, but it would restore some balance and civility, and make us less vulnerable to manipulation by our enemies.

 

Michael McCormick is an information security consultant, researcher, and founder of Taproot Security.

