Social Media Users Should Guard Against Unwanted Persuasion

July 6, 2023

By Marilyn Broggi, PhD Candidate in Mass Communication at the University of Georgia, and member of the Steamboat Institute's Emerging Leaders Council.

News about the Twitter Files, Hunter Biden's laptop, and Twitter's content restrictions in Turkey has highlighted issues related to content moderation on social media. Since average users cannot directly control content moderation on social media platforms, I argue consumers should protect themselves against unwanted persuasion no matter what a platform decides to do with content. To do this, it is important to consider a few known ways information has historically spread on social media apart from content moderation: (1) in nodes, or groups, of users; (2) based on a user's preferences; and (3) through native advertising placements.

Information on social media has been shown to exist in nodes, where topics are primarily discussed within groups of users. NodeXL, a social network analysis and visualization software, can depict these nodes, and it offers a gallery where users can upload their data visualizations. Browsing these network visualizations shows how certain topics arguably produce echo chambers of information.
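To make the idea concrete, here is a minimal sketch of community detection in a conversation graph. It is my illustration, not the author's method: it uses Python's networkx library rather than NodeXL, and every username and connection is invented.

```python
# Toy illustration of finding "nodes" (clusters) of users in a
# conversation graph. Usernames and edges are invented.
import networkx as nx
from networkx.algorithms import community

# Each edge represents two users replying to or mentioning each other.
G = nx.Graph([
    ("ana", "ben"), ("ana", "cal"), ("ben", "cal"),  # tight cluster 1
    ("dee", "eli"), ("dee", "fay"), ("eli", "fay"),  # tight cluster 2
    ("cal", "dee"),                                  # one weak bridge
])

# Greedy modularity maximization groups densely connected users.
clusters = community.greedy_modularity_communities(G)
for i, users in enumerate(clusters, start=1):
    print(f"cluster {i}: {sorted(users)}")
```

Mostly self-contained clusters like these, joined by only a few bridges, are the structural signature of the echo chambers those visualizations reveal.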
Additionally, information can be spread to users based on their interests. TikTok, for instance, curates a personalized feed for each user from signals it groups into "user interactions," "video information," and "device and account settings," according to its post "How TikTok recommends videos #ForYou."
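As a rough illustration of how such signals could be combined, the sketch below scores candidate videos with a weighted sum. The signal names and weights are my invention; TikTok's actual ranking system is proprietary and not described by the source.

```python
# Hypothetical feed-ranking score. Signals and weights are invented
# for illustration; the real recommendation system is proprietary.
from dataclasses import dataclass

@dataclass
class Signals:
    watched_to_end: float  # user interactions (0.0-1.0)
    liked: float           # user interactions (0 or 1)
    topic_match: float     # video information vs. inferred interests
    language_match: float  # device and account settings

WEIGHTS = {"watched_to_end": 3.0, "liked": 2.0,
           "topic_match": 1.5, "language_match": 0.5}

def relevance(s: Signals) -> float:
    """Weighted sum of signals: higher scores surface sooner in the feed."""
    return (WEIGHTS["watched_to_end"] * s.watched_to_end
            + WEIGHTS["liked"] * s.liked
            + WEIGHTS["topic_match"] * s.topic_match
            + WEIGHTS["language_match"] * s.language_match)

print(relevance(Signals(0.9, 1, 0.8, 1.0)))  # content the feed favors
print(relevance(Signals(0.1, 0, 0.2, 1.0)))  # content it deprioritizes
```

Whatever the real weights are, a feed optimized this way keeps serving content a user already engages with, which is precisely the comfort this piece warns about.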
Users can also be targeted with information through posts that are native advertising placements. Native ads on social media covertly blend with the format of the surrounding content in a user's feed (Wojdynski & Evans, 2020). A social media native ad could be a post paid for by a brand or advertiser that is formatted like a regular post and includes an advertising disclosure, such as "Sponsored." Social media influencer posts about paid brand sponsorships that use disclosures like "#ad" are also considered native ads (Evans et al., 2017).
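Since those disclosures are often the only visible cue, one practical habit is to scan for them deliberately. The toy scanner below is my own example; its label list is illustrative, not exhaustive, and "TrailGear Co." is a made-up brand.

```python
# Toy disclosure scanner. The label list is illustrative only;
# platforms and regulators recognize many more variants.
import re

DISCLOSURE_LABELS = ["#ad", "#sponsored", "sponsored", "paid partnership"]

def looks_sponsored(post_text: str) -> bool:
    """Return True if the post carries a common advertising disclosure."""
    text = post_text.lower()
    return any(re.search(rf"(?<!\w){re.escape(label)}(?!\w)", text)
               for label in DISCLOSURE_LABELS)

posts = [
    "Loving my new running shoes! #ad",
    "Morning run along the river. No better way to start the day.",
    "Paid partnership with TrailGear Co.",  # hypothetical brand
]
for post in posts:
    print(looks_sponsored(post), "-", post)
```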
I argue that echo chambers, customized feeds, and covertly formatted ads can reduce a user's psychological guard against persuasion attempts. This argument is rooted in the Persuasion Knowledge Model (Friestad & Wright, 1994), which holds that people guard against persuasion attempts when they know someone is working to persuade them of something. Could users be so comfortable with the information they are viewing that they let their guard down? With their guard down, users may passively process persuasive content through the same lens as entertainment content. Advertising research has shown this when advertising content is covertly disguised as entertainment content (Wojdynski & Evans, 2020). Research also suggests that awareness of the persuasive nature of content, specifically recognizing content as advertising, can influence a user's attitudes, perceptions, and behaviors (Wojdynski & Evans, 2020). Thus, I believe users should keep their guard up to protect against unwanted persuasion. Here's how:

- Pay attention.
- View social media in the morning or in smaller increments of time.
- Intentionally seek information in multiple ways.
- Know the policies of each social media platform to understand how content decisions are or are not made.