
Catch Up: Legal Approaches to the Risks of Digital Health Misinformation


By Kemarah Thermidor '27


Part II of III in an analysis of current legislation regarding health misinformation: To what extent should legal consequences for health misinformation extend to health misinformation online?


An expectation of legal consequences surrounding health misinformation, and of protecting the U.S. population against it, has been established, but technological advancements and cultural shifts demand further expansion and clarification. More pressingly, misinformation runs rampant online, with various studies and surveys revealing alarming figures about how people treat serious illnesses. For instance, a 2013 study found that 68% of apps aimed at the general public contained scientifically invalid information about cancer, and in videos on unproven stem cell treatments, 91% of patients alleged health improvements. (1) As of 2025, 96% of U.S. adults report using the internet, meaning a majority of the U.S. population has very likely encountered, and may have believed, online misinformation. (2) Considering technological advancements and vague legislation regarding health misinformation, it is critical to examine what, specifically, legal consequences for online health misinformation might entail.


The legal field and online platforms, specifically social media applications, are not strangers to each other. Murthy v. Missouri placed the federal government under scrutiny for alleged speech censorship. (3) Louisiana, Missouri, and five social media users alleged that federal officials coerced social media companies into censoring the plaintiffs’ content, or “speech,” which they argued violated the First Amendment right to freedom of speech. (4) The plaintiffs’ arguments suggest that the federal government has the capacity to pressure social media platforms in the first place and that its influence over what content is allowed on those platforms is unconstitutional. The Court ruled in favor of the government officials on the ground that the plaintiffs failed to prove that governmental influence caused their content to be restricted and could not show a risk of future injury from the defendants’ actions. (5) Context is essential for this case, as its outcome bears directly on the extent to which legal consequences for health misinformation reach online misinformation.


In response to the COVID-19 outbreak in 2020, multiple social media platforms implemented policies designed to dissuade users from spreading false or misleading information. (6) These policies were further reinforced during the 2020 election. (7) The plaintiffs in Murthy alleged that their content relating to COVID-19 and the 2020 election was restricted. They sued specific government officials based on those officials’ earlier public appeals for platforms to go a step further and target vaccine misinformation, along with COVID-19 and election-related misinformation, in their content moderation policies. (8) The Court held that it was improbable the content restrictions could be traced to governmental pressure given the platforms’ existing content moderation policies; the platforms’ actions aligned with their own policies and did not suggest external pressure. Arguably, the most notable element of Murthy is the question of the extent to which federal agencies such as the Centers for Disease Control and Prevention or the Federal Bureau of Investigation can speak to the content that is permissible and spread on certain platforms, regardless of accuracy.


Content moderation policies were a response to the potential harm brought about by misinformation. With social media’s ability to disseminate information to billions of users in a matter of seconds, social media companies bear the responsibility and power to manage the distribution of users’ content. (9) However controversial, concerns regarding platforms’ content moderation practices offer a unique vantage point for understanding how far freedom of speech and expression extends when lives may be at risk, as in the case of health misinformation. Congress has sought to address this concern by introducing legislation that incentivizes platforms to moderate and prevent harmful, misinformed, or “other objectionable” content. (10) This not only aligns with the previously established conclusion (see Part I of III) that the U.S. Constitution creates an expectation of legal priority for public health, but also leaves room to account for the role that online platforms play as a source for the distribution of misinformation.


If social media companies and Congress can acknowledge the obligation and capability these platforms have to mitigate online misinformation, especially health misinformation, should the harmful outcomes of its spread become the burden of these social media companies? 18 U.S.C. § 35 requires the Knowledge Criterion (knowing the information to be false) and the Malice Criterion (willful or reckless disregard for the safety of human life) to be present for legal consequences for the spread of misinformation to apply. (11) Social media platforms have proven to be a potential source of public exposure and harm from health misinformation, to the point where companies have been legally incentivized to implement content moderation policies and practices to protect the general public. Therefore, it would seem most plausible to maintain that social media companies should be legally obligated and responsible for the outcomes of health misinformation spread on their platforms, but this “solution” requires more nuance.


Endnotes 

  1. Briony Swire-Thompson & David Lazer, Public Health and Online Misinformation: Challenges and Recommendations, 41 Annual Review of Public Health 433 (2020).

  2. Pew Research Center, Internet/Broadband Fact Sheet, Pew Research Center: Internet, Science & Tech (2024), https://www.pewresearch.org/internet/fact-sheet/internet-broadband/.

  3. Murthy v. Missouri, Oyez, https://www.oyez.org/cases/2023/23-411.

  4. Murthy v. Missouri, 603 U.S. ___ (2024), Justia Law, https://supreme.justia.com/cases/federal/us/603/23-411/.

  5. Ibid.

  6. Ibid.

  7. Ibid.

  8. Ibid.

  9. Clare Y. Cho & Ling Zhu, Social Media: Content Dissemination and Moderation Practices, Congress.gov (2025), https://www.congress.gov/crs-product/R46662.

  10. Ibid. 

  11. Cornell Law School, 18 U.S. Code § 35 - Imparting or conveying false information, LII / Legal Information Institute, https://www.law.cornell.edu/uscode/text/18/35.

 
 
 

