Forty-one states and the District of Columbia are suing Meta, alleging that the tech giant harms children by building addictive features into Instagram and Facebook. Tuesday’s legal actions represent the most significant effort by state enforcers to rein in the impact of social media on children’s mental health.
The barrage of lawsuits is the culmination of a sprawling 2021 investigation into claims that Meta contributes to mental health issues among young people.
“Our bipartisan investigation has arrived at a solemn conclusion: Meta has been harming our children and teens, cultivating addiction to boost corporate profits,” California Attorney General Rob Bonta (D), one of the officials leading the effort, said in a statement.
Thirty-three states, including Colorado and California, are filing a joint lawsuit in federal court in the Northern District of California, while attorneys general for D.C. and eight states are filing separate complaints in federal, state or local courts.
While the exact scope of the legal claims may vary, they are expected to paint a similar picture: The company has hooked kids on its platforms using harmful and manipulative tactics.
The 233-page federal complaint alleges that the company engaged in a “scheme to exploit young users for profit” by misleading users about its safety features and the prevalence of harmful content on its products, harvesting data from younger users and violating federal laws on children’s privacy. State officials claim that the company knowingly deployed changes to keep kids on the site to the detriment of their well-being.
The complaints underscore the groundswell of concern among government leaders that major social networks risk the well-being of their younger users by designing their products in ways that optimize for engagement over safety.
Meta spokesperson Liza Crenshaw said in a statement that the company is “disappointed that instead of working productively with companies across the industry to create clear, age-appropriate standards for the many apps teens use, the attorneys general have chosen this path.”
The effect of Meta’s products on young people was thrust into the national spotlight after a 2021 Wall Street Journal report detailed internal research, leaked by Facebook whistleblower Frances Haugen, showing that Instagram made some teen girls’ body issues worse.
The revelations ushered in a political reckoning in Washington and in state capitals across the country, with legislators launching fresh efforts to restrict children’s social media use and regulators renewing scrutiny of Meta’s safety practices.
But efforts to pass new privacy and safety protections for kids online have languished at the federal level, largely leaving states to forge ahead with aggressive new measures.
States such as Arkansas and Utah have passed laws banning kids younger than 13 from social media and requiring teens younger than 18 to get parental consent to access the sites. California, meanwhile, passed rules requiring tech companies to vet their products for risks and build safety and privacy guardrails into their tools. In lieu of federal legislation, parents and school districts have also taken up the matter, filing lawsuits accusing Meta, TikTok and other platforms of worsening the nation’s youth mental health crisis and deepening anxiety, depression and body image issues among students.
The mounting legal cases arrive at a time when research on the connection between social media use and mental health problems remains murky. Earlier this year, U.S. Surgeon General Vivek H. Murthy released an advisory arguing that excessive social media use in childhood may raise the risk of poor mental health outcomes, including sleep problems and body dissatisfaction. But a report by the American Psychological Association found that social media use “is not inherently beneficial or harmful to young people” and called for more research on the subject.
In launching their probe in 2021, state enforcers said the company “failed to protect young people on its platforms” and accused it of “exploiting children in the interest of profit.”
The tech giant rejected the investigation at the time, with Meta spokesman Andy Stone saying the allegations were “false and demonstrate a deep misunderstanding of the facts.”
Since then, Meta has unveiled numerous policy and product changes intended to make its apps safer for children, including giving parents tools to track their kids’ activity, building in warnings that urge teens to take a break from social media, and implementing stricter privacy settings by default for young users.
But the changes have done little to pacify its critics at the state and federal level, who contend the company has shirked its responsibility to protect its most vulnerable young users.
For years, Meta has worried about young people spending less time on Facebook as teens flock to competitors including TikTok and Snapchat. To attract younger users, the company has attempted to replicate TikTok with its short-form video service, Reels.
But the push to attract young people has drawn the attention of regulators who are concerned that apps like Facebook and Instagram hurt young people’s mental health, draw them into addictive products at a young age and compromise their privacy. Meta argues that the research about the effects of social media on young people is mixed and that the company takes precautions to protect users.
After Haugen’s disclosures became public, Meta announced that it was pausing its plans to build an Instagram app designed especially for children younger than 13. Advocacy groups, state attorneys general and lawmakers had urged the company to drop the project out of concern for young people’s mental health.
The company said at the time that it still believed in the concept of a kids-oriented Instagram app because children were simply lying about their age to join Instagram.
The Biden administration is separately scrutinizing Meta’s record on kids’ safety, with the Federal Trade Commission proposing a plan to bar the company from monetizing the data it collects from young users. Meta’s Stone called it a “political stunt” and said the company would “vigorously fight” the move.
While efforts to rein in social media’s impact on kids are gaining steam with legislators and enforcers, they are increasingly running into major hurdles in the courts.
Federal judges recently blocked the newly passed children’s safety laws in California and Arkansas, saying they may violate First Amendment protections and, in some cases, questioning whether the measures would actually keep kids safer.
State and federal enforcers for years have scrutinized tech companies’ handling of children’s personal information, at times leveling huge fines against social media companies. The FTC and New York state in 2019 reached a $170 million settlement with Google-owned YouTube over charges that the company illegally collected data from users younger than 13.
But in recent years, officials have increasingly zeroed in on how tech companies could be exacerbating anxiety, depression and other mental health ills among children and teens.
Indiana, Arkansas and Utah have filed separate lawsuits accusing TikTok of harming kids through addictive features, exposing them to inappropriate content or misleading consumers about its safety protections. Arkansas filed a similar lawsuit accusing Meta of violating the state’s rules against deceptive trade practices.