Pennsylvania's state House of Representatives on Wednesday approved legislation aimed at regulating how online social media platforms interact with children, although its provisions resemble those in state laws that have been blocked in federal courts or are at issue in a case before the U.S. Supreme Court.
The bill passed nearly along party lines, 105-95, with 10 Republicans voting with most Democrats for it and seven Democrats voting with most Republicans against it.
It faces an uncertain future in the Republican-controlled state Senate, and the nation's highest court may soon decide whether state-level provisions like the ones in the bill can be enforced.
The bill would require social media platforms to allow users to report “hateful conduct,” such as threats or bullying, and to publicize a policy for how they will respond to such reports. It also would require users under 18 to get parental consent and bar the platforms from “data mining” users under 18, meaning sifting through their data to find specific information or to gain insight into their patterns and habits.
The sponsor, Rep. Brian Munroe, D-Bucks, said the concepts in the bill are nothing new, likening them to the age-related restrictions the government places on movies, driving, drinking alcohol and smoking, or to the parental permission required for things like field trips and school sports.
“Time and time again, we’ve acted in the best interests of children by looking at the exposure to potentially harmful activities and said, ‘not at that age and not without your parents’ OK,’” Munroe told colleagues during floor debate.
Parents and children are asking for such regulation, Munroe said.
The Washington-based Computer and Communications Industry Association — whose members include Google, owner of YouTube, and Meta, owner of Facebook and Instagram — pointed out that the legislation, called House Bill 2017, has similarities to laws in other states that are being challenged in court.
“While the goal of protecting younger users is commendable, HB2017 risks infringing upon younger users’ ability to access and engage in open online expression and could cut off access to communities of support,” the association said in a statement. “There are also significant data privacy and security concerns associated with the data collection that would be required to verify a user’s age and a parent/legal guardian’s relationship to a minor.”
California-based Meta has said that parental supervision tools and other measures are already in place to ensure teens have age-appropriate experiences online, and that algorithms are used to filter out harmful content.
The bill's “hateful conduct” provision is based on a 2022 New York law that has been blocked in federal court.
Last year, Utah became the first state to pass legislation requiring minors to get parental consent before using social media. That legislation has been challenged in federal court by the trade group NetChoice.
Also last year, federal judges put on hold an Arkansas law that required parental consent for children to create social media accounts, as well as a California law barring tech companies from profiling children or using personal information in ways that could harm them physically or mentally.
Earlier this year, the U.S. Supreme Court heard arguments in a case that sprang from legal challenges to state laws in Florida and Texas that seek to regulate Facebook, TikTok, X and other social media platforms.
The details of the two laws vary, but both seek to prevent the social media companies from censoring users based on their viewpoints.