Senate Judiciary Hearing Exposes Social Media Giants' Role in Child Safety Crisis

On January 31st, the U.S. Senate Judiciary Committee held a hearing denouncing five major social media companies, accusing them of ignoring harmful content on their platforms that specifically targeted children. Senators charged that the platforms were driving the youth mental health crisis and facilitating the spread of child sexual abuse material online.

In the three-and-a-half-hour hearing, the five companies’ executives, representing Discord, Snapchat, Meta, X, and TikTok, were grilled by lawmakers who accused them of having “blood on their hands.” The hearing was a rare display of unity in Washington, as senators from both parties joined in criticizing Big Tech and calling for new legislation to protect users of online platforms.

Mark Zuckerberg, the chief executive of Meta, and Shou Chew, the chief executive of TikTok, received the most attention because of their platforms’ reluctance to support legislation enforcing online child safety. Senator Josh Hawley of Missouri went so far as to tell Zuckerberg, “Your product is killing people.”

Other platforms were not spared from questioning: Snapchat’s Evan Spiegel faced grueling questions over the app’s role in online drug sales, while Discord’s Jason Citron squirmed and stumbled when asked whether he would support new regulations on social media platforms.

In response to the questioning, the tech executives pushed hard to highlight the steps their companies were already taking and their support for moderation and user protection. Snapchat and X publicly endorsed the bipartisan Kids Online Safety Act, while TikTok said it planned to invest $2 billion in safety efforts in 2024, with a focus on the U.S. market.

Discord followed suit, outlining a policy of not encrypting user messages, which leaves messages on the platform accessible to law enforcement, while Zuckerberg stood up to apologize directly to the families of victims who had gathered in the room. “I’m sorry for everything you’ve all gone through, it’s terrible,” he remarked. “No one should have to go through the things that your families have suffered.”

Despite the social media companies’ pledges to do more, politicians remained frustrated by the lack of change. Senators from both parties began to focus on a decades-old law: Section 230 of the Communications Decency Act, sometimes referred to as “the twenty-six words that created the Internet.”

Section 230, passed in 1996 to allow the then-nascent internet to develop, states that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Essentially, it offers tech platforms broad protection from lawsuits over what someone else has posted on their services. That liability shield has long been under attack, with critics arguing the law is too broad for today’s internet. January’s hearing was not the first time Section 230 has been targeted by the government; over the years, it has weathered court challenges, attempts at repeal or reform, and even attacks from former President Trump.

While the law has always had strong defenders, both in the tech industry and on both sides of the political aisle, the recent hearing cast doubt on its future. Senate Democrats and Republicans alike expressed deep concern over tech companies’ unsafe practices on their platforms. With the leaders of several powerful committees now in opposition, some of whom were in Congress when Section 230 was passed, opposition to the law has never been stronger. Even President Biden expressed concerns, calling for changes that would make it easier to hold tech companies responsible for the spread of harmful content.

“I urge Democrats and Republicans to come together to pass strong bipartisan legislation to hold Big Tech accountable,” said President Biden. “We’ve heard a lot of talk about creating committees. It’s time to walk the walk and get something done.”

Alongside the legislative branch, the judicial landscape may also be shifting, much to the detriment of social media platforms. In early January, a California state judge ruled that Snapchat could not use Section 230 as an immunity shield and must face lawsuits from families suing over the sale of drugs to children on the app. The decision means Snapchat will have to answer accusations ranging from product defects to negligence leading to wrongful death. Following an earlier decision allowing lawsuits against Meta, Google, Snapchat, and TikTok to proceed, the recent cases signal a potential turning point for Big Tech and the broad legal protection it has long enjoyed.

For those advocating to rein in tech corporations, the path forward is unclear. Last May, two separate attempts to hold social media companies liable and to weaken Section 230 protections failed at the Supreme Court in cases involving X, then called Twitter, and Google. In Google’s case, the court declined to limit the scope of Section 230, while in X’s case, it ruled that the company’s activities did not amount to aiding and abetting terrorism.

Whatever the path forward, it is clear that big decisions about the future of tech lie ahead, for better or worse.

Michael Ji is a sophomore at Horace Mann School in New York City. As a contributor, Michael is passionate about public policy and other pressing issues affecting today’s youth, and seeks to provide a space for civic engagement and political discourse. Outside of writing, Michael enjoys listening to music or reading the news.
