Why The Supreme Court Is Likely Going To Let The TikTok Ban Proceed



On Friday, the Supreme Court heard arguments in a case that could determine the fate of TikTok for every American seeking to use or publish on the platform: Absent the justices’ intervention, the app will be banned in the United States on Jan. 19 under a law enacted last year. Dahlia Lithwick and Mark Joseph Stern discussed the arguments with Gautam Hans, a professor at Cornell Law School, who specializes in speech, privacy, civil liberties, and technology policy, on this week’s episode of Amicus. A preview from their conversation, below, has been edited and condensed for clarity.

Dahlia Lithwick: One of the ironies of this case is that the court was able to fast-track this oral argument and get all the briefing in really quickly. And yet there are no time limits left at oral arguments. So it sprawled on for what felt like 200 million years.

Gautam Hans: I think my main takeaway from this argument is that this is why judicial deliberation is a virtue in many situations. There are so many complicated issues in this case that I think really deserved a lot of time … but preferably not in a two-hour-and-40-minute argument on an expedited basis. There are important governmental interests here; I think basically everyone concedes national security is one. But what is the tailoring that’s appropriate? Do we think there are real problems with the data economy of collecting massive amounts of data and not really deleting it? How do we think about corporate structures? Does the foreign ownership dynamic versus domestic ownership make a difference?

This is a big swirl; it’s a complicated case. And I started out as a privacy attorney—I am really concerned about data collection. I understand that national security is an issue. Those are all good justifications, but this law doesn’t do a great job of answering the moment, and that’s where I think the First Amendment problems lie.

Part of what is really, really complicated in this case is that there are two different justifications for the law, and that's sort of looming over everything. There was a drumbeat throughout the entire argument that these two justifications don't necessarily hold together with equal force.

Mark Joseph Stern: Solicitor General Elizabeth Prelogar put the rationales behind the law into two buckets. The first justification is that this law is about data protection and security: TikTok is collecting a massive amount of data from users, their age, location, activity, all this sensitive stuff. It's owned by ByteDance, a Chinese company, so the Chinese government has access to TikTok's data. And the American government is worried that the government of China will access that data and somehow weaponize it, maybe even turn American users into spies against the U.S. government. I think most of the justices favored that rationale.

Then there’s the second justification, which is that the Chinese government can manipulate speech on TikTok to further the interests of the Chinese Communist Party and undermine America.The second justification seems aimed at speech—it’s Congress saying, “We don’t like this speech, we want less of it.” That seemed to rankle several of the justices across ideological lines.

Hans: Part of the challenge with this law and this issue is that it’s all swirled up with politics. Now, I’m of the opinion that most laws are swirled up with politics. But we have, at the time of its enactment, former Rep. Mike Gallagher saying this is “digital fentanyl,” and won’t, please, someone think of the children. We have other members of Congress complaining that the app is biased in favor of Palestinians. These are all classic First Amendment problems. And what we then get are some really politically inflected concerns about China, and who can influence the American populace.

In regard to your first point about the Chinese data collection, Mark, I think the solicitor general conceded that some of this is future-looking, right? This idea that someday a future president who had liked things on TikTok, maybe things that were a little prurient, is going to be a problem. But that's not a TikTok-only concern. What happens if Iran buys Facebook? Or if we go to war with Sweden and Spotify Wrapped ends up being something humiliating? What's the limiting principle, and why are we singling out one company when we could be doing some better-thought-out, more generally applicable regulation?

I agree with you on the second point: concerns about "foreign propaganda manipulation" are just editorial discretion by another name. I think that the challenges here are going to be, as they often are in First Amendment cases, line-drawing ones. And the lines, I think, are not nearly as stable as the court might want us to believe.

Lithwick: The argument really felt like a national security case and a First Amendment case walk into a bar. There are two totally different cases here, and you could feel everybody toggling back and forth, depending on which of those they thought they were going to win in the moment.

I think all three of us agree that the Biden administration won after the arguments. This felt like a rout, like it would be 8–1 or 7–2. And yet what was really interesting were these confounding First Amendment questions you’re both raising. The court certainly roughed up Elizabeth Prelogar. They were not, in the main, mollified by the answers she was giving. The really central issue here, as Justice Elena Kagan told Prelogar, is that concern about “content-based manipulation” inherently raises First Amendment problems. The government can’t just say, “We don’t like the content-based manipulation that’s coming from this platform” and then pretend it’s making a content-neutral decision to ban it.

Hans: Right. Last year, the NetChoice case asked what it means to have editorial discretion in the context of an internet platform. And I think NetChoice made it pretty clear that editorial discretion for internet companies is protected by the First Amendment. The implications of that holding are relevant in the TikTok case, yet I don't think the justices were there in their questioning. Maybe they'll engage with it on a meaningful level upon further reflection and deliberation. Or it may be more likely that we'll get a short opinion that defers these complexities for another day, given the speed at which the court's going to have to make a decision.

Stern: I just want to add that I think there’s an open question about whether the law can be sustained on one ground alone—data protection or content manipulation—or whether the court has to approve both rationales to uphold it. TikTok and the content creators argued vociferously that Congress relied on both of these rationales, that they’re intertwined with each other, and if one of them is illegitimate, then the law has to fall.

I don’t know if I agree with that. To me, that would be a strong argument if this were a regulation published in the Federal Register by the Department of Commerce. But this is an act passed by Congress, signed by the president, duly enacted into law, that reflects bipartisan concerns of lawmakers whom we have elected to represent us and protect our country. I’m not one for mindless deference to the political branches or flag-waving in the face of serious free speech concerns. But it does seem to me that if these two rationales can be plausibly disentangled, and one of them stands up on its own, then the court has got to let it stand up, and not bootstrap the forbidden justification to kill the whole law. It just seems that some deference is owed to the decisions of Congress, and we shouldn’t encourage five lawyers in robes to overrule its decisions that easily.

Hans: For me, at a fundamental level, I would really love the court to be more thoughtful about not overruling Congress or an agency all the time, just because it doesn’t like something they did. But this is not the case where I want them to start doing that, because what’s going to happen is what we always fear in the First Amendment context—the court could say, “Which speaker do we like today, and which do we not?” Now, I think the court does that all the time. I understand how these things work. But I don’t think we should validate that kind of ad hoc decision-making. We know that’s where we are, but we don’t have to agree that’s where we should be.




