My Take On 230
A good friend on social media asked for my opinion on why Donald Trump would be so adamantly opposed to Section 230 of the Communications Decency Act of 1996. For years it was precisely Section 230 that allowed him to expand his unedited voice and create his vast following. Now he’s banned on most of these platforms, including Twitter and Facebook, which some would argue are at long last exerting a form of editorial oversight. Rather than hide behind their legal ability to allow him to rant, they have essentially silenced him.
Ironic, huh? Not exactly the outcome he wanted when he sought to limit this broad protection.
Has something good or bad happened? I think the answer is neither, but something evolutionary is unfolding, and depending on where that takes us, we can decide later like most history if it was good or bad.
Confusing stuff, no question. Let me try to unpack some of it as someone who has been working in this space almost since day one of the commercial internet.
While personally I would say my life has improved without the constant noise of Trump tweets, I’m afraid the world is not that simple. The resolution of this exercise may have frightening connotations in the abstract. Many are worried about free speech and the arbitrary power of a single individual to curtail the public expression of another, which is something that matters dearly to all of us.
I’m not a legal professional by any stretch, but I don’t think a specific defense of ex-President Trump is what matters here. Trump no more understands Section 230 than he understands global trade and tariffs. He wants his speech free and speech against him controlled, like any dangerous autocrat. Let’s set him aside (doesn’t that feel great?) and think about the real risks and privileges waltzing into the arena of public discourse.
For reference, the historic 26 words that constitute Section 230 read: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
As simply stated as possible, that means the technology platforms are not liable for what their users post. They don’t want to be considered authors, publishers, or broadcasters. If the Wall Street Journal prints something that bothers you and you think is unfair or sloppy, you can sue it. Same with legacy brand survivors like CBS, NBC, ABC, Fox, CNN, Us Weekly, or your local talk radio station. You can sue the person who said it or wrote it, too. If you think you have been libeled, you can sue everyone. You are way more likely to lose than win, but your case can be heard in court.
These kinds of traditional media companies have accepted the responsibility to abide by legal standards of accuracy and honesty of some sort, and they must stand by the messages they share. Mostly they print retractions when they find themselves wrong, but that doesn’t stop you from seeking damages. It’s an imperfect system dependent on evolving standards, and whether we like it or not we have learned to live with it.
If you don’t like what I say about you on Facebook or Twitter, you can sue me. You can’t sue Facebook or Twitter.
What’s the difference? Section 230.
Why is there a difference? That’s what’s about to be debated heavily.
Why was the exception created? That will also widely be debated in the months and years ahead, but having been there at the outset, my sense is that it was because federal lawmakers wanted the internet to grow. They wanted to increase free speech, so we all could bring our voices to the marketplace of ideas. They probably had an inkling some of us were wacky and would make up lunatic fringe falsehoods like QAnon, but they also knew if they held the platforms liable for everything published, very little would get published. The internet would have the same filters on it as traditional media, a funnel and a gatekeeper on opinions that limited expression with editorial oversight. They hoped for something more accessible.
Remember, this was a quarter-century ago. Better angels were optimistically anticipated.
The problem here is that the division is not clean when all of our voices are collected. If the technology platforms exert no control, we have the chaos we have experienced. If they exert traditional editorial control to manage or reduce liability, all internet dialogue becomes gated, and as a practical matter, the scale of the task makes it impossible to be done by humans. That would put the editorial control at the mercy of algorithms, which, at this point in their evolution and given the nuance of language, will be even less successful than humans.
That brings us to the present conundrum. If a platform now and again edits a comment to conform to its terms and conditions, has it crossed over to becoming an editor liable for everything else on the platform? According to current law, as private companies, these platforms have a right to state terms and conditions and assert the right to enforce them.
The real question becomes whether multiple infringements of terms and conditions can justly lead to the banning of an individual, like Trump. This is the heart of the matter: Do we want an individual company or CEO deciding who gets to have a public voice and who doesn’t?
I think the banning of Trump is going to open a huge can of worms for the platform companies, because they have just made policy on the fly, and ad hoc policy cannot be extrapolated fairly.
Free speech is an interesting parallel, but only because we largely understand it must have limits to work in practice. Today we know there are legal restraints on free speech because it has been tested and adjudicated. While we now understand that a Nazi group had the right to march in Skokie, we also know that is not the same as falsely yelling fire in a crowded theater. We didn’t always know that. It took a lot of time and argument to unfold and reveal itself to multiple courts. It’s been messy, and yet free speech survives.
I think we’re there with Section 230. It’s the right big idea, but 25 years later with wildly consolidated corporate power and big new media money at play, it requires a great deal of interpretation, nuance, and finesse. It’s no more an absolute than free speech. Yes, we really can disallow direct, personally threatening hate speech without fully destroying the First Amendment. The reasoning is not straightforward except in hindsight, when we consider the more pernicious alternatives.
Regulation here is our friend, not our enemy. My sense is the dialogue we need to have is not about throwing out Section 230, but reasonably debating the rights and responsibilities of social media platforms without making them liable for every post crossing their servers. Here is where it gets even trickier, because the law clearly allows a private business to ban an individual for violation of its stated terms and conditions, yet provides very little in the way of enforcing those standards evenly beyond obvious discrimination.
One person gets banned, another does not. How does one challenge or appeal the equal application of silencing rules? In the final analysis, what assures us, or at least gives us confidence, that such authority is anything but arbitrary? There is no such thing as goodwill or trust when the profit motive of the platform benefits enormously from throwing kerosene on the fire of controversy—fueling viral engagement equates to generating revenue—yet it can eliminate its critics at will under the guise of decency. That is a mega problem we aren’t even close to solving!
We don’t want silence to be the only practical remedy for the economic consequences of our discourse. Likewise, we don’t want any business individual with a profit motive to have the power of doling out silence for convenience. Hearst had that kind of power. Zuckerberg can’t be allowed the same.
The Trump legacy may be the bookends that form around Section 230, bookends that are clearly necessary because the platforms are neither fish nor fowl. This is new ground. Internet platforms are not voices per se, but the application of needed editorial standards around facts and lies does not make them voiceless. As I write often, technology advances much faster than our ability to understand its ethical consequences.
Sadly, this morass is likely to be argued largely on economic grounds, because the remedies surrounding liability are compensated in our system through cash settlement of lawsuits. The key problem with lawsuits is that they favor the well-funded, and while legal, that will never approximate the ideal of fairness. I think there is a lot more at stake than whether a company might be brought to bankruptcy paying fines and settlements, which might cause it to be overly cautious, or bold and flagrant if it has deep pockets to defend itself. Financial penalties can’t be the point, be they absorbable or game-ending. There is a public-interest necessity in our ability to express ourselves. Our government has to protect that and let the business of the internet expand.
Yes, we can.
As for Trump’s point of view, he has demonstrated repeatedly that he only cares about what serves his agenda, not nuance or principle. He has succeeded in blasting open this door, but his own point of view remains self-serving. He is purposefully ignorant, a blunt object in a fragile ecosystem that requires reflection.
We are once again facing the question of whether we truly relish the marketplace of ideas, or if this only matters when it is safe, convenient, and nominally polite. We don’t need to open the door to criminal insurrections that put our democratic nation at risk; the off switch just worked well in that regard, and I’d comfortably welcome it again, if for nothing more than a badly needed time-out.
We have addressed this before, however imperfectly, and I have great faith that given the breadth of legal minds in our nation we will begin to solve it again. Trying to make it an either/or decision is a fool’s errand. We need to retain the big idea of Section 230 and add some guard rails. Once they are tested, we can adjust them. This is likely to be a combination of legislation and judicial resolution. It will be slow and complicated and evolving. It’s worth the ambiguity to sort it out carefully.
Let the real debate begin.
Source: Corporate Intelligence