Editorial: What the rulings against Meta and Google tell us

MANILA, Philippines – Last week there were two major rulings in the US against Meta and Google, two of the most powerful companies in the world. 

The damages are paltry compared to the companies' market values: a jury in New Mexico ordered $375 million, while a California jury ordered $6 million. Even with market fluctuations, Meta is worth over a trillion dollars and Google around three trillion. In comparison, these sums feel like nothing. 

But this is actually a big deal. 

They knew 

The prevailing discussion around these rulings is that they have the potential to be as meaningful as the verdicts that held tobacco companies liable for their products. Tobacco executives knew their products were harmful, and they continued to produce and market them nonetheless, getting generations of people hooked. 

In similar fashion, the designers of these tech platforms, and, more importantly, the top executives, knew they were designing their products in ways that would be addictive and harmful.

Among those features are the endless scroll and algorithmically driven ragebait. Both bypass our brain’s defenses and target what behavioral economists call System 1, the instinctive, emotional mind that operates beneath our rational awareness. 

This means we don’t even get to use our System 2, the slower, reasoning side. We get stuck in these intentionally designed loops, and it becomes incredibly difficult to break out of them even when we know we should. 

These are just two examples among many features deliberately built to hijack our attention and keep us hooked. 

Meta knew all this: the information presented at trial came from the company’s own internal studies, brought to light by whistleblowers. This morally and ethically problematic decision now has a price tag. 

Keep in mind that platforms are designed to maximize engagement precisely because engagement leads to profits. Will the moral costs and these small sums they are being made to pay be enough to push them to change their designs? Or are these costs they will simply eat so they can continue as they are? 

This is the gap where we should be building regulation. The California case is a ruling in favor of a young woman who was underage when she started using Meta and Google products. As I wrote previously, prompted by laws in other countries and rulings like this, we are considering legislation to protect the young. 

This is an “Everyone” problem

This isn’t a problem only of the young. This is an everyone problem. Perhaps it’s particularly harmful to younger people, whose minds are still developing, but we have seen that people of every age are susceptible to the harms of social media. 

Whether it’s developing unhealthy views about oneself, believing disinformation that leads to bad health decisions, or large swathes of the population absorbing narratives seeded by hostile foreign actors to undermine democracy, the range of harms is broad and not confined to any age group. 

One way to think about this is to imagine what social media sites build as cognitive spaces. Where we bring our bodies into physical spaces, we bring our attention and our minds into cognitive ones. 

Now, if physical spaces were designed to harm us, or if physical products featured design specs that caused harm while benefiting the designer, we would ban those products or demand redesigns. 

Here we bring our minds, and on these platforms, where we used to connect with friends and find our subcultures, we are now getting radicalized and our minds are getting hijacked. 

One last point: whenever I advocate for rules and regulations, there’s always someone who will invoke free speech absolutism. That’s its own essay, but the TL;DR is that free speech rights protect you from the government silencing your speech, and charge the government with protecting your speech rights. 

But if you violate platform and community standards, that’s not a free speech issue. So the contested space here is more about what platforms allow, and, more crucially, what is algorithmically amplified. 

You’re free to post whatever you like on your wall, such as that the Earth is flat, or that you should eat only raw meat if you want to be healthy. But if the content you are posting is being boosted by the algorithm and causing harm, that’s something we need standards or legislation to address and enforce. 

Increasingly, and especially in the last ten years, platforms such as Facebook have been used by oppressive regimes to spread disinformation and suppress dissent. And at times these tech companies have stood by while such harms were being committed. 

At the same time, we have seen politicians and government officials connected to disinformation operations. It shouldn’t be shocking that tech representatives claiming to be “free speech absolutists” will side with authoritarian governments in silencing dissent. 

What now? 

So with these rulings in hand, we can begin to craft and hone our own legislation to hold platforms accountable. Remember that all of these social media platforms came to us with a promise: that we would connect better with our friends, deepen relationships, and discover new communities to interact with. This technology was supposed to enrich our lives. 

We need to demand that these products be redesigned to serve us again. These court rulings are showing the way. Go back to the original promise and intentions, and demand that the platforms deliver on them. 

And if they design in ways that harm us, we need legislation that allows us to hold them accountable. Make good products, or pay for the harms. It shouldn’t matter how big or powerful a company is. It should be that simple. 
