
Ramesh Srinivasan: Disinformation’s Divisive Effect on Society

By Joanie Harmon
Ramesh Srinivasan, UCLA Professor of Information Studies and Digital Media Arts

Scholar of digital culture discusses the loss of trust and fair discourse among internet users on global issues.

Throughout his career, UCLA Professor of Information Studies Ramesh Srinivasan has critiqued, and proposed alternatives to, a status quo in which technology corporations such as Amazon, Facebook, and Google hold nearly unlimited reach into both the public and private lives of their billions of users worldwide.

Now, in the wake of the COVID-19 pandemic, Srinivasan has called attention to the deluge of disinformation on the internet, with its conspiracy theories, alternative wellness methods, and other content that has the potential to mislead, confuse, and ultimately endanger users.

Professor Srinivasan is also a faculty member of UCLA’s Design Media Arts department and the founder and director of the Digital Cultures Lab at UCLA. He has studied the relationships between new technologies (the internet, social media, and AI) and political, economic, and social life in more than 70 countries, and has worked with governments, businesses, activists, and civil society organizations to advise on technological futures. His books include “Beyond the Valley: How Innovators around the World are Overcoming Inequality and Creating the Technologies of Tomorrow” (MIT Press, 2019), which Forbes named a Top Ten Tech Book of 2019. Other publications include “Whose Global Village? Rethinking How Technology Impacts Our World” (NYU Press) and “After the Internet” (with Adam Fish, Polity Press).

Srinivasan is a regular TED speaker and has made routine media appearances on MSNBC, NPR, Al Jazeera, Democracy Now!, CBS, The Young Turks, AtlanticLive, and the Canadian Broadcasting Corporation (CBC). He has contributed op-eds and had his research featured in international publications including the Los Angeles Times, The Guardian, Wired, The New York Times, Al Jazeera English, The Washington Post, FAZ (Germany), The Financial Times, CNN, Folha de São Paulo (Brazil), BBC News, the Christian Science Monitor, National Geographic, Quartz, and The Economist.

Srinivasan served as a national surrogate for Senator Bernie Sanders’ 2020 presidential campaign and as an innovation policy committee member for President Biden.

Professor Srinivasan was recently featured in “Conspirituality: How Wellness Became a Gateway for Misinformation,” a CBS Originals documentary on this digital pandemic of disinformation. The Latest spoke with Srinivasan about the dangers of algorithms and the internet’s ability to show users worldwide “nightmares we have yet to have.”

How was the pandemic affected by the questioning of science?

Ramesh Srinivasan: The main challenge here is that we as an American public and as a global public no longer have trusted forms of authority, or even trusted places we can go that are devoted to all of us and our interests… to figure out what we can count on. The fact that those sorts of spaces – online or not – are missing, means that everything will be questioned. It turns out that questioning scientific knowledge works very well for these technology companies and their algorithms that prey on people’s suspicions and doubts.

Like I said in the documentary, there are a lot of good reasons for people to ask questions about pharmaceuticals, about vaccines… about whether our pharmaceuticals are treating the symptoms of problems or making us healthier (wellness versus symptoms). There are a lot of legitimate questions to be asked about how big pharmaceutical companies have exploited publicly funded research, like NSF and NIH research. There are a lot of legitimate questions to be asked about certain forms of commercial and economic corruption.

The issue is that the digital environments we have been forced to rely upon, especially in a pandemic where we have to socially distance and can’t come together physically, are not set up for us to talk to one another in ways where we can be tolerant and patient and understand one another. They’re designed to divide and conquer us.

And that’s the problem, because if an outrage- or anxiety-based model is what works to keep people on your platform forever, it’s disastrous for our social fabric. It’s disastrous for our ability as a public to come together and make decisions, not just around the pandemic. It’s a tried-and-true thing about human beings that we have a sympathetic nervous system that gets activated when we are suspicious or when we feel outraged. But being hyper-activated is not the only way for us to be. It ends up draining us, and it makes it very difficult to have calm, rational, tolerant, compassionate dialogue with one another. This is going to have spillover effects, because there is no basis for people to have dialogue with one another moving forward. That is what really needs to be transformed and changed.

How do we get to the point where users believe random individuals without scientific credentials who are just making videos from home about the pandemic, climate change, or other serious issues?

Srinivasan: The reason that happens is that we just blindly turn to our algorithmic social media feeds to curate the world for us. There is so much information online. We are all overwhelmed with information, and we are all multitasking because our phones are basically always on and induce us to do lots of things at the same time. It’s become much harder to pay attention, so to make sense of the world, we are blindly trusting these “personalization technologies.” And they claim, by their branding language, to be personalized for us. But personalized on what basis?

Is the basis just to outrage us and keep our attention, which can turn into addiction? Or is the basis one that serves not only the interests of a corporation but also the interests of the wider world, of society, and of the health of the people who live within it?

That corporation still exists in a world where people can get sick, in a world where we are having record numbers of wildfires and hurricanes and droughts, where ice caps are melting. I don’t think their intention is to divide us, but I think they realize that this is a tried-and-true way of getting people’s attention, which is, again, hyping up people’s nervous systems and making them on edge, anxious, defensive, reading [only] titles, not stories, like clickbait. They are living within a world that is being torn apart from within, and they’re prospering off of that division.

This is partly a function of what’s happened with the internet more widely, where the amount of information has exploded on us so that we turn to these corporate platforms to make sense of it all. But it’s also a function of a larger dynamic that has occurred in this country. The public and the investments of the public are the baseline; they’re the foundation by which all this corporate and commercial wealth has been able to come about.

A lot of Google’s algorithms were initially funded in part by research grants. The internet itself was funded by United States taxpayers, and there’s no Facebook without the internet. So the public is owed not only a debt of gratitude; the public should really be respected. It’s public investments that build roads and bridges, and that built the internet.

All of these companies, the biggest and wealthiest companies ever, only exist because of public investments, so they need to be accountable to the public and also serve the public’s interests. These Amazon vehicles are all over the place; I see them all the time in my neighborhood. They use the roads that we all pay for, but… Amazon didn’t pay any taxes last year. We have been betrayed by these technology companies that are resting upon public investments, yet dividing the public.

The Wall Street Journal has done some really good investigative reporting, about Facebook in particular. They reported, for example, that Facebook’s own internal research concluded that younger people who use Instagram tend to become depressed – that there’s an onset of depression and other anxiety-related disorders that occurs through the use of Instagram. And [Facebook] did nothing about it; they buried the research.

So, [technology corporations] are aware of how bad the status quo is for our personal and collective psychology. Yet they’re not doing anything about it, and that’s because they’re doing so well. They have so many billions of users, constantly on these [platforms]. I’m looking right now at a billboard on Sunset Boulevard for Oculus, another interface by which Facebook will be gathering biometric information: where your eyes go, and other sorts of biological information.

The goal is to grab data from us at all times, in all places, in every way possible, without telling us what’s being gathered, for how long, or by whom, and to feed that back into arousal- and outrage-optimized content. But they’re not socially or globally responsible to the people who are facing so much risk and pain.

How would a Digital Bill of Rights address this?

Srinivasan: There are two or three pieces of legislation that representatives in the House have put out, but they’re very, very small-scale things. They’re trying to carve into some of the immunity, known as Section 230, that shields technology companies from liability for untrue content published on their platforms. But that’s the way Congress works.

My Digital Bill of Rights is extremely expansive. It deals with economic issues, surveillance and privacy issues, and disinformation and artificial intelligence issues. It’s really trying to function across the board [and] transform all things digital and data-oriented to be balanced – definitely to serve business and commercial interests, but not at the cost of everything else.

They’ve designed some brilliant technologies, every one of the big tech companies. I give them a lot of credit for it. But let’s think about a design model that is great for them and not destructive of everything else. 

In the CBS documentary, you describe how tech companies exploit users’ fears through “tailored, targeted algorithms.” How would you make users more aware of the fact that their own fears and prejudices are regurgitated to them online?

Srinivasan: The thing is, we’re not only being fed back our own prejudices, but we’re being fed fears that may not have even entered our subconscious. For example, I’ve been getting targeted ads related to cancer. I don’t have any symptoms of cancer. I don’t know where it comes from. Maybe it’s my age, and the fact that I’m pretty health-conscious in my 40s.

We’re not just getting feedback based on our fears or biases or prejudices. We’re actually being fed nightmares we have yet to have. Imagine the rabbit hole of hysteria that can come out of that. This isn’t some manager or editor on YouTube or Twitter, saying, “Let’s feed Ramesh this sort of thing.” These are computational algorithms that are recognizing that [this personalized] content can arouse people and get them all revved up and agitated. 

The Wall Street Journal reporting on Facebook was helpful because it actually uncovered that Facebook recognized that negative content would create more engagement. So, let’s look at that in relation to the gaslighting that’s going on, or in regard to our ex-president, who specialized in epithets and insults. There’s a reason why, until he got deplatformed, he was the central figure on Twitter; it’s not accidental. It’s true of anyone: the easiest way to go viral is to be hateful. You want to follow the obscene, or the salacious, or the hateful. That’s not me saying that – it’s what the Wall Street Journal uncovered.

I believe that these kinds of realizations shouldn’t inspire dismay. They should inspire determination to get us on a better path. Otherwise, this is going to keep happening again and again and again – every climate crisis, every pandemic, every scandal.

I just want to say that we can do better. I think we will do better, and I think it’s going to take all of us being aware – recognizing that the stuff that comes our way online is not to be blindly believed or trusted. See it as part of a targeted machine that is a cash cow for these companies. See it for what it is. Ultimately, they should be disclosing to us what they know about us and what is influencing what we see, and giving us greater power to change that entire experience.

For Professor Srinivasan's recent media commentary on the October 4 Facebook outage, visit these links.

Daily Mail

The Independent

BNN Bloomberg
