Virtual Speaker Series: Why Do We Fall for Fake News, and What Can Be Done About It?


>> PAUL: I see the room filling up. It is Tuesday at 12 PM. It's time for the Virtual Speaker Series presented by the Penn State Alumni Association. Looking forward to today's conversation with Dr. Shyam Sundar. He will be talking about fake news and why we fall for it.

Let us know who you are and where you are from in the chat box. Go ahead and let us know where you are from today. Are you on the West Coast in sunny California, or here in cold and snowy Happy Valley, Pennsylvania? Let us know where you are from. Madison, Wisconsin, and a blizzard. I see you, Dave. Vicki, in Harrisburg.

Emily in New Jersey. I see Pam in Lancaster. Good crowd in Madison.

I see you also from Madison. Rachel is here in State College. Welcome to the Virtual Speaker Series. We will get started in just one moment with our presentation from Dr.

Shyam Sundar, who holds a PhD from Stanford University and is the James P. Jimirro Professor of Media Effects and founding director of the Media Effects Research Laboratory right here at Penn State. Looking forward to his presentation and our conversation. If you have questions for Dr. Sundar, you can put them in the chat box or the Q&A.

All questions in the Q&A. We will get to as many as we can. I also see some pre-submitted. We will get to those as well.

Good to see you, as always, Paul, up in Cape Cod. Thank you for joining in. I see my friend and former colleague joining us, now that you are retired. Hope all is well with you.

Kathy, up in Cambridge, Vermont. Where else would you rather be than a Zoom room full of Penn Staters? Thank you for joining us. I am Paul Clifford, CEO of Penn State Alumni Association. I want to welcome everybody to today's Virtual Speaker Series which is being recorded.

Live closed captions are available. You can access them by clicking the closed caption button at the bottom of the Zoom window, and then clicking show subtitles. You can also customize your caption view by clicking the StreamText link posted in the chat. We are live streaming today's presentation, which has been made possible through the support of a donor's fund for access, ideas, and audacious goals. Today's presentation will be archived and available on the website after the event.

This afternoon, we welcome Dr. Shyam Sundar, who holds a PhD from Stanford University and is the James P. Jimirro Professor of Media Effects and founding director of the Media Effects Research Laboratory here at Penn State. Fake news can be deadly. During the pandemic, false information about causes, spread, and care related to COVID-19 has shaped actions at both individual and institutional levels.

In places like India, scores of innocent people have been lynched by vigilante mobs after false rumors of child kidnapping and organ harvesting spread in the form of doctored videos shared via WhatsApp's encrypted messaging system. Dr. Shyam Sundar, with funding from WhatsApp and the National Science Foundation, has been investigating the psychology of our susceptibility to online misinformation and working on solutions to address this modern media problem. Professor Sundar's research investigates the social and psychological effects of technological elements unique to online communication. His experiments investigate the role played by technological affordances in shaping the user experience of mediated communications across a variety of interfaces, from websites and social media to mobile media and robotics. Dr.

Shyam Sundar edited the first-ever handbook on the psychology of communication technology and served as editor-in-chief of the Journal of Computer-Mediated Communication from 2013 to 2017. It's my pleasure to welcome Professor Shyam Sundar to the Virtual Speaker Series today. I will turn it over to you.

Good to see you. >> DR. SUNDAR: Thank you, Paul.

Thank you for the introduction. Good afternoon, everybody. Welcome to this Virtual Speaker Series on fake news. Let me start sharing my screen. The topic for today, as Paul mentioned, is fake news: the psychological aspects of it, why we fall for it, and what we can do about it.

We have studied this in our Media Effects lab for the last 20+ years. Fake news is not new, as you might know. Some old-timers will remember conspiracy theories like the one claiming the TWA Flight 800 crash was orchestrated by the U.S. Navy, back in the 1990s. That was a theory spread by a seemingly credible source, a former Press Secretary to President Kennedy, but these kinds of fake news were quickly spotted by TV and newspaper gatekeepers.

When they did happen, they were retracted if the facts did not check out. This was something we saw even before television and mass media. Misinformation has existed well before media itself and before the Internet.

What is the difference now? The difference lies in many things, but I will focus my remarks today on two that we have done research on in our lab. One is the source of fake news. The other is the modality of fake news.

Around the turn of this century, with the arrival of interactive media, there was a fundamental shift in communication, away from traditional models in which the source was an expert or professional journalist and the receiver was simply a passive consumer of information. The equation changed because interactive media made it possible for receivers to become sources of communication. Receivers became sources.

They became sources and creators of content. They create their own content and disseminate what they create on an unprecedented scale. This dissemination came to a head around the 2016 presidential election, when these receivers created news stories like this, false information that was circulated via social media, leading to what some people call the fake news invasion. Most scholars like me did not take it seriously until we saw this chart from BuzzFeed, which showed that, for the first time, fake news exceeded real news in terms of engagement, especially on social media. This happened close to the 2016 election.

As you can imagine, false information can be quite consequential in the context of elections and politics, where conspiracy theories are plentiful and there is a strong confirmation bias, a need for people to confirm their inherent predispositions. That drives selectivity in what they watch, view, and believe. We all form our own filter bubbles. But fake news matters beyond politics. It matters in the case of literacy.

We try to teach students how to spot correct information: literacy in terms of getting the right information for research, education, and increasingly for our own health. As you well know, COVID-19 has brought its own battery of fake news about causes, cures, vaccines,

and so forth. There are all kinds of fake news about vaccines, how they might cause infertility or carry a microchip from Bill Gates. The question is: do we laugh it off? Do consumers distinguish between fake and real news? Do they really distinguish between professional sources and layperson sources? This was the curiosity I had for my doctoral dissertation back in the 1990s, when I did an experiment with undergraduates at Stanford where I exposed participants to the exact same news stories. The only thing I varied was what I told them about who or what the source was.

One fourth of the participants were told that the stories were selected by news editors in a newsroom. Another fourth were told they were selected by a special computer algorithm. A third group was told other users chose the stories. The fourth group was led to believe they themselves chose the stories. When we looked at the data, we found that, overwhelmingly, people liked the stories much more when they thought other users chose them than when they thought news editors chose them. We found the same pattern for the newsworthiness of the story and the quality of the story.

Quality was rated much higher when other users chose the story than when news editors or they themselves chose it, even though the story content was identical in all conditions. What this tells me is that other users, or peer sources, are quite influential in shaping our perceptions of news stories. This is especially a problem with social media, where what we call source layering is rampant. Source layering is the idea that there are multiple sources layered one on top of another. You can see here a story in the LA Times that was tweeted by Vice President Kamala Harris, which was then retweeted by another person.

Who or what is the source here? Is it the friend who sent it or tweeted about it? Is it the politician who chose the story for tweeting? Is it the Los Angeles Times, the newspaper? Is it Twitter, the social media platform? Who or what is the source? Our studies, and a lot of experiments in my lab, have shown that people think it is the proximate source, the source that handed the story off to you, which in this case is the friend. What source layering does is put the focus on the proximate source, the most immediate source from whom you get the information, and a lot of us get information from aggregator sites like Comcast or Verizon, or from social media sites. Aggregators and social media become the sources we believe in, not professional journalists. We don't stop to think that they are not trained in journalism or in fact-checking. This is one of the problems of sourcing. A related issue is the self as source.

A lot of us get information on personalized devices like cell phones and personalized portals. When we get information in these very personal spaces, we tend to believe it more. We've done studies where, after people customize their online portal, we ask them to add a blog or a feed in which we have placed fake health stories, like raw milk being better than pasteurized, or sunscreen being harmful to health because it deprives us of vitamin D. Lo and behold, we find that people who customize their portal are so wrapped up in their identity that they fail to systematically process these fake health messages.

They are, in fact, persuaded by these fake health messages, and proceed to follow the behaviors advocated in the messages. In summary, why do we fall for fake news? We undervalue professional sources like expert journalistic sources. We ignore the problem of layered sources. We value other users over professional sources. And we scrutinize information less when it comes to us in a very personal space. That is one story, the story about sources.

The second story is about the modality of fake news: the power of images and video. It used to be that seeing is believing. This is happening increasingly with fake news as well. We believe fake news because we are seeing it with our own eyes. The modality of fake news has changed.

Previously, most rumors on social media were text-based. Increasingly, misinformation is coming in multiple modalities. These rumors appear in richer modalities, with pictures, audio, and video. These rich modalities can have deadly consequences. In my native India, for example, on the encrypted platform WhatsApp, which is owned by Facebook, there have been several cases of false rumors sent via video.

Here's an example of how a video had deadly consequences. (Video Playing) >>: Two men snatch a young child off the street in a video that spread dangerous panic in India. The footage was filmed on this street in Karachi, Pakistan. The full video makes clear this was not a real kidnapping, but an advert designed to promote child safety. The version spread on social media has this ending edited out. In India, at least eight people have been killed in a spate of lynchings, mob anger fueled by this and other fake news. This is where they made the video.

This advertising company made the video. They are horrified at how it has been used in India. This is devastating for me, and shocking. I don't have words. As I told you earlier, I want to see the face of the man who edited the video for bad purposes. The video was produced for this charity working on abducted children. We made the video to help society, but it is being used wrongly and people are dying. We condemn this.

Whoever is responsible should be prosecuted. The makers of this short film tried to save children's lives in Pakistan and are now coming to terms with its effects across the border in India. >> DR. SUNDAR: We were motivated by this incident, and a series of incidents like these, to ask the following question: could the use of video be the reason why WhatsApp rumors are having such deadly consequences? Does modality make a difference to the believability of online fake news? If so, why? Modality has a history in our field in terms of psychological processing. Modality has become a variable because news comes to us not just as text, but with richer embellishments like images and videos. Each individual modality, text, picture, audio, has unique characteristics and is processed very distinctly.

That is a psychological truism. Beyond that, you need to think about how they affect us differently. We found in our studies that video is generally more memorable than audio, and text-and-picture stories are more memorable than text-only stories.

But more memorability does not necessarily mean people are processing story details more deeply, or that they are critically evaluating the information. If anything, richer modalities lead to shallower processing.

What are the consequences of shallow processing? It leads to what we call heuristic processing. We follow heuristics, or rules of thumb, like "long message is strong message," or "if it comes from a trusted source or an expert source, a person in a white lab coat, it's more believable." We don't pay attention to the content itself, but to peripheral aspects of the content. That's what we call heuristic processing. It is often contrasted with systematic processing, which is more analytical and comprehensive, involving close scrutiny of the information.

We are all cognitive misers. Our default is to resort to heuristics in order to make expedient decisions. In this particular context of WhatsApp and video, our hypothesis was that the modality of media presentation can trigger certain heuristics, which shape how we perceive the content.

We figured that the realism heuristic would be the one most applicable. That is the rule of thumb that if it seems real, if I can see it, it's believable. We hypothesized that pictures and video would trigger the realism heuristic more strongly, and therefore lead to higher evaluations of credibility even though the story is fake. We conducted this study in India with different stories across content domains like health, crime, and politics. We created three different versions of each story: text-only, audio, and video. As we hypothesized, people found the fake news much more credible, and believed it more, if it was shown in video format rather than audio, and they believed the audio format more than text, even though the content was identical across the three conditions.

Furthermore, we found people were more likely to share a story with family and friends if it came to them as video. In our data analysis, we found that the realism heuristic was the reason this effect occurred. We parsed this out by showing that people who did not know much about the issue, who didn't know about the topic of the news story or had low involvement with it, were the ones more likely to fall for the realism heuristic.

People who knew something about the topic were not as likely to fall for the realism heuristic. This tells us right away that if you are not thinking about the topic or story, if you don't have the necessary background, you are more likely to fall for the heuristic, superficial, shallow processing that is triggered by a rich modality of presentation. In our interviews with participants, we saw this come up over and over again, with people saying things like, "I could see it with my own eyes. That is why it cannot be false."

Yet if you look online for tips to spot fake news (this set is from Facebook), you see they clearly tell you to seriously consider that photographs can be manipulated and videos can be taken out of context. That is one tip among many. Another is about the source, which we talked about earlier: the source of the story is important for you to consider. Even with mainstream sources, there is the bias that sources have. There are sites like AllSides.com where you can find out which media sources are clearly on the left or left-leaning, which ones are center, and which are right-leaning or clearly on the right.

You should factor that in when you consider a piece of information, before you decide to accept it. This has become the new literacy: we are being taught how to spot fake news, to consider the source, to read beyond the headline, to see if other sources support it, and to check whether it is simply satire. This has become even more vital in the context of the pandemic, where we have all kinds of fake news about COVID-19 vaccines, really far-fetched ideas about what the vaccine can and cannot do to us.

Media literacy and news literacy have become a necessary part of our news consumption. We have to apply this literacy to our perception of fake news, becoming not only more literate about sources but also more attentive to digital manipulation. The unfortunate reality is that we live in an age of information overload.

We are consuming news all around us, from various media: smartphones, social media, and so forth. This scrutiny, this vigilance, is not really tenable; it is not humanly possible for us to do all the time. What can be done about it? Our latest solution is to use machine learning to automate this process in ways that make it easy for platforms to tell us whether something is fake or real. In one of our projects, conducted along with the College of Information Sciences and Technology, we are in the process of developing algorithms to detect fake news, so we can compare machines with humans in their ability to suss out different aspects of modality and numerous other things, figuring out whether information is fake or real as it comes in.

This work is supported by an NSF grant, a National Science Foundation grant, under which we are systematically examining the content, the network aspects, how it circulates, and so forth. Keep in mind, fake news is not just one thing. We have written an article about how fake news can be many different things.

It's not just false information. It could be polarized commentary, satire, or plain misreporting. We need an elaborate algorithm that goes through a decision tree to rule out these different possibilities, as in the sketch below.
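(Editor's note: as an illustration of the decision-tree idea Dr. Sundar describes, here is a minimal sketch in Python. The categories come from the talk; the feature fields, domain list, and routing logic are hypothetical placeholders, not the project's actual system.)

```python
# Toy sketch of routing a story through a decision tree before labeling it
# fabricated. The 'story' fields are hypothetical features that a real system
# would compute with trained classifiers, not hand-supplied flags.

KNOWN_SATIRE_DOMAINS = {"theonion.com", "babylonbee.com"}

def categorize(story: dict) -> str:
    if story["domain"] in KNOWN_SATIRE_DOMAINS:
        return "satire"                 # disclosed satire is not fake news
    if story["is_opinion"]:
        return "polarized commentary"   # a viewpoint, not a factual claim
    if story["honest_error"]:
        return "plain misreporting"     # wrong, but not intentionally deceptive
    if story["contradicts_fact_checks"]:
        return "fabricated content"     # false information presented as news
    return "no flag"

# Example with a hypothetical story object:
print(categorize({
    "domain": "example-news.com",
    "is_opinion": False,
    "honest_error": False,
    "contradicts_fact_checks": True,
}))  # -> "fabricated content"
```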

We use all these features to train a machine learning algorithm to detect fake news, based on fact-checked fake news versus fact-checked real news. This is part of an ongoing movement in the communication and technology industry to find automated solutions that flag fake news for us. When this grant came along a few years ago, there was a news story about it. Penn State put out a press release, but pretty soon the news media picked it up and called it a fake news detector. There were lots of headlines talking about a fake news detector, when all we had said was that we would be using machine learning to train machines.
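(Editor's note: the team's actual models are more elaborate and, per the talk, also use network and circulation features. As a minimal sketch of just the supervised training step described above, here is a toy text classifier in Python using scikit-learn; the headlines and labels are invented placeholders standing in for a corpus of fact-checked stories.)

```python
# Minimal sketch of a text-based fake news classifier; NOT the NSF project's
# actual system. Requires scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder corpus: texts labeled 1 (fact-checked fake) or 0 (fact-checked real).
texts = [
    "Pope endorses presidential candidate in shock announcement",
    "Raw milk cures chronic disease, doctors don't want you to know",
    "City council approves budget for new public library",
    "Local hospital expands vaccination clinic hours",
]
labels = [1, 1, 0, 0]

# TF-IDF turns each story into a weighted bag-of-words vector;
# logistic regression then learns which word patterns separate the classes.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# The trained model scores new stories with a probability of being fake.
story = "Miracle drink eliminates virus overnight, experts stunned"
print(model.predict_proba([story])[0][1])  # probability the story is fake
```

A real deployment would train on thousands of fact-checked examples and compare the model's judgments against human raters, as the talk describes.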

This detector became the darling of the media, and we had plenty of news publications talking about detectors. The reason I'm showing you this is to point out how, funnily enough, there is fake news about our fake news project. Pretty soon people started building on this information, and another news outlet called it a device. We kept getting calls asking how to plug in this device.

There were all kinds of embellishments on our story, to the point where people began to think there was a chip we put inside the phone. Several news outlets took liberties with our press release and started characterizing our project in terms well beyond its scope. Some said that our algorithm would purge stories, actually take out fake stories. We started getting phone calls from reporters worried that their stories would be purged, worried that our automated system might not recognize their true stories for what they are.

We were inundated with media chatter about our study and how it might amount to regulation of free speech. People wrote to the National Science Foundation complaining that it was unfair to sponsor such research. All we said in the news release was: wouldn't it be nice if computers and mobile phones told us which news stories are real and which ones are fake? The AP went along and added a "detector" to it, and then weeding out fake news and purging fake news came into the picture. Other news outlets spread that, and the algorithms became devices.

We were only talking about detection, not purging. This unintentional misinformation is another hard problem to solve, in addition to all the deliberate misinformation I talked about earlier. With that story, let me stop; I'm happy to take questions. I thank you for your attention. I must thank the National Science Foundation and WhatsApp, and also my research assistants. Thank you.

>> PAUL: Excellent. Thank you for that presentation. We do have a lot of questions coming in.

A lot of questions were pre-submitted. Let me try to get to some of those first. You talked about identifying fake news.

How can users immediately and easily identify fake news sources and outlets? Many people aren't going to have that graphic from your PowerPoint on their desktop every time they read. Are there key things to look out for? >> DR. SUNDAR: Yes, some of the key tips you see repeated over and over again involve paying attention to sourcing. That is something I talked about: who is this coming from, and what is the motivation of the sources? Very important to consider.

If it looks too good to be true, if it confirms your biases, if it's exactly along the lines of what you're thinking, that's another thing that should give you pause. You need to pause and say, this is too good to be true. Then there is digital manipulation. A lot of stories circulate especially during crisis times. During Hurricane Harvey, people had all these pictures in their social media feeds of crocodiles floating around in backyards. With images like these, you can immediately do a reverse Google image search to make sure the image is actually associated with the story. Often, these are old images used out of context.

Those are the dominant cues you should pay attention to. The source is usually the best giveaway, and ultimately, when we talk about sources, we want to triangulate: make sure there are two sources saying the same thing, and two very different sources. In general, not social media sources. >> PAUL: One of the dangers of fake news is that it is distributed on social media, which makes it so easy for somebody to read a headline and hit like or share. That distributes it to a broader network.

How can citizens help solve or reduce the problem of fake news? >> DR. SUNDAR: Sharing is a very big reason why these fake stories go so viral. There are studies showing that as much as 60% of shared URLs are shared without being clicked.

People don't even click on the story to find out for themselves; they share because the headline seems interesting or sensational. We have a new wave, so to speak, beyond clickbait. Clickbait is about luring you to click. This clickbait is now converting into what we call sharebait, because what people are doing is not really clicking but sharing.

On several platforms like WhatsApp there is now an effort to limit how many people you can share things with. You cannot forward to more than a few contacts. When these WhatsApp killings happened in India, one of the first things WhatsApp did was limit the number of people to whom you can forward a particular piece of information. That way you can contain it. People can be more proactive in trying to be gatekeepers.

They can make sure they are much more deliberate in sharing, rather than sharing for fun or because something aligns with their point of view. They need to exercise caution. In interviews, participants often said they shared because they thought it would be helpful to others.

That is one kind of sharing. There are others who share because their political viewpoint is magnified by the story, so they use it as a way to proliferate that particular point of view. Thinking about intentionality is also very important. >> PAUL: You said something that has sparked questions in the chat box. You said reverse Google search. Can you talk about what you mean by reverse Google search?

>> DR. SUNDAR: You can Google "reverse image search." Whatever image you see, you can feed it into a search engine, just like you put a search term into Google. You will get to see the pedigree of that particular picture or image. In fact, one of the people in the chat has posted how you can do a Google image search.
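(Editor's note: reverse image search is a built-in feature of Google Images and needs no code. As an illustration of the underlying idea, matching a circulating image against a known original even after resizing or recompression, here is a sketch using the Python Pillow and imagehash libraries; the file names are placeholders.)

```python
# Illustration of image matching via perceptual hashing, the kind of technique
# that underlies reverse image search. Requires the Pillow and imagehash
# packages; the file names below are placeholders.
from PIL import Image
import imagehash

# Perceptual hashes change little under resizing, recompression, or small edits,
# so near-identical images produce nearby hash values.
original = imagehash.phash(Image.open("archive_photo.jpg"))
candidate = imagehash.phash(Image.open("viral_post_photo.jpg"))

# Hamming distance between hashes: a small distance means likely the same image,
# e.g., an old photo recirculated out of context during a new crisis.
distance = original - candidate
print(f"hash distance: {distance}")
if distance <= 8:  # threshold is heuristic, not definitive
    print("Likely the same underlying image; check the original's context.")
```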

>> PAUL: Can you talk about -- it seems like whenever somebody identifies fake news, it doesn't take too long for somebody to post links from Snopes.com, either affirming or debunking whatever that fake news might be. How legitimate is Snopes? Is that a good source for trying to debunk fake news?

>> DR. SUNDAR: Snopes and PolitiFact have established themselves as good fact-checking organizations over time. I'm not here to endorse one product over another, but Snopes is one of the ones with a reputation. That doesn't mean they are always correct. They do make mistakes, and I recommend double-checking, not just looking at one source. Studies have come out showing that fact-checking works. If you notice a social media contact spreading fake news and in your response you post the fact-checked version of it, they usually stand corrected.

There are people who will troll you, but in general, we find people change their minds if given convincing evidence from fact-checking organizations. >> PAUL: We have all seen this happen; we have seen family members or friends post or share something on Facebook, and then 100 comments later the conversation has deteriorated into personal attacks, people trying to convince them that what they shared is wrong. A couple of questions. First, you said you have studied the psychology around this.

What is the psychology around the person who has initially posted something and the absolute belief that it is correct regardless of how the conversation has rolled out? That conversation never ends with the person who posted the original piece saying you know what? I was wrong. What is the psychology of the person who wants to double down on what they have shared? >> DR. SUNDAR: The first part of that sharing activity is the psychology of scooping.

If you are a journalist, you know the tendency, the desire, to be the first to get the story out. We are all at some level wanting to get the story out to our network, to scoop the story, so to speak, which inevitably means we don't do thorough fact verification and double-checking. Furthermore, if it is something well aligned with your point of view and you have a vested interest, you dig your heels in and become identified with it. That is what I said earlier: if it comes into your personal space, your identity is so wrapped up in it that any attack on that information is an attack on your identity.

It's very important for people who are questioning this to deal with it in ways that do not attack the identity of the person promoting the false information. There are all kinds of tips for dealing with this situation. If you like, I can show a couple of tips on my screen really quick. This is published guidance on how to talk to your relatives about this. This is a very important aspect. First, assess how willing they are to listen. If they are highly invested in that point of view and they have systematically processed that information, not heuristically, they probably will not be very willing to listen to other opinions.

You have to be careful about whether it's an important enough battle to fight. You can go private and direct message them, doing it out of public view so as not to embarrass them. Don't attack them. You should ask questions.

You should frame this as the two of you being on a common journey to find the truth, rather than getting into a battle with the person. You should not overwhelm them with scientific jargon. Try to find common ground with each other. What are some basic established facts between you two? What are the sources you both trust? There is emotion involved. You have to establish your credentials, that you know what you are talking about. You have to tailor the message, because their conspiracy theories might not be the same as those we see in others.

You should also have a discussion about sources, and about agreement among known experts whom they might respect. Talk about what is true, rather than just rejecting what they say. Some people don't know how the whole social media network works, so educating them about that is very helpful as well. These tips are available for talking to relatives about these fake news spreads, especially on encrypted platforms like WhatsApp, where it's difficult for corrective action to be taken by others. It has to happen between you and the person sharing.

I can provide links. It's behavioral science. If you were to look this up online, you can search for something like "dealing with misinformation" and you will find it.

If not, I'm happy to follow up with you. >> PAUL: Someone in the chat box is asking for a definition of fake news. There are other questions related to that: sometimes we hear different opinions, one side calling the other side's opinion or position fake news, and that may not necessarily be true.

Is it safe to say a good definition of fake news is news intentionally put out there to mislead the audience? >> DR. SUNDAR: Yes, but there are many different kinds of fake news. As we talked about, fake news is not just false information. There is also propaganda, and what we call native advertising: ads masked to look like news stories.

Intent is one thing. Certain types of fake information are meant to deceive, but a lot is meant just to get clicks. The kids and teenagers in Macedonia who manufactured fake news about Hillary Clinton and Donald Trump during the election were doing it to seed the stories across social media platforms so they could get more clicks and make money. They were not looking to deceive anyone for political purposes; the incentive was financial.

So looking at intentionality will not get us the true answer. Rather, it helps us understand where they are coming from and what the underlying motivation might be. For false news itself, we need to consider whether it aligns with the facts, with a true event that occurred. Did the Pope endorse Trump or not? Things like that.

We verify that with multiple sources. That is usually the way to verify, rather than going deeper into intentionality, which is often difficult to establish. >> PAUL: What would you say -- you showed a great video, an event that was filmed in Pakistan but had implications in another country.

Are there legal implications for the person who edited the video and disseminated it in India, given that it led to the lynchings you talked about? For the people who produced the video, is what happened criminal activity? >> DR. SUNDAR: The people who produced the video are the ones interviewed in that BBC news story. That is the ad agency that put out the video for the purpose of cautioning people about how easy it is for someone to kidnap a child; it only takes a few seconds. What others did was misappropriate the video, take out that cautionary message at the end, and make it look like there are shady strangers coming into your neighborhood to kidnap your children.

Everybody who was a stranger in town or looked like the person in the video was then rounded up and lynched. The people who first edited that message out and pushed the video through are the ones legally liable. Even before getting to legality, they are violating the terms and conditions of the platform. YouTube, for example, has certain terms and conditions. They keep issuing new ones and came out with one for COVID-19.

They will not allow any videos claiming the vaccine is a hoax. When the platforms' own terms and conditions are violated, that is the first line of defense. Furthermore, when there are deadly consequences, the perpetrators are even more clearly criminally liable. The challenge is catching who does this and establishing what their intentions are. It certainly is a legal issue.

I'm not a lawyer, but it certainly has consequences. >> PAUL: Do you see laws being passed specifically targeting fake news, either here in the States or abroad? >> DR. SUNDAR: Historically, we have had laws covering information that could be detrimental or malicious. I can certainly see how false information of certain kinds, especially with certain motivations, could have those kinds of legal consequences. It is a very tricky situation, because satire, for example, should not be penalized as long as it is fully disclosed as satire. The problem is the person who takes a story from The Onion and sends it along as if it were a real story.

That subtle change is what is actually problematic from a legal point of view. Laws that catch up with this kind of misappropriation are, I think, definitely on the cards. But it's a very slippery slope because of First Amendment issues.

>> PAUL: I thought one of the more interesting slides you put up was the one about who the source is. Is it the person who shared on Twitter? Is it the politician? Is it the original news source? Off to the side you put: or is it Twitter? Is the platform the source? I know that has become the topic of recent conversation. What do you think the responsibility of the platforms on which this information is disseminated is, in terms of policing the information? >> DR. SUNDAR: This is the hot-button issue right now in the tech industry, especially with Twitter banning Trump for good. Before that, for about two years, there was a debate going on. Soon after the 2016 election fake news fiasco, Mark Zuckerberg of Facebook shunned the idea that Facebook was a serious platform for news, saying people knew better. The stance was: we should not be in the position of being a publisher.

We are not a publisher, we are a newsstand. The claim is that the platform is more like a vendor who sells the newspaper than a publisher who is responsible for the published information. That was the stance of these platforms. Increasingly, as you see now, that is changing.

If platforms indeed start using these kinds of terms-and-conditions violations as grounds for removing people, that means they are looking at content. They are gatekeeping content. The moment you get into gatekeeping content, you become a publisher. The moment you are seen as a publisher, you become legally liable. In some ways, this is a trap, so to speak, for social media platforms.

On the one hand, anybody and everybody should be able to have their say. On the other hand, some of what is being said and broadcast is having deadly consequences. Some people are broadcasting vile stuff live through these platforms, and there needs to be oversight or detection. At this point, policy has not caught up with this reality. It is still being debated, and there will be all kinds of debates in the next few months and years about this issue.

We don't have very clear answers to this. There are definitely bad situations. >> PAUL: Whitney is asking: is there a role for the MBTI or EQ-i in identifying fake news? >> DR. SUNDAR: We are working towards automated solutions for identifying fake news. Certainly, there are lists of dos and don'ts that you can follow to try to identify fake news. Still, it requires vigilance if done by humans.

Right now, automated solutions seem to be the way to go. Fact-checking organizations are relied upon by Facebook and others, which have armies of people fact-checking and moderating these stories. Automation can be used at a low level to escalate items for human scrutiny, and then the human moderators make the ultimate decision. Content moderation is the key going forward. It's a combination of machines and humans.

I don't think we will have any single metric that is universally accepted. It's going to be platform-specific for the most part. >> PAUL: Have you found in your work that particular groups or demographics are more susceptible to fake news than others? >> DR. SUNDAR: We studied this in the context of WhatsApp.

We found a lot of these WhatsApp killings happened in rural areas in India. When we recruited participants, half came from urban areas and the other half from rural areas. We systematically looked at the data to see if rural participants were more susceptible to the effects of video. We found that rural participants were indeed less likely to be aware of digital manipulation than urban participants. Interestingly, we also found that sharing was much more common among people who are highly educated and wealthy, not necessarily people who are rural.

When it comes to making things go viral, pretty much everybody is capable, not just people who are less educated or of lower status in society. We are not finding big differences there. We do see some of the deadly effects, the mob mentality, happening in certain areas associated with lower education or lower socioeconomic status. >> PAUL: There are so many questions that we don't have time to get to. This is definitely a hot topic, as we hear it referenced almost every day in one corner of social media or another.

We want to thank you for joining us here on the Virtual Speaker Series today. Thank you for your presentation and for joining us.

>> DR. SUNDAR: Thank you for having me. >> PAUL: I want to thank everyone who joined us on Facebook Live and here in our Zoom room. We will be hosting additional speaker sessions in the coming weeks and months.

This programming is in addition to an array of online career and networking events offered through the Penn State Alumni Association.
