Google to run adverts warning users about fake news
Following a successful trial with Cambridge University, Google plans to show advertisements that teach users about misinformation tactics.
Ads from Google's Jigsaw unit, which tackles threats to internet safety, will appear on Facebook, Twitter, TikTok, and YouTube.
According to the researchers, the videos improved viewers' ability to recognize misleading content.
Google said the "interesting" research demonstrated how social media could actively curb the spread of misinformation.
The study builds on a developing area of research called "prebunking", which examines how misinformation can be defused by showing people how it works before they encounter it.
In the trial, 22,000 of the 5.4 million viewers who saw the adverts were polled.
Researchers found that after watching the explainer videos, respondents showed:
· greater awareness of manipulative techniques;
· an improved ability to distinguish trustworthy content from unreliable content;
· and an enhanced ability to decide whether to share content.
The peer-reviewed study was conducted in partnership with Google, which owns YouTube, and will be published in the journal Science Advances.
The study's lead author, Jon Roozenbeek, said its goal is to "reduce the likelihood that someone may be misled by information."
He said it was impossible to anticipate every piece of misinformation that might spread widely, but that recurring themes and tropes could be identified. The question motivating the study was: "Is it feasible to make individuals more resistant to these tropes, even in content they have never seen before?"
Before releasing the videos to millions of viewers on YouTube as part of a larger field study, the scientists first tested them with members of the public in a lab under carefully controlled conditions.
Methodology of the research
YouTube offers advertisers a feature called Brand Lift, which measures whether and how an advertisement has increased awareness of a product.
The researchers used the same feature to measure people's ability to recognize the manipulation techniques they had been exposed to.
Instead of being asked a question about brand awareness, people were shown a headline, told that it involved manipulation, and asked to identify the technique being used.
A separate control group saw no videos at all, only the headline and the associated questions.
That comparison turned out as hoped, Mr. Roozenbeek reported. "What you want to see is that the group that watched the videos identifies the technique correctly substantially more often than the control group," he said.
"Even if it doesn't sound like much, the control group isn't always wrong; they answer quite a few questions correctly as well.

"That improvement essentially demonstrates that you can increase people's ability to recognize these misinformation techniques simply by showing them an advertisement, even in the noisy environment of YouTube."
According to Cambridge University, this was the first real-world field study of "inoculation theory" conducted on a social media platform.
The study's co-author, Professor Sander van der Linden, said the findings justified advancing and scaling up the inoculation approach to potentially reach "hundreds of millions" of social media users.
"Clearly, it's necessary for people to learn how to perform lateral reading and evaluate the reliability of sources," he added. "But we also need solutions that can be scaled up on social media platforms and work with their algorithms."
He acknowledged scepticism about technology companies making use of this kind of research, as well as broader doubts surrounding industry-academia partnerships.
“But in the end, we have to confront the fact that social media corporations dominate the internet information landscape. Therefore, we have developed impartial, fact-based solutions to safeguard consumers that social media firms can genuinely adopt on their platforms.”
“In my opinion, letting social media firms figure things out on their own won’t result in the kind of solutions that enable people to recognize false material spreading on their platforms.”