
A generative AI model shows that fake news has a greater impact on elections when it is published at a steady pace and without interruption

It is not at all clear that disinformation has so far influenced an election that otherwise would have turned out differently. Still, there is a strong feeling that it has had a significant impact.

With AI now being used to create highly credible fake videos and spread disinformation more efficiently, we are right to worry that fake news could change the course of an election in the not-too-distant future.

To assess the threat and respond appropriately, we need to better understand how damaging the problem could be. In the physical or biological sciences, we would test a hypothesis of this type by repeating an experiment many times.

However, this is much more difficult in the social sciences, because experiments often cannot be repeated. If you want to know what impact a particular strategy will have on, say, an upcoming election, you cannot rerun the election a million times to compare what happens with and without the strategy.

One might call this the one-history problem: there is only one history to follow. You cannot turn back time to examine the effects of counterfactual scenarios.

To overcome this difficulty, a generative model is useful because it can create many histories. A generative model is a mathematical model of the root cause of an observed effect, together with a guiding principle that tells you how the cause (input) is transformed into an observed effect (output).

By modeling the cause and applying the guiding principle, we can generate the many histories – and therefore the statistics – needed to investigate different scenarios. From these, the effects of disinformation on elections can be estimated.

In the case of an election campaign, the information available to voters (input) is the primary cause, and it is translated into opinion polls showing changes in voting intention (observed output). The guiding principle concerns the way people process information: they act to minimize uncertainty.

So by modeling how voters receive information, we can simulate subsequent developments on a computer. In other words, we can create a “possible history” of how opinion polls change between now and Election Day. We learn virtually nothing from a single history, but we can run the simulation (the virtual election) millions of times.

Due to the noisy nature of information, a generative model cannot predict a future event. But it provides the statistics of various events, which is what we need.
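
To make the “many possible histories” idea concrete, here is a minimal sketch – not the researchers’ actual model; every function name and number is invented for illustration. Each history is a poll trajectory driven by noisy incoming information, and the useful output is the statistics across many runs:

```python
import random

def simulate_poll(true_margin, n_steps=100, noise=0.02, seed=None):
    """Generate one possible history of a two-candidate poll.

    The poll starts at 50% and drifts toward the 'true' final margin,
    perturbed at each step by noisy incoming information.
    """
    rng = random.Random(seed)
    poll = 0.5
    for _ in range(n_steps):
        # Each step mixes a little true signal with noise.
        poll += true_margin / n_steps + rng.gauss(0, noise)
    return poll

def win_probability(true_margin, n_runs=10_000, seed=0):
    """Fraction of simulated histories in which candidate A ends above 50%."""
    rng = random.Random(seed)
    wins = sum(
        simulate_poll(true_margin, seed=rng.random()) > 0.5
        for _ in range(n_runs)
    )
    return wins / n_runs
```

A single call to `simulate_poll` is one “possible history” and tells us almost nothing; calling `win_probability` aggregates thousands of them into the outcome statistics that a single history cannot provide.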

Modeling disinformation

I first came up with the idea of using a generative model to study the effects of disinformation a decade ago, without foreseeing that the concept would, unfortunately, become so relevant to the security of democratic processes. My original models were designed to examine the impact of disinformation on financial markets, but as fake news became more of a problem, a colleague and I extended the model to study its impact on elections.

Generative models can tell us how likely a given candidate is to win a future election, given today’s data and a specification of how information on election-related issues is communicated to voters. This makes it possible to analyze how the probability of winning is affected when candidates or political parties change their positions or communication strategies.

We can include disinformation in the model to examine how this affects the outcome statistics. Disinformation is defined here as a hidden component of information that creates bias.

Including disinformation in the model and running a single simulation tells us very little about how it changed the opinion polls. But if we run the simulation many times, the statistics tell us the percentage change in the probability of a candidate winning a future election when disinformation of a given level and frequency is present. In other words, we can now measure the impact of fake news through computer simulations.
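
A hypothetical sketch of that measurement, compressing each virtual election into a single noisy final poll (the margin, bias, and noise figures are invented, not taken from the study):

```python
import random

def election_win_prob(total_bias, true_margin=-0.04, noise_std=0.2,
                      n_runs=10_000, seed=2):
    """Monte Carlo estimate of P(candidate A wins).

    The final poll is 50% + true_margin + total_bias + Gaussian noise;
    total_bias is the cumulative shift caused by disinformation.
    """
    rng = random.Random(seed)
    wins = sum(
        0.5 + true_margin + total_bias + rng.gauss(0, noise_std) > 0.5
        for _ in range(n_runs)
    )
    return wins / n_runs

# Percentage change in the win probability caused by disinformation:
p_clean = election_win_prob(0.0)   # no disinformation
p_fake = election_win_prob(0.1)    # biased shift from fake news
pct_change = 100 * (p_fake - p_clean) / p_clean
```

Here `pct_change` is the kind of summary statistic the article describes: not a prediction of any one election, but a measure of how much disinformation moves the odds.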

I want to emphasize that measuring the impact of fake news is different from predicting election results. These models are not designed to make forecasts. Rather, they provide statistics that are sufficient to estimate the impact of disinformation.

Does disinformation have an impact?

One disinformation model we considered is a type that is released at a random moment, grows in strength for a short period, and is then damped (for example, by fact-checking). We found that a single release of such disinformation, well before Election Day, has little impact on the election outcome.

However, if the release of such disinformation is persistently repeated, it will have an impact. Each release of disinformation biased toward a particular candidate shifts the polls slightly in that candidate’s favor. Across all the simulated elections in which that candidate lost, we can then determine how many of the results would have been reversed by disinformation of a given frequency and strength.
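
The reversal count can be sketched in the same toy spirit – nothing here comes from the researchers’ actual model; `poll_history`, the burst schedule, and all numbers are invented. Each simulated election is replayed with and without periodic biased releases, and we count how often the loser becomes the winner:

```python
import random

def poll_history(bias_per_burst=0.0, n_bursts=0, n_steps=100,
                 true_margin=-0.04, noise=0.02, rng=None):
    """One poll history; disinformation appears as periodic biased shocks.

    true_margin < 0 means candidate A 'should' lose absent disinformation.
    """
    rng = rng or random.Random()
    poll = 0.5
    burst_every = n_steps // n_bursts if n_bursts else n_steps + 1
    for step in range(1, n_steps + 1):
        poll += true_margin / n_steps + rng.gauss(0, noise)
        if step % burst_every == 0:
            poll += bias_per_burst  # biased shift toward candidate A
    return poll

def reversal_rate(bias_per_burst, n_bursts, n_runs=10_000, seed=0):
    """Among histories where A loses without disinformation, the fraction
    flipped to a win when the same noise is replayed with disinformation."""
    base = random.Random(seed)
    flipped = losses = 0
    for _ in range(n_runs):
        s = base.random()
        clean = poll_history(rng=random.Random(s))
        if clean <= 0.5:
            losses += 1
            dirty = poll_history(bias_per_burst, n_bursts,
                                 rng=random.Random(s))
            if dirty > 0.5:
                flipped += 1
    return flipped / losses
```

Because each pair of runs shares its random seed, the comparison isolates the effect of the biased releases from the background noise.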

Except in rare cases, fake news in favor of a candidate does not guarantee that candidate a victory. But its impact can be measured with probabilities and statistics: How much has fake news changed the probability of winning? What is the probability that an election result is flipped? And so on.

One surprising result was that even if voters do not know whether a particular piece of information is true or false, knowing the frequency and bias of the disinformation is enough to largely eliminate its impact. Simply being aware of the possibility of fake news is an effective antidote to its effects.
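
This suggests a simple mechanism: voters who know the frequency and average bias of disinformation can subtract its expected cumulative effect from what they observe. A toy sketch of such debiasing, with all names and numbers invented for illustration:

```python
import random

def final_poll(true_margin, total_bias, noise=0.02, n_steps=100, rng=None):
    """Final poll figure: 50% + true margin + cumulative disinformation
    bias, plus accumulated information noise."""
    rng = rng or random.Random()
    poll = 0.5 + true_margin + total_bias
    for _ in range(n_steps):
        poll += rng.gauss(0, noise)
    return poll

def win_rate(true_margin, total_bias, correction=0.0, n_runs=10_000, seed=1):
    """Fraction of runs in which the (possibly debiased) poll ends above 50%.

    An informed electorate sets correction to its estimate of the
    cumulative disinformation bias and discounts the polls accordingly.
    """
    rng = random.Random(seed)
    wins = sum(
        final_poll(true_margin, total_bias, rng=rng) - correction > 0.5
        for _ in range(n_runs)
    )
    return wins / n_runs
```

With these invented numbers, an electorate that subtracts the known cumulative bias (`correction` equal to `total_bias`) recovers the same win statistics as the disinformation-free case, while an uninformed electorate sees a distorted probability.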

[Image: a man holding a giant magnifying glass to a piece of paper. Caption: Alerting people to the presence of disinformation is part of the process of keeping them safe. Shutterstock/eamesBot]

Generative models alone do not provide countermeasures against disinformation; they only tell us the extent of its impact. Fact-checking can help, but it is not particularly effective (the genie is already out of the bottle). But what if the two are combined?

Since the effects of disinformation can largely be averted by informing people that it is happening, it would be useful if fact-checkers published statistics on the disinformation they uncover – for example: “X% of negative claims about Candidate A were false.” An electorate armed with this information will be less affected by disinformation.
