Platforms still aren’t doing enough to tackle disinformation related to the coronavirus crisis, the European Commission said today.
In a Communication it is pressing tech platforms to produce monthly reports about their efforts in this area, asking for more detailed data on actions being taken to promote authoritative content; improve users’ awareness; and limit coronavirus disinformation and advertising related to it.
It also wants to see increased cooperation between platforms and researchers and fact-checkers in all EU Member States (covering all languages), along with increased transparency around the implementation of policies to inform users in instances where they interact with disinformation.
In recent years the Commission has pressed platforms for action to tackle misinformation — signing up tech giants and adtech players to a voluntary Code of Practice on disinformation focused on disrupting ad revenues and empowering reporting of fakes.
Since then, its assessment of platforms’ efforts to tackle malicious fakes has been lukewarm to say the least, with repeat calls for them to do more. It has also repeatedly called out a problematic ongoing lack of transparency around these self-regulatory efforts.
The coronavirus crisis has further amped up political pressure on platforms over their handling of online disinformation — and tech giants such as Google have responded with some measures aimed at pro-actively surfacing authoritative health information alongside coronavirus content (initially focused on the US, in its case).
Back in April, Facebook also said it would alert users who have interacted with certain types of coronavirus misinformation — displaying a debunking pop-up with messaging from the World Health Organization.
However the Commission said today that it wants to see more evidence that such measures are working.
EU lawmakers are also in the process of drafting new rules for digital services and platforms that could redraw the line of liability and heap new responsibilities on tech businesses related to the content they host. A draft of this incoming Digital Services Act (DSA) is slated for the end of the year, after a public consultation kicked off last week.
“The coronavirus pandemic has been accompanied by a massive ‘infodemic’,” commissioner Josep Borrell said at a press briefing today. “We have witnessed a wave of false and misleading information, hoaxes and conspiracy theories, as well as targeted influence operations by foreign actors.”
Borrell gave examples of disinformation that risks public health which the Commission has seen spreading online in Europe, such as bogus claims that drinking bleach can cure the coronavirus or that washing hands does not help.
He also pointed to vandalism of 5G infrastructure being fuelled by COVID-19 conspiracy theories.
“Some of this is aimed at harming the European Union and its Member States, trying to undermine our democracies, the credibility of the European Union and of national authorities,” he added. “What is more, disinformation in times of the coronavirus can kill. Misleading health information, consumer fraud, cyber crime or targeted disinformation campaigns by foreign actors pose several potential risks to our citizens, their health, to their trust in public institutions.”
Commenting in a statement, the Commission’s VP for values and transparency, Věra Jourová, added: “Disinformation waves have hit Europe during the Coronavirus pandemic. They originated from within as well as outside the EU. To fight disinformation, we need to mobilise all relevant players from online platforms to public authorities, and support independent fact checkers and media. While online platforms have taken positive steps during the pandemic, they need to step up their efforts. Our actions are strongly embedded in fundamental rights, in particular freedom of expression and information.”
“I believe that the fact that [we] worked with the platforms and we designed with them the Code of Practice on disinformation helped to roll out new policies quicker,” she said, discussing coronavirus disinformation and what more platforms need to do, during a press briefing.
“Again platforms need to do more and our Code was just the first step. There is room for improvement. For instance we know only as much as platforms tell us — this is not good enough. They have to open up and offer more evidence that the measures they have taken are working well. They also have to enable the public to identify new threats independently. We invite them now to provide monthly reports with more granular information than ever before.”
Removing financial incentives for those who seek to benefit from disinformation is “crucial”, per Jourová, who said the Commission is taking steps to “gain a better understanding of the flow of advertising revenues linked to disinformation”.
“We need to ensure transparency and accountability,” she added. “Citizens need to know how information is reaching them and where it comes from.”
Jourová announced that TikTok has agreed to join its EU Code of Practice on disinformation — saying she expected it to conclude the formalities “very soon”.
She added that the Commission is also “negotiating” with Facebook-owned WhatsApp about signing up.
She emphasized that EU lawmakers are not asking platforms to take down general disinformation (with some exceptions related to COVID-19; such as where bogus products or advice might cause public harm) — but rather to surface quality, fact-checked information so users are able to get the facts for themselves.
“Twitter is a very good example of what we support,” she said. “Twitter did not remove any declaration of president Trump they just added the facts. And this is what I call plurality and possibility of the competition of free speech. Because we should not rely on just one authoritative declaration when it’s possible to add some facts which might look at it from a different angle. So this is the competition of speeches.
“We never wanted the platforms to remove the content — unless, and here comes the COVID-related situation — unless it is manifestly and clearly harmful to the health of the people. Which is the case of many strange advices and dangerous advices were published through social media.”
During the press briefing the commissioners were pressed on how little resource the Commission is putting into disinformation task forces — with an annual strategic communication budget of only around €5M last year.
Jourová responded by saying that the system of collaboration it’s established to tackle the problem is fed by pooled resources from EU Member States, civic society and the platforms themselves.
“The platforms are investing a lot in creating the task forces, their special units to fulfil the commitments — what we expect from them to do also in this communication — we are engaging civil society and fact checkers, we are engaging the research sector. So you have to speak about much wider field and many other capacities which we are deploying to do that,” she said, adding also that in the COVID disinformation context the health sector is also being engaged to combat junk content.
“I have always said that the fight against disinformation is not about censorship — it’s not about removing the false claims and removing disinformation and misinformation. Those who are responsible for the subject has to proactively defend their facts, has to proactively bring trustworthy information,” she continued.
While disinformation is not generally considered illegal across the EU (with some exceptions in certain Member States), Jourová argued that fakes “can cause significant harm” — though she also suggested the Commission will avoid laying down any hard legal lines here, as it works to update digital regulation.
“For the disinformation, our logic will be to look into how big the potential public harm might be,” she said, giving a hint of how it’s looking at the issue in relation to the forthcoming DSA. “I do not foresee that we will come with hard regulation on that. Because it is too sensitive to assess this information and have some rules — it is playing with the freedom of speech and I really want to come with a balanced proposal. So in DSA you will see the regulatory action very probably against illegal content — because what’s illegal offline must be clearly illegal online and the platforms have to proactively work in this direction. But for disinformation we will have to consider the efficient way how to decrease the harmful impact of disinformation.
“We will focus on its impact before elections, because we see that disinformation — well targeted and designed — can do harm to the free and fair elections. So these are very serious issues we will have to cover.”
Jourová warned that the next health-related disinformation battleground in Europe will be vaccination.
She also named China and Russia as foreign entities that the Commission has confirmed as being behind state-backed disinformation campaigns targeting the region.