Policy Brief
Tackling disinformation: the EU Digital Services Act explained
11 November 2023
The European Union has long recognised the role of online platforms as gatekeepers of information, and has worked to tackle disinformation by creating a regulatory framework that holds platforms accountable for the way they promote harmful content.
The EU’s passing of the Digital Services Act in 2022 is its most comprehensive and ambitious effort to create a legislative framework designed to curb illegal and harmful content online, including hate speech and disinformation. It is supported by the Code of Practice on Disinformation, first created in 2018 and updated in 2022, which proposes a set of practical measures that online platforms can adopt to demonstrate compliance with the Digital Services Act.
This policy brief highlights areas where the Digital Services Act and Code of Practice on Disinformation open space for tools that help identify and analyse disinformation and malign actors, and that assist in automating the workflow of content moderators. There are significant opportunities for AI and automation tools to aid oversight bodies, platforms, advertisers, and fact-checkers in implementing the Digital Services Act. Potential use cases include analysing ad repositories, verifying ad placements, building exclusion lists, assessing site trustworthiness, interfacing with centralised fact-check databases, and streamlining content flagging processes.
Effective solutions will require iterative testing, input from human moderators, and collaboration between stakeholders. If developed responsibly, new tools can increase the efficiency and efficacy of efforts to limit the impact of online disinformation in Europe and beyond.
The Digital Services Act (DSA) aims to increase transparency on how big tech companies like Google and Meta operate and to hold them accountable for their role in disseminating disinformation and other harmful content. The overall goal is to foster a secure digital environment in which users’ fundamental rights are safeguarded, balanced with fair competition for businesses.
The DSA is a landmark piece of legislation. Past efforts to make platforms pay for their business models’ negative social externalities have generally failed because they were caught up in the cul-de-sac debate on liability for third party content. Now regulators and researchers will be empowered to look under the algorithmic hood at how platforms promote and moderate content.
Under the DSA, the biggest online platforms and search engines, defined as those with more than 45 million monthly users in the EU, are designated as either “very large online platforms” (VLOPs) or “very large online search engines” (VLOSEs). So far, the European Commission has designated 19 services as exceeding this threshold. They must comply with a set of obligations to combat risks associated with different kinds of content, such as hate speech, violence against women, the sale of counterfeit and illegal goods, and disinformation.
Article 34 of the DSA obliges these services to conduct in-depth annual analyses of the severity and probability of the systemic risks stemming from their services. They must then put in place robust mitigation measures tailored to the specific risks identified. To curb these risks, the DSA suggests that platforms adapt their content moderation and recommender systems, cooperate with “trusted flaggers,” sign up to codes of conduct like the Code of Practice on Disinformation, and label false information and “deepfakes,” among other measures.
Designated sites must also share data with researchers and oversight bodies to monitor and assess compliance.[1] This is designed to help the public better understand the systemic risks posed by online platforms, how they can be confronted, and how effective platforms’ risk mitigation measures are. The result could be a stronger grasp on the sources, tactics and techniques used in disinformation, and a better understanding of the algorithms that recommend disinformation to users and their impact.
The designated platforms have had to comply with the new rules since 25 August 2023. They can expect to be penalised if their content moderation systems fail to suppress, or indeed amplify, harmful content. Fines for non-compliance can reach 6% of a company’s annual turnover. Repeated non-cooperation and non-compliance can lead to a ban on operating within the bloc.
| Category | Company | Digital Service | Type |
| --- | --- | --- | --- |
| Search | Alphabet | Google Search | VLOSE |
| Search | Microsoft | Bing | VLOSE |
| Social media | Alphabet | YouTube | VLOP |
| Social media | Meta | Facebook | VLOP |
| Social media | Meta | Instagram | VLOP |
| Social media | Bytedance | TikTok | VLOP |
| Social media | Microsoft | LinkedIn | VLOP |
| Social media | Snap | Snapchat | VLOP |
| Social media | Pinterest | Pinterest | VLOP |
| Social media | Twitter | Twitter | VLOP |
| App stores | Alphabet | Google App Store | VLOP |
| App stores | Apple | Apple App Store | VLOP |
| Wiki | Wikimedia | Wikipedia | VLOP |
| Marketplaces | Amazon | Amazon Marketplace | VLOP |
| Marketplaces | Alphabet | Google Shopping | VLOP |
| Marketplaces | Alibaba | AliExpress | VLOP |
| Marketplaces | Booking.com | Booking.com | VLOP |
| Marketplaces | Zalando | Zalando | VLOP |
| Maps | Alphabet | Google Maps | VLOP |
Source: Martin Husovec, The DSA Newsletter #3
The Digital Services Act itself is somewhat vague on how platforms should tackle disinformation. For the largest sites to demonstrate that they are mitigating risks, DSA Article 45 recommends that they sign up to the bloc’s voluntary codes of conduct, which go into detail on how different types of illegal content and systemic risks can be discouraged.
The Code of Practice on Disinformation, which was introduced in 2018 and further strengthened in 2022, is one of those codes and thus provides more concrete detail on how platforms should tackle disinformation. It contains 44 commitments and 128 specific measures to guide action. Forty-four companies have signed up, including players from the advertising ecosystem, fact-checkers, and organisations with specific expertise on disinformation.[2]
Twitter notably withdrew from the code in May 2023, prompting a strong reaction from the EU. “You can run but you can’t hide. Beyond voluntary commitments, fighting disinformation will be a legal obligation under #DSA as of 25 August 2023,” EU Internal Market Commissioner Thierry Breton tweeted. “Our teams will be ready for enforcement.”
Twitter leaves EU voluntary Code of Practice against disinformation.
But obligations remain. You can run but you can’t hide.
Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25.
Our teams will be ready for enforcement.
— Thierry Breton (@ThierryBreton) May 26, 2023
Although the code is voluntary, companies can therefore expect legal consequences if they are not seen to be doing enough to tackle disinformation. This will likely be determined by the degree to which they have met the performance indicators tied to the commitments and measures they pledged to uphold when signing up to the Code.
One area in which regulators want to see action is advertising, specifically the identification and blocking of disinformation actors from placing ads.
Practical measures in the DSA and Code aid in the identification of malign actors. Article 39 of the DSA, and Commitment 10 of the Code, mandate that platforms with advertising systems create searchable, real-time ad repositories. These repositories should ensure public access to advertisements and must contain information on the content, the entity on whose behalf it was presented, who was responsible for payment, and the intended audience and its reach, among other data points. The DSA mandates that the data be available for one year after the ad was served, while the Code requires that it be available for at least five years. The Code calls for platforms to create APIs that enable customised searches for data (Commitment 11) and tools and dashboards to enable civil society oversight (Commitment 12).
As a result, there are opportunities for AI-powered tools to be developed that scrape and analyse those repositories. This would permit researchers to gain a better grasp of disinformation trends in advertising that have negative impacts on society. It would also allow the identification of accounts that repeatedly publish misleading and deceptive ads so that appropriate action could be taken (Commitment 2). Such a tool could also help with the visualisation of ad-based disinformation trends to enable oversight.
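To make this concrete, the sketch below (in Python) shows one way such a tool might aggregate records exported from an ad repository to surface sponsors that repeatedly run ads rated as misleading. The record fields loosely mirror the data points required by Article 39 (content, sponsor, payer, reach); the `AdRecord` shape, the “misleading” rating and the threshold are illustrative assumptions, since each platform’s repository and API (Commitment 11) will define its own schema.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical shape of one ad-repository record, loosely modelled on the
# data points required by DSA Article 39 (content, sponsor, payer, reach).
@dataclass
class AdRecord:
    ad_id: str
    content: str
    sponsor: str           # entity on whose behalf the ad was presented
    payer: str             # who paid for the ad
    reach: int             # audience reached
    rating: str | None     # e.g. a fact-checker or classifier label, if any

def repeat_offenders(records: list[AdRecord],
                     flagged_rating: str = "misleading",
                     threshold: int = 3) -> list[tuple[str, int]]:
    """Return sponsors with at least `threshold` ads carrying the flagged rating."""
    counts = Counter(r.sponsor for r in records if r.rating == flagged_rating)
    return [(sponsor, n) for sponsor, n in counts.most_common() if n >= threshold]

if __name__ == "__main__":
    sample = [
        AdRecord("a1", "Miracle cure ...", "ACME Health", "ACME Health", 120_000, "misleading"),
        AdRecord("a2", "Local election info ...", "Civic Org", "Civic Org", 40_000, None),
        AdRecord("a3", "Miracle cure v2 ...", "ACME Health", "ShellCo Ltd", 95_000, "misleading"),
        AdRecord("a4", "Miracle cure v3 ...", "ACME Health", "ShellCo Ltd", 80_000, "misleading"),
    ]
    for sponsor, n in repeat_offenders(sample):
        print(f"{sponsor}: {n} flagged ads")
```

A production tool would layer the same aggregation over the searchable repositories and APIs the Code requires, and feed the output into the oversight dashboards envisaged in Commitment 12.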
Regulators also want to see concrete action to demonetise sites that spread disinformation. Advertising placed on such sites generates an estimated $2.6bn in annual revenue for those behind them.[3]
The complexity and opacity of the digital advertising supply chain facilitate this, and there are failures at each stage of the system.[4] Disinformation actors are able to monetise their sites by listing ad slots on ad exchanges, either directly or via supply-side platforms (SSPs) that automate the process for them. Both SSPs and ad exchanges are failing to exclude them, and currently have little incentive to clean up their act: more slots mean more monetisable impressions.[5] Demand-side platforms (DSPs), which automate advertisers’ bidding on ad slots in the exchange, are similarly not auditing the inventory on offer and excluding known disinformation sites. Brands and agencies are not looking systematically at where their ads are being placed, with overwhelmed executives often failing to read the placement reports sent to them by their SSPs.[6] This is an oversight on their part, as it is their duty to conduct due diligence on where their ads are published. Finally, many ad verification companies do not have sufficient AI tools to detect disinformation,[7] although some more advanced AI-powered solutions are available (see Oracle Moat’s contextual measurement tool).
As a result, the EU wants those involved in the selling of ad space (i.e., SSPs, ad exchanges and DSPs) to clean up their inventory to prevent the placement of advertising on sites that publish disinformation (Measure 1.1). Regulators want these actors to put in place stricter review mechanisms for sites and content submitted for monetisation (Measure 1.2), as well as measures to enable the verification of the landing pages of ads (Measure 1.1). They must give clients transparency on the placement of their advertising (Measure 1.3) and give third-party auditors access to their services and data to verify ad placement (Measure 1.5).
There is space for AI-powered tools to be developed that help those involved in the selling of ad space to assess their inventory, compile reports on ad placement, and help auditors (e.g., MRC or TAG) verify where ads are being placed. This would support the building of exclusion lists. Brands may also benefit from tools that automate the scanning and analysis of the placement reports sent to them by their SSPs, referencing them against databases of known disinformation sites (see the Global Disinformation Index; NewsGuard), as sketched below.
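A minimal sketch of the placement-report side of this follows, assuming a CSV report with `domain` and `impressions` columns and a locally maintained exclusion list of flagged domains. The file names, column names and list format are assumptions for illustration; real SSP reports and commercial ratings data (e.g. from the Global Disinformation Index or NewsGuard) would need their own parsers and licensed access.

```python
import csv

def load_exclusion_list(path: str) -> set[str]:
    """Load one flagged domain per line, e.g. 'example-disinfo.site'."""
    with open(path, encoding="utf-8") as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_placements(report_path: str, excluded: set[str]) -> list[dict]:
    """Return rows of a placement report whose domain appears on the exclusion list."""
    flagged = []
    with open(report_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].strip().lower().removeprefix("www.")
            if domain in excluded:
                flagged.append(row)
    return flagged

if __name__ == "__main__":
    excluded = load_exclusion_list("exclusion_list.txt")       # assumed local file
    for row in flag_placements("placement_report.csv", excluded):
        print(f"Ad served on excluded domain: {row['domain']} "
              f"({row.get('impressions', '?')} impressions)")
```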
The Code emphasises the importance of empowering platform users to detect and report disinformation, pointing them to authoritative sources, and raising media literacy.
It encourages signatories to offer features to inform users that content they interact with has been rated by an independent fact-checker (Measure 21.1). It also promotes the implementation of “trustworthiness indicators” developed by independent third parties that rate the integrity of the source or language used (Measure 22.1). Platforms should also offer users features that lead them to authoritative sources on topics (Measure 22.7) and provide fact-checkers with information to help them quantify the impact of fact-checked content (Measure 32.1).
The Code positions the creation of a centralised repository of fact-checks as a key enabler (Measure 31.3) and encourages signatories to explore technological solutions to facilitate the efficient use of the repository across platforms and languages (Measure 31.4).
There are opportunities for third-party tools or plugins to be developed that interface between platforms and the fact-check repository, flag content that has been fact-checked or rated “untrustworthy,” and point users to relevant articles within the repository. Such plugins could also send fact-checking organisations impact reports on the number of users clicking through to their articles.
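As a rough illustration of the matching step such a plugin might perform, the sketch below assumes the centralised repository can be queried as a list of fact-check entries, each with a claim, a verdict and an article URL, and uses a simple string-similarity heuristic to find the closest entry. The repository schema, the similarity threshold and the click-through counter are all assumptions; a production tool would need the repository’s real API and more robust claim matching.

```python
from dataclasses import dataclass, field
from difflib import SequenceMatcher

@dataclass
class FactCheck:
    claim: str         # the claim that was fact-checked
    verdict: str       # e.g. "false", "misleading"
    article_url: str   # link to the fact-checker's article

@dataclass
class RepositoryClient:
    """Stand-in for a client of the centralised fact-check repository (Measure 31.3)."""
    entries: list[FactCheck]
    clickthroughs: dict[str, int] = field(default_factory=dict)

    def match(self, content: str, min_similarity: float = 0.6) -> FactCheck | None:
        """Return the closest fact-check if it is similar enough to the content."""
        best, best_score = None, 0.0
        for fc in self.entries:
            score = SequenceMatcher(None, content.lower(), fc.claim.lower()).ratio()
            if score > best_score:
                best, best_score = fc, score
        return best if best_score >= min_similarity else None

    def record_clickthrough(self, fc: FactCheck) -> None:
        """Count users following the link, for fact-checker impact reports (Measure 32.1)."""
        self.clickthroughs[fc.article_url] = self.clickthroughs.get(fc.article_url, 0) + 1

if __name__ == "__main__":
    repo = RepositoryClient(entries=[
        FactCheck("5G masts spread the coronavirus", "false",
                  "https://factchecker.example/5g-coronavirus"),
    ])
    hit = repo.match("Post claims that 5G masts are spreading the coronavirus")
    if hit:
        repo.record_clickthrough(hit)
        print(f"Matched fact-check ({hit.verdict}): {hit.article_url}")
```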
Greater awareness of disinformation trends and the tactics, techniques and procedures used in disinformation campaigns can also contribute to user empowerment. Tools developed in response to the above needs could therefore also serve as media literacy hubs, curating guidance on how to evaluate online content (Measure 17.1) and research published on disinformation. Much of this research will be created by platforms themselves, which are encouraged to track and analyse influence operations and the spread of disinformation (Measure 29.1), and by “vetted researchers,” who under the DSA will be given access to platform data so they may conduct research that “contributes to the detection, identification and understanding of systemic risks” related to disinformation and other harmful content.
The oversight of VLOPs is mainly the European Commission’s responsibility, but national-level Digital Services Coordinators (DSCs) also play a role. These essential national bodies are scheduled to be established by February 2024.
Given the volume of data generated by the DSA in the form of “mandatory or voluntary transparency and evaluation reports, databases, activity reports, guidelines, codes of conducts and standards,” both the Commission and DSCs will need strong data science expertise to carry out their oversight roles.[8]
While the Commission will be supported by the European Centre for Algorithmic Transparency to ensure that algorithmic systems comply with the risk management, mitigation and transparency requirements in the DSA, it is less clear whether member states’ DSCs will immediately have the in-house capacities required of them. Their roles include checking the compliance of VLOPs established in their countries, approving or revoking trusted flagger status, vetting researchers requesting access to platform data, approving out-of-court dispute settlement bodies, and receiving and analysing complaints about possible infringements. As such, they will require broad expertise covering data science, content moderation, platform risk assessment, software engineering, behavioural psychology and law.[9] There may therefore be a need to outsource to third parties. For example, given that Alphabet, Amazon, Apple, Booking, ByteDance, Meta, Microsoft, Pinterest and Twitter are already established in Ireland, third parties looking to take advantage of potential outsourcing opportunities from the Irish DSC would do well to have a presence there.[10]
The DSA also requires civil society to play an oversight role. Users will be able to flag content via the “notice and action” tools that platforms must create (Article 16). Trusted Flaggers (Article 22) will also detect, identify and notify platforms of illegal content and have their notices prioritised. While platforms will create their own notice and action tools, Trusted Flaggers will need to standardise how they exchange notices with platforms to highlight illegal content, which may include disinformation.
Disputes are likely to occur over platforms’ decisions to remove or disable access to information; suspend or terminate service provision to users; suspend or terminate user accounts; or suspend, terminate or restrict users’ ability to monetise content.[11] When users have a dispute that cannot be settled via a platform’s internal complaint-handling mechanism, they may go to an out-of-court dispute settlement body (Article 21) and represent themselves directly or via a third party such as a Trusted Flagger. They will likely need dossiers of evidence to make their case to such a body. Trusted Flaggers will also likely need to keep evidence logs that can stand up to independent audits if doubts about their precision, accuracy or independence trigger an investigation.
There may therefore be a use case for digital tools that support the work of Trusted Flaggers. Such tools could encourage users to submit violation claims for further investigation; allow trusted flaggers to assess those claims, supported by AI; enable them to check whether content is present in an existing database of fact-checks or other content; create a detailed log of the content, claim, assessment, supporting evidence and action required; and standardise their submission of notices to VLOPs (see Tremau for an example).
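Under the same caveats, the sketch below illustrates the record-keeping core of such a tool: each reported claim becomes a logged case with an assessment, supporting evidence and an optional link to an existing fact-check, from which a standardised notice payload can be generated for submission to a platform’s notice-and-action channel. The case and notice fields are loosely inspired by the DSA’s notice requirements, but the exact schema any platform accepts is an assumption here.

```python
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class FlaggerCase:
    """One logged case in a trusted flagger's evidence trail."""
    case_id: str
    content_url: str
    claim: str                         # what was reported
    assessment: str                    # flagger's conclusion, e.g. "disinformation"
    evidence: list[str]                # links or references supporting the assessment
    matched_factcheck: str | None = None
    created_at: str = ""

    def __post_init__(self):
        if not self.created_at:
            self.created_at = datetime.now(timezone.utc).isoformat()

def build_notice(case: FlaggerCase, flagger_name: str) -> str:
    """Serialise a case into a JSON notice for submission to a platform.
    Field names are illustrative, not a platform-mandated schema."""
    notice = {
        "submitted_by": flagger_name,
        "content_location": case.content_url,
        "explanation": case.claim,
        "assessment": case.assessment,
        "supporting_evidence": case.evidence,
        "related_factcheck": case.matched_factcheck,
        "timestamp": case.created_at,
    }
    return json.dumps(notice, indent=2)

if __name__ == "__main__":
    case = FlaggerCase(
        case_id="2023-0001",
        content_url="https://platform.example/post/123",
        claim="Post claims ballots in region X were destroyed",
        assessment="disinformation",
        evidence=["https://factchecker.example/ballots-region-x"],
        matched_factcheck="https://factchecker.example/ballots-region-x",
    )
    print(build_notice(case, flagger_name="Example Trusted Flagger"))
```

Keeping such logs in a structured, exportable form would also help Trusted Flaggers demonstrate the precision, accuracy and independence of their notices if audited.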
As the information world changes at lightning speed, there is a need for standards, guidelines and laws to help both consumers and businesses navigate their way forward. The EU’s Digital Services Act and strengthened Code of Practice on Disinformation open up significant opportunities for the development of new tools and technologies to tackle online disinformation.
As detailed in this brief, there is clear potential for solutions leveraging AI and automation to support various stakeholders tasked with implementing these regulations. From aiding oversight bodies with enforcement to assisting platforms, advertisers and fact-checkers with adhering to transparency, demonetisation and content moderation requirements, intelligent software solutions can increase the efficiency and effectiveness of efforts to curb the spread and impact of false and misleading content online.
Specifically, developers and researchers should consider exploring tools that can automatically analyse ad repositories, verify ad placements, build exclusion lists, assess site trustworthiness, interface with centralised fact-check databases, and streamline content flagging processes. Methodical testing and iterative improvements will be key to creating robust, unbiased and user-centric applications, as will human input in the content flagging and moderation process.
The EU Digital Services Act and strengthened Code of Practice on Disinformation are both sorely needed and long overdue. Close collaboration between technology providers, platforms, advertisers, fact-checkers, academics and advocacy groups will also be instrumental to developing solutions that meet the complex challenges outlined in this brief. With sound innovation and multi-stakeholder cooperation, impactful progress can be made against disinformation through the frameworks established by these regulations.
[1] DSA Article 40
[2] See Annex 1
[3] https://www.newsguardtech.com/special-reports/brands-send-billions-to-misinformation-websites-newsguard-comscore-report/
[4] https://www.promarket.org/2019/07/02/how-the-adtech-market-incentivizes-profit-driven-disinformation/
[5] https://www.amobee.com/podcast/misinformation-in-advertising/
[6] https://www.forrester.com/what-it-means/ep273-marketing-funding-misinformation/
[7] https://www.newsguardtech.com/special-reports/brands-send-billions-to-misinformation-websites-newsguard-comscore-report/
[8] https://www.disinfo.eu/publications/room-for-improvement-analysing-redress-policy-on-facebook-instagram-youtube-and-twitter/
[9] https://dsa-observatory.eu/2023/03/10/here-is-why-digital-services-coordinators-should-establish-strong-research-and-data-units/
[10] Ibid.
[11] Ibid.