Oversight of VLOPs is primarily the responsibility of the European Commission, although national Digital Services Coordinators (DSCs) also play a role. Member states are required to designate these national bodies by February 2024.
Given the volume of data generated by the DSA in the form of “mandatory or voluntary transparency and evaluation reports, databases, activity reports, guidelines, codes of conducts and standards,” both the Commission and DSCs will need strong data science expertise to carry out their oversight roles.
While the Commission will be supported by the European Centre for Algorithmic Transparency in ensuring that algorithmic systems comply with the DSA's risk management, mitigation and transparency requirements, it is less clear whether member states' DSCs will immediately have the in-house capacity required of them. Their roles include checking the compliance of VLOPs established in their countries, approving and revoking trusted flagger status, vetting researchers requesting access to platform data, approving out-of-court dispute settlement bodies, and receiving and analysing complaints about possible infringements. As such, they will require broad expertise covering data science, content moderation, platform risk assessment, software engineering, behavioural psychology and law. There may therefore be a need to outsource to third parties. For example, given that Alphabet, Amazon, Apple, Booking, ByteDance, Meta, Microsoft, Pinterest and Twitter are already established in Ireland, third parties looking to take advantage of potential outsourcing opportunities from the Irish DSC would do well to have a presence there.
The DSA also requires civil society to play an oversight role. Users will be able to flag content via the “notice and action” tools that platforms must create (Article 16). Trusted Flaggers (Article 22) will also detect, identify and notify platforms of illegal content and have their notices prioritised. Because each platform will build its own notice and action tools, Trusted Flaggers will need to standardise how they exchange notices with platforms when highlighting illegal content, which may include disinformation.
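As an illustration, a standardised notice exchanged between a Trusted Flagger and a platform might be serialised as a small, structured payload. The sketch below assumes a JSON format and hypothetical field names (no agreed industry standard is implied); it simply mirrors the elements Article 16 requires a notice to contain: the exact location of the content, a substantiated explanation of why it is considered illegal, the submitter's identity and a good-faith statement.

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone
import json


@dataclass
class Notice:
    """Illustrative notice schema (field names are hypothetical, not a standard).

    It covers the elements the DSA requires a notice to contain: the exact
    electronic location of the content, substantiated reasons why it is
    considered illegal, the submitter's identity and a good-faith statement.
    """
    content_url: str            # exact electronic location of the item
    explanation: str            # substantiated reasons the content is illegal
    legal_basis: str            # the national or EU law relied upon
    submitter_name: str
    submitter_email: str
    good_faith_statement: bool  # confirmation the notice is accurate and complete
    submitted_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def to_notice_json(notice: Notice) -> str:
    """Serialise a notice for submission via a platform's notice-and-action tool."""
    return json.dumps(asdict(notice), indent=2)


example = Notice(
    content_url="https://example.com/post/123",
    explanation="Post sells counterfeit goods in breach of trademark law.",
    legal_basis="Directive 2004/48/EC (IP enforcement)",
    submitter_name="Example Trusted Flagger NGO",
    submitter_email="notices@example.org",
    good_faith_statement=True,
)
print(to_notice_json(example))
```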
Disputes are likely to arise over platforms' decisions to remove or disable access to information; to suspend or terminate the provision of the service to users; to suspend or terminate user accounts; or to suspend, terminate or restrict users' ability to monetise content. When users have a dispute that cannot be settled via a platform's internal complaint-handling mechanism, they may turn to an out-of-court dispute settlement body (Article 21), representing themselves directly or via a third party such as a Trusted Flagger. They will likely need dossiers of evidence to make their case to such a body. Trusted Flaggers will likewise need to keep evidence logs that can stand up to independent audit if doubts about their precision, accuracy or independence trigger an investigation.
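To make the idea of an audit-proof evidence log concrete, the sketch below assumes a simple append-only, hash-chained log (a design choice for illustration, not something the DSA prescribes): each entry embeds the hash of the previous one, so an independent auditor can detect any retroactive alteration by recomputing the chain.

```python
import hashlib
import json
from datetime import datetime, timezone


class EvidenceLog:
    """Append-only, hash-chained log of flagging decisions (illustrative only)."""

    def __init__(self):
        self.entries = []

    def append(self, content_url: str, claim: str, assessment: str, evidence: list) -> dict:
        """Record one decision: what was flagged, why, and the supporting evidence."""
        prev_hash = self.entries[-1]["entry_hash"] if self.entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "content_url": content_url,
            "claim": claim,            # what the flagger alleges (e.g. "illegal hate speech")
            "assessment": assessment,  # the flagger's conclusion and reasoning
            "evidence": evidence,      # links to archived copies, screenshots, fact-checks
            "prev_hash": prev_hash,    # chains this entry to the one before it
        }
        entry_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        entry = {**body, "entry_hash": entry_hash}
        self.entries.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash; returns False if any entry has been altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if body["prev_hash"] != prev:
                return False
            expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
            if expected != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True
```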
There may therefore be a use case for digital tools that support the work of Trusted Flaggers. Such tools could: encourage users to submit violation claims for further investigation; allow Trusted Flaggers to assess those claims, supported by AI; enable them to check whether content is already present in an existing database of fact-checks or other content; create a detailed log of the content, claim, assessment, supporting evidence and action required; and standardise the submission of notices to VLOPs (see Tremau for an example).
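A minimal sketch of how the claim-triage step of such a tool might fit together is shown below. The helpers (FACTCHECK_DB, classifier_score) are hypothetical stand-ins for a real fact-check database and an AI classifier, and nothing here reflects Tremau's actual product.

```python
from dataclasses import dataclass


@dataclass
class UserClaim:
    """A violation claim submitted by a user via a Trusted Flagger's intake form."""
    content_url: str
    claim_text: str


# Hypothetical fact-check database: maps content URLs to existing verdicts.
FACTCHECK_DB = {
    "https://example.com/post/123": "Debunked by an EDMO-affiliated fact-checker",
}


def classifier_score(claim: UserClaim) -> float:
    """Stand-in for an AI model estimating how likely the content is violating.

    A real tool would call a trained classifier; a fixed score is returned here
    so the example runs end to end.
    """
    return 0.85


def triage(claim: UserClaim) -> dict:
    """Assess a user claim: check the fact-check database, score it with the model,
    and decide whether a human reviewer should draft a notice to the platform."""
    prior_factcheck = FACTCHECK_DB.get(claim.content_url)
    score = classifier_score(claim)
    return {
        "content_url": claim.content_url,
        "claim": claim.claim_text,
        "existing_factcheck": prior_factcheck,
        "model_score": score,
        "action": "escalate_to_reviewer" if (prior_factcheck or score > 0.8) else "dismiss",
    }


result = triage(UserClaim("https://example.com/post/123",
                          "This post spreads a known false cure."))
print(result)
```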