Metrics and key performance indicators

Authors: Jordan Hill, Malin Carlberg, Richard Procee, Iva Plasilova, Marion Goubet
Pages: 89-95
Assessment of the implementation of the Code of Practice on Disinformation
Directorate-General for Communications Networks, Content and Technology
CHAPTER 7: METRICS AND KEY PERFORMANCE INDICATORS
This chapter discusses metrics and key performance indicators proposed by the
research team as suitable for measuring change in the phenomenon of disinformation.
As such, this chapter attempts to address one of the key criticisms of the Code of
Practice, which is that in its current form, it is not possible to measure objectively the
progress of the Code’s implementation by each of its signatories.
At the time of the Code's initial implementation, a set of draft monitoring indicators was proposed and adopted; however, these were not designed to detect medium- or longer-term results or outcomes with regard to the Code's commitments. Rather, their purpose was to guide the annual self-assessment reports completed by the signatories.
Several steps can be taken to strengthen future assessments of the Code and of disinformation more generally. These can draw on additional quantitative data that was not collected during 2019, the Code's first year of implementation.
Firstly, the logic model and evaluation question framework, developed and validated as part of this study's Evaluation Plan, constitute two basic methodological tools for future research, evaluation and monitoring. Naturally, as the Code evolves and brings on board new Signatories, these tools will need to be re-tested and, where necessary, adapted before being re-deployed. In this regard, the logic model and evaluation framework should be considered 'living documents' that document and reflect the aims, activities and outcomes of the Code.
Secondly, deploying a robust set of Key Performance Indicators (KPIs) would also help
address the somewhat piecemeal and differentiated implementation (i.e. across the
Pillars, across platforms and across Member States), which is one of the key findings of
this report.
Regular assessment of the KPI data, in addition to regular monitoring, would allow the Signatories to review their performance against the KPIs in real time and, when necessary, to adapt their activities. Thus, the KPIs would constitute a tool for gauging progress on an ongoing basis, while also providing evidence for more comprehensive evaluations of the Code's effectiveness. These KPIs would also allow monitoring of the development and spread of disinformation itself.
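As a purely illustrative sketch (not part of the report's methodology), the real-time review described above amounts to comparing each KPI's latest monitored value against an agreed target and flagging shortfalls; all KPI names, pillars and figures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class KPI:
    """A hypothetical KPI record for one Signatory under one Pillar."""
    name: str
    pillar: str       # e.g. "Pillar I" (scrutiny of ad placements)
    target: float     # agreed target value for the reporting period
    observed: float   # latest value from regular monitoring

    def on_track(self) -> bool:
        """True if the observed value meets or exceeds the target."""
        return self.observed >= self.target

# Hypothetical monitoring data for one reporting period
kpis = [
    KPI("ads demonetised (%)", "Pillar I", target=90.0, observed=93.5),
    KPI("fake accounts removed (%)", "Pillar III", target=95.0, observed=88.0),
]

# Flag KPIs where a Signatory would need to adapt its activities
needs_attention = [k.name for k in kpis if not k.on_track()]
print(needs_attention)  # ['fake accounts removed (%)']
```

The point of the sketch is only that ongoing KPI assessment turns the Code's monitoring data into an actionable signal per Signatory and per Pillar, rather than a once-a-year self-assessment.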
Generally, when policymakers or evaluators design KPIs, they should adhere to the RACER criteria. This means that, in order to be effective, KPIs should be Relevant, Accepted, Credible, Easy and Robust:
Relevant: the data collected should be relevant to the objectives set (i.e.
encompass all five pillars of the Code);
