DigCompSAT: A Self-reflection Tool for DigComp
Authors: Castaño, Jonatan; Centeno, Clara; Jakobsone, Mara; Kluzer, Stefano; Troia, Sandra; Vuorikari, Riina; Cabrera, Marcelino; O'Keeffe, William; Zandbergs, Uldis; Clifford, Ian; Punie, Yves
Pages: 47-49

Chapter 6. FINAL ITEM BANK FEATURES AND CONSIDERATIONS FOR ITS FUTURE USE
6.1 The Item Bank
The median test-taking time in Pilot 2 was significantly shorter than in Pilot 1: 23 minutes, compared to 27 minutes for the first pilot.7 The revised Item Bank therefore matches the desired test-taking duration even better.
At the same time, the statistical analysis gives very good results. The Cronbach's Alpha value for the SAT total in Pilot 2 is an excellent 0.987, and the values for the five areas are all above 0.93.
Besides, the other statistical indicators from the Pilot 2 results suggest that no items definitely need to be removed:
none reached the 80% difficulty threshold that we used in Pilot 1;
none was at or below the 0.2 minimum acceptable level of the discrimination index (the lowest value, observed for one item, is above 0.3);
and correlation analysis pointed at items that might be considered redundant, but a much larger test would be needed to reach more stringent conclusions.
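The report does not include the analysis scripts, so as a purely illustrative sketch, the three indicators above (Cronbach's Alpha for internal consistency, item difficulty as the share of the maximum score reached, and item discrimination as the corrected item-total correlation) can be computed from a respondents × items score matrix along these lines. The function names, the response matrix, and the choice of corrected item-total correlation as the discrimination measure are assumptions for this example, not the project's actual code:

```python
# Illustrative item-analysis sketch (standard library only).
# Input: `responses` is a list of rows, one per respondent,
# each row holding that respondent's score on every item.
from statistics import mean, pvariance


def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)


def cronbach_alpha(responses):
    """Cronbach's Alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
    k = len(responses[0])  # number of items
    item_vars = [pvariance(col) for col in zip(*responses)]
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)


def difficulty(responses, max_score):
    """Mean score per item as a share of the maximum; the text flags items above 80%."""
    return [mean(col) / max_score for col in zip(*responses)]


def discrimination(responses):
    """Corrected item-total correlation: each item against the total of the
    remaining items; the text treats values at or below 0.2 as unacceptable."""
    totals = [sum(row) for row in responses]
    result = []
    for col in zip(*responses):
        rest = [t - x for t, x in zip(totals, col)]
        result.append(pearson(col, rest))
    return result
```

On a tiny made-up matrix of perfectly consistent answers, `cronbach_alpha` returns 1.0 and every discrimination value is 1.0, which is the expected upper bound for both indicators.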
The analysis of unclear items identified 2 items for which a relatively large number of respondents chose the answer "The question is unclear to me":
Q48 "I am interested in understanding how a task can be broken down into steps so that it can be automated." (68 respondents found it unclear);
Q40 "I am keen to create new digital content by mixing and modifying existing resources." (41 respondents found it unclear).
Interestingly, both are Attitude items, and it is difficult to assess what such high response rates reflect: respondents may have flagged the items unclear because they did not have a good enough level of digital competence to understand what they meant, because they lacked the additional context/information needed to understand what they referred to (Q48 in particular relates to competence 3.4 Programming), because the items were worded ineffectively, or a mix of these reasons.
In any case, in the light of the above considerations, we agreed with the JRC to keep the Pilot 2 Item Bank as the final one, but to improve the statements of Q40 and Q48 and to make some changes (different wording, added examples and context) to another 5 items that got a relatively high rate of "unclear to me" answers (Q5, 34, 39, 62, 78), with the aim of enhancing their understandability.
The final version of the Item Bank, with 82 statements and the agreed revisions, can be found in Annex 13 - Final Item Bank in English, Latvian and Spanish.
6.2 The DigCompSAT Report
The screenshot below (Figure 5) shows an extract of the final "Digital Competences SAT report" that was presented to users after completion of the SAT in Pilot 2. All competences and competence areas were covered in this way. Colour-coding, with simple associated statements, indicated the competences that users should focus on. In the User Experience feedback, 62 users felt that the level of detail provided was "just right". 19 felt there was "a bit too much detail", although this may be because the report wording felt repetitive, especially if the user was largely on the same level across all areas or competences. Conversely, 13 felt that it was "a little short on detail".
7 The fact that the mean time was 34 minutes (only slightly lower than in Pilot 1) reflects the fact that many respondents at home (especially younger people) most likely interrupted test-taking more than once, leading to unrealistic durations.
