Annex 6 - Pilot 1: Focus Groups Report
Authors: Castaño, Jonatan; Centeno, Clara; Jakobsone, Mara; Kluzer, Stefano; Troia, Sandra; Vuorikari, Riina; Cabrera, Marcelino; O'Keeffe, William; Zandbergs, Uldis; Clifford, Ian; Punie, Yves
Source: DigCompSAT: A Self-reflection Tool for DigComp, pp. 91-93
Introduction
This report was written by ICS Skills and provides feedback from two focus groups, held on the 28th and 30th of January 2020, as well as feedback from a number of small discussion groups. Focus groups were held with 15 participants from both the 16-24 cohort and the 25-54 cohort (34 people in total). Participants who attended the discussion groups were of mixed ages and backgrounds. Notes were taken by the facilitator. A number of themes were identified from the discussions, and these are outlined in the main body of this report.
Methodology
Each of the focus groups followed the same interactive format and undertook the same discussions. They focused on the following questions, which were provided by the project manager, All Digital:
1. Were there any understandability issues with the questions? (What about the clarity of the questions? How far did you understand what they are asking for?)
Note: this question was slightly altered in the field to a question about the use of language, including sentence structure and clarity of meaning.
2. What do you think about the difficulty of the questions? Were the questions easy for you to answer? (Can you give examples? Why were some questions hard to answer?)
3. What do you think about the length of the questionnaire?
4. How much did you think this questionnaire helped you to understand the range of digital skills that
you could learn?
5. What kind of changes would you suggest for the system? Why?
The discussions were undertaken in two large groups and one small group; notes were recorded by the facilitator. All those who attended the focus groups participated fully in the discussions and welcomed the opportunity to do so. They clearly shared a strong interest in the assessment tool and in ensuring that it was fit for purpose.
Outcomes and key themes
The following sections summarise the themes that were identified by the groups. There was a great deal of commonality in the comments made by the different participants in the discussions.
1. Clarity of the Questions
All of the participants had some issue with the language used in the questions. The variation could be attributed to age and education. For example, some younger participants said they felt that it was ‘too technical at times’, or at least they felt that this was why they had difficulty with comprehension. When pressed for an example, one young participant cited question 31 (“I understand the process that leads to the development of a sequence of understandable instructions that will be implemented in a given programming language”). Another cited question 104 (“I can manage my online reputation using SmartR(r) application”). What was interesting about this is that older participants also had issues with these questions, but for different reasons. Question 104 was one of the fake questions, so it unsurprisingly received a large number of complaints. However, many older, well-educated participants also flagged question 31 for its phrasing.
Numerous participants remarked on questions which they had to re-read several times in order to understand, and they found this extremely irritating. Examples include questions 9, 41 (both “fake” items) and 51.
One participant took issue with the use of the term “I ought to” (Q95); she felt it would be “better to use ‘I should’ as no-one talks like that anymore”! The other participants agreed with this, and they also took issue with the word “tinkering”, which they felt was both “inappropriate and unprofessional”. This brought them to