
ENISA – awareness raising study – what it can tell us

September 6th, 2007


Under Working Programme 2.1.2, ENISA budgeted EUR 23,000 for an awareness study.
Who did the study: PricewaterhouseCoopers LLP (PwC)
Who participated in the study: some firms from 9 European countries
What new ideas and facts does the study reveal: interesting ones indeed – see below.

We have pointed out in various postings how important awareness raising is. Now ENISA has released a study that deals with this issue. ENISA and its Management Board should be applauded for embarking on this important journey. The study was conducted between May and July 2007. Have a look at this report: it provides an important contribution to the current discussion about how we could improve the measures used to assess the effectiveness of awareness programs in the information security domain.
Some questions and facts about the RESEARCH METHODOLOGY used in the study

For each focus area below, we summarize what the survey says and the challenge it raises.

Sample
Survey says: a convenience sample, i.e. not a representative selection (see Preface).
Challenge is: we cannot generalize from the findings.

Organizations
Survey says: 67 public and private organizations, ranging from fewer than 50 to more than 10,000 employees.
Challenge is: because responses were not controlled for type and size of organization, we cannot tell from the report how these factors may have affected the answers.

Survey instrument
Survey says: no copy of the instrument is published in either the printed or the electronic version of the report. From the results reported, it seems the survey required respondents to make a choice – positive or negative – for each question. The report does not say whether the respondents filled out the survey themselves, or whether the person interviewing them did so on site or via telephone.

So far ENISA has not made the survey instrument public.

Not giving respondents a natural mid-point in a scale is not recommended. A mid-point can be provided by using a five-point or seven-point scale (mid-point = 3 or 4, respectively). With a mid-point, the anchor would then be something like 'undecided', so respondents are not forced to either agree or disagree.

This weakens the validity of the findings unnecessarily.
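The scale design discussed above can be sketched in a few lines. This is a hypothetical illustration (the actual ENISA instrument was not published), contrasting a forced binary choice with an odd-numbered Likert scale whose neutral mid-point falls in the middle:

```python
# Hypothetical illustration of the scale issue discussed above: a forced
# binary choice vs. a five-point Likert scale with a neutral mid-point.

forced_choice = ["agree", "disagree"]  # no neutral anchor: respondents must pick a side

likert_5 = {
    1: "strongly disagree",
    2: "disagree",
    3: "undecided",      # natural mid-point = 3 on a five-point scale
    4: "agree",
    5: "strongly agree",
}

def midpoint(scale_points: int) -> int:
    """Return the mid-point of an odd-numbered Likert scale."""
    if scale_points % 2 == 0:
        raise ValueError("even-numbered scales have no natural mid-point")
    return scale_points // 2 + 1

print(midpoint(5))  # → 3
print(midpoint(7))  # → 4
```

The point is simply that an odd number of response options yields a defensible 'undecided' anchor, whereas an even-numbered or binary format does not.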

Interviews conducted
Survey says: it seems the 12 people interviewed were selected from the 67 respondents who had previously filled out the survey. According to what the report tells us, it is not clear whether structured interviewing techniques (a pre-set list of questions, the same for all respondents) or non-structured ones (a free-for-all) were used.

Were the interviews taped and transcribed later on, or did the interviewer take notes by hand?

Challenge is: based on the 12 'case studies' undertaken, it is unclear how the researchers could claim to have discovered key performance indicators (KPIs), or quantitative and qualitative assessment methods, for investigating the effectiveness of awareness-raising programs (as stated in the Preface). Qualitative analysis of interview data is extremely time-consuming; considering the budget size, we are not sure this work could be done satisfactorily.
Who was surveyed or interviewed?
Survey says: nowhere does the report tell us much, if anything, about such socio-demographic variables as the age of respondents, position held, highest education achieved, or industry of the firm (e.g., finance, manufacturing, services, government).

The Preface states: '…structured questionnaire. This was made available on a self-select basis to people responsible for information security in European government departments and companies.'

Challenge is: in a large corporation it is rarely the people in information security who run awareness campaigns. Even where those individuals are responsible, they tend to do this important work together with other experts from human resources, communication, the compliance office, and the privacy officer. In a small organization, it may just be the system administrator or the human resource manager.
Not controlling for socio-demographic variables makes assessing the significance of the findings a challenge.
Cross-national issues
Survey says: the Preface states that 'The study has focused on cultural change, …' and on how assessment methods (qualitative and quantitative) 'can contribute to the development of a wider culture of security.' Respondents came from 9 countries; the report does not say how the 67 respondents and the 12 interviewees were distributed across these nations.

Also, how the instrument was translated to avoid possible bias and misunderstandings is not known.

Challenge is: not distinguishing between culture and cross-national issues confuses the matter – the study addresses cross-national issues at best. It is not clear whether, or how, control variables were used to assess differences across countries. Regulation and standards in financial services will make things different compared to, say, the food industry. Unfortunately, it is impossible to see from the results how this issue was addressed.
Non-response bias?
Survey says: the report provides no information telling us whether the responding organizations differed in any way from those that refused to participate, or that did not respond by simply ignoring the request to fill out the survey.
Challenge is: neither do we know how the participants were selected, nor what percentage of those asked actually responded. These data are provided in research as standard practice, for reasons that go beyond the scope of this post.
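To see why the missing figures matter, here is the response-rate arithmetic the report should allow a reader to do. The invited count below is purely invented, since the report does not disclose how many organizations were contacted; only the 67 respondents are from the report:

```python
# Hypothetical numbers – the report does not disclose how many organizations
# were invited, so the invited count below is purely illustrative.

invited = 200        # assumed number of organizations contacted (NOT in the report)
responded = 67       # number of organizations that filled out the survey (from the report)

response_rate = responded / invited * 100
print(f"response rate: {response_rate:.1f}%")  # → response rate: 33.5%
```

Without the denominator, readers cannot judge whether 67 respondents represent a strong response or a small, possibly biased, self-selected fraction.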

If we go through the survey, one sample question published on p. 5 reads as follows:

'How important or unimportant is it to your business to ensure that staff are aware of each of the following information security topics or risks?'

Most respondents (see p. 5) seem to indicate that it is important or very important. But what does this tell us – will it reduce the incident rate of data security breaches, or of PCs infected by rootkits? Moreover, since this is what is called a double-barreled question (see Seymour Sudman and Norman M. Bradburn), we do not know whether the respondents answered with regard to security topics or to risks.

Either 'risks' or 'security topics' should have been used in the question, but not both terms.

On p. 12 we see findings regarding another question that was posed:

'What techniques have proved effective at raising information security awareness?' (the answer categories included classroom training, induction process/appointment letter, security policy/staff handbook, poster campaigns, regular e-mail or newsletter, and so on).

Again, the above question is very important indeed. But how is one to interpret the findings? What do we mean by 'effective' – a reduction in computer virus infections, or something else? In fact, it could mean something different for every respondent. How, then, can their answers be compared or yield any meaningful results?
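To make the point concrete, here is one way 'effective' could have been operationalized as a measurable quantity. All figures are made up for illustration; the report provides no such data:

```python
# A sketch (with invented numbers) of one measurable definition of 'effective':
# the relative reduction in virus infections after an awareness campaign.

infections_before = 40   # hypothetical count in the quarter before the campaign
infections_after = 28    # hypothetical count in the quarter after the campaign

reduction_pct = (infections_before - infections_after) / infections_before * 100
print(f"infection reduction: {reduction_pct:.0f}%")  # → infection reduction: 30%
```

Had the survey anchored 'effective' to a definition like this, answers from different respondents would at least have referred to the same quantity.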


We have pointed out before that Key Performance Indicators or KPIs are critical when looking at awareness campaigns and prevention of vulnerability exploits and malware infections:

EISAS and ENISA – biggest challenge are the Key Performance Indicators – KPIs

However, as we have also stressed previously, methodology is an important issue in any study that tries to explore what kind of measures should be used. Addressing methodology properly from the start will enable one to develop KPIs that make sense AND can be used across various contexts (e.g., industry, country, size of firm):

Security metrics – does methodological preciseness help?

Page 20 is the most important page of the report, but it is perhaps also the most dissatisfying one: it promises much, yet it seems to fail to deliver. The KPIs presented are vague and based on one question presented on p. 19:

What metrics have proved effective at measuring the success of information security awareness activities?

Response categories are:

Number of security incidents due to human behavior, Audit findings, Results of staff surveys, Tests of whether staff follow correct procedures, Number of staff completing training, Qualitative feedback from staff, etc.

This does not appear to give one much information to be able to put forward KPIs that can be used across contexts (e.g., type of organizations, SMEs and multinationals, countries, language regions, etc.).
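What a context-independent KPI might look like can be sketched as follows. The figures and the normalization choice (incidents per 100 employees) are our own illustrative assumptions, not taken from the report:

```python
# Hypothetical sketch: normalizing a raw incident count by headcount so that
# an SME and a multinational can be compared on the same KPI. All figures
# below are invented for illustration; the report provides no such data.

def incidents_per_100_staff(incidents: int, staff: int) -> float:
    """KPI: security incidents due to human behavior, per 100 employees."""
    return incidents / staff * 100

sme = incidents_per_100_staff(incidents=4, staff=45)                  # small firm
multinational = incidents_per_100_staff(incidents=220, staff=11000)   # large firm

print(f"SME: {sme:.1f} incidents per 100 staff")            # → SME: 8.9 incidents per 100 staff
print(f"Multinational: {multinational:.1f} per 100 staff")  # → Multinational: 2.0 per 100 staff
```

Raw counts alone would make the multinational look far worse; a normalized rate is the kind of KPI that could travel across organization sizes, industries, and countries – precisely what the report's categories do not provide.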

We will tell you more about the study soon. We provided ENISA with a pre-publication copy of this posting via e-mail and got some very helpful answers back. We will share some of them with you in the next post. So stay tuned.


Press release (2007-08-22). How to measure success? ENISA presents the 1st report on current EU practices and assessing the success of information security awareness raising activities.

Information security awareness initiatives: Current practice and the measurement of success (July 2007). Heraklion, Crete: European Network and Information Security Agency (ENISA).




