How Are We Evaluating our Trainings?

Credits: Dirk Slater, Natasha Msonza, Tawanda Mugari, Erin Murrock. Last updated: 2016-03

This resource is adapted from the synthesis of the first LevelUp Trainers' Webinar, hosted and facilitated by LevelUp and FabRiders. The session, which took place in February 2016, looked at the different methods trainers currently use to assess the impact of their workshops on the safe digital behaviours and habits of their participants.


Trainer’s Note

The original post this resource was adapted from can be found on the FabRiders FabBlog; the audio recording and slides from the webinar session itself can be found here.



You may also find the article on Encouraging Ongoing Learning And Engagement from the Security Education Companion to be useful.

What Are We Evaluating?

Establishing whether a digital security training has been effective is challenging: it can be hard to prove that the tools and methodologies learned during a training have actually been put to use.

It’s difficult to show a correlation between training, behaviour change, and increased security. Likewise, when certain actions or behaviours put individuals and those in their networks at equal risk, it’s difficult to ascertain how secure any of them actually are. Relying solely on quantifiable, metrics-driven information is of limited help in understanding training effectiveness in a digital security context.

What Is the Change Trainers Hope to Contribute To?

It’s important for digital security trainers to clearly articulate what they are trying to achieve. Are they:

  • Trying to help a community become safer/more secure?
  • Helping individuals understand how certain technologies or behaviours can make them and their networks vulnerable?
  • Showing how behaviour increases risk and contributes to vulnerability?
  • Increasing their competency with a tool? (And helping them get their networks to use it as well?)

Critical to understanding your beneficiaries is awareness of the current political, cultural, and infrastructural contexts they operate in. What are their goals? Are they fighting for rights, or advocating for changes in laws or policy? Are they trying to end violence or conflict?

Indicators of Success and Change

Above all, it’s important for trainers to help participants understand how digital security tools and methodologies intersect with, and impact, their ability to achieve change.

We also need to establish indicators of success for providing digital security training:

  • Is greater confidence in their ability to use technologies safely an indicator?
  • Do we want them to become advocates for security and be able to engage and teach others?
  • Is there a connection between security and the capacity to accomplish change?
  • Do we want them to consider us an integral member of their network or community?

Real World Examples

Example 1: Digital Society of Zimbabwe

Natasha and Tawanda, both digital security trainers based in Zimbabwe, are the co-founders of the Digital Society of Zimbabwe (DSZ). DSZ strives to support all citizens of Zimbabwe in becoming more digitally resilient; their work and their trainings are informed by the specific threats that human rights defenders face daily in their country.

Natasha and Tawanda shared the approach they use to gauge the effectiveness of their trainings, which comprises three layers of evaluation:

Layer 1

This layer focuses on immediate outcomes of the training itself: an assessment conducted at the beginning and at the end of the training measures changes in participants’ levels of knowledge.
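As a loose illustration (the webinar itself doesn’t prescribe any particular scoring method), the results of such a pre-/post-assessment can be summarised with a simple score comparison. The short Python sketch below computes a per-participant “normalised gain”, i.e. the fraction of the possible improvement a participant actually achieved; the quiz data and the formula are illustrative assumptions, not part of DSZ’s methodology.

```python
# Minimal sketch: summarising pre/post training-assessment scores.
# The participant data and the normalised-gain formula are illustrative
# assumptions; they are not a method prescribed by DSZ or the webinar.

def normalised_gain(pre: float, post: float, max_score: float) -> float:
    """Fraction of the possible improvement actually achieved:
    (post - pre) / (max_score - pre). Returns 0.0 when there was
    no room to improve (pre == max_score)."""
    if pre == max_score:
        return 0.0
    return (post - pre) / (max_score - pre)

# Hypothetical scores for three participants on a 10-question quiz,
# taken at the start and at the end of a workshop.
MAX_SCORE = 10
scores = {
    "participant_a": (4, 8),
    "participant_b": (6, 9),
    "participant_c": (9, 9),
}

for name, (pre, post) in scores.items():
    gain = normalised_gain(pre, post, MAX_SCORE)
    print(f"{name}: pre={pre}, post={post}, normalised gain={gain:.2f}")
```

A consistent summary metric like this makes Layer 1 results comparable across trainings, but it only measures knowledge on the day; whether that knowledge translates into changed behaviour is what Layer 3 probes.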

Layer 2

This layer goes deeper, using needs-assessment and threat-modelling exercises during the training itself. These are often the first exercises they run, and their outcomes often determine the content of the rest of the training.

Layer 3

This layer looks at how learnings have been applied, and at the level of behaviour change resulting from the training. These assessments are conducted some time after the training has ended (for example, 3 months later), and often take the form of a drop-in visit. They find it’s better to keep these visits informal, as people are then more comfortable being candid about their digital security behaviour and practice.

Tawanda and Natasha find that risk assessments are often the most fruitful way of getting to know participants, and they use the results to shape the content of their trainings. They also try to identify champions within the organisations they work with and train, who can help reinforce and instill digital security practices among their colleagues and peers.

Example 2: IREX

Erin leads monitoring and evaluation (M&E) efforts for the SAFE (Securing Access to Freedom of Expression) Initiative at IREX, working with her counterparts in five regional centers to better understand participant learning, behaviour and attitude change, and risk perceptions, and to integrate these into their training program design.

The SAFE Initiative provides safety training for media practitioners and human rights defenders from Central America, East Africa, Eurasia, the Middle East/North Africa, and South Asia. They put extra effort into reaching people in rural areas, and into engaging media outlets working in those areas, on how to be safer.

SAFE views security as a concept comprising three distinct components: Psychosocial, Physical, and Digital. During trainings, they run exercises designed to surface participants’ attitudes, awareness, and practices regarding safety, and they try to assess how the training has impacted behaviour. They also focus on identifying, during the training and before participants leave, more secure means of communication for follow-up, especially in cases where it would otherwise be difficult to stay in touch.

Follow Up and Improving Curriculum

It’s important to make follow-up a routine part of your training practice, with a concrete plan for conducting it.

Be consistent: whether you follow up 2 weeks, 2 months, or 6 months after the training, a fixed interval gives you a framework for comparing the before, during, and after stages of training. To facilitate this, make sure you know how to get in touch with participants in the future, in particular by establishing a communications plan with them in person (when possible).

Finally, we noted during this discussion that one indicator of failure is a trainer who believes their curriculum or methodologies are perfect. Never set your curriculum in stone, and always ask before every training:

How can this training be more effective? How can I be a better trainer?

In answering these questions, engage other digital security trainers and ask how they might answer them for themselves.