Using developmental evaluation to build data collection frameworks (evaluation case study 1 of 3)
This is the first of three case studies I will be posting about evaluation this week.
In this case study I talk about:
- working on a new program that took a long time to find its legs
- how I was able to incorporate evaluation into the program development process
- how I used administrative processes as a data collection tool.
Read on!
Key points
· Programs working in innovative and changing environments require a particular evaluation approach.
· Working closely with the program team helped me gain a deep understanding of what they were trying to achieve and the rapid changes that were happening.
· Integrating evaluation data collection with administrative data can be an excellent way to capture the information you need while minimising the administrative burden for program staff.
· Good data quality does not happen on its own; it needs dedicated attention from staff who continuously monitor the data and ensure that the processes are being used.
· Even though I spent a lot of time on activities that did not immediately appear to be evaluation oriented, my work benefited future evaluations enormously.
I was hired as an evaluator for a community service program while it was in its first month of operation. The program was funded for three years and was designed as an innovation project to test a new model of service delivery.
The idea was that I would follow the journey of the program over three years, collect data, write articles and reports about the effectiveness of the program, and then write a final evaluation report at the end. A fairly straightforward process – or so it seemed at the time!
In the early days I attempted to use formal evaluation methods for the program. We started off by doing standard evaluation activities such as developing a program logic and writing a formal evaluation plan, neither of which I ended up using very often during my time as the evaluator.
The reason I rarely used these documents was that the program was new and in a period of instability. People were still working out what the program was supposed to be doing, so a program logic written one day might not be relevant a month later. Mistakes were being made, learning was happening and adaptations were occurring rapidly. As a result, I was becoming increasingly reluctant to use ‘traditional’ evaluation methods.
At the same time, there were significant expectations as to what I could provide as an evaluator. As early as six months into the program, some staff were very keen to show evidence of outcomes being achieved, even though it was far too early to be making statements about the program’s effectiveness.
It was around this time that I read Michael Patton’s book Developmental Evaluation. It was a real eye-opener for me, and totally changed my thinking about the best way to conduct the evaluation of the program.
Patton conceptualises developmental evaluation as a new paradigm for evaluation.
Evaluators have traditionally conceptualised evaluation into two broad types – formative and summative:
- A formative evaluation is conducted during the project and is intended to contribute to program improvement.
- A summative evaluation is conducted after the program is complete and is intended to comment on the success of the program.
According to Patton, these ‘traditional evaluations’ are intended to be objective pieces of work designed for accountability to management and external funders.
Patton proposes a third way – developmental evaluation – that is designed to be more flexible and adaptive. In developmental evaluation, the focus is internal, helping staff make sense of their program and contributing to program development.
Patton can explain it better than I can, so take it away, Patton!
Table: Patton explains the difference between traditional and developmental evaluation
Developmental evaluation in practice
So what did making this theoretical shift mean for me in practice?
- I threw away my program logic and evaluation plan, knowing that neither document could help me evaluate a program whose goals and activities were constantly changing.
- I spent a lot of time sitting in the social workers’ room. This gave me a lot of insight into how the program was working, what clients thought about the program, and why changes were being made to the program. I kept a journal during this time, which I was able to draw on for evaluation data.
- I worked closely with the manager of the service to document the changes that were happening. This meant I had a place at the table during management decisions. Although I was not directly involved in making decisions, my knowledge of the program meant I was often called on for advice.
I found my evaluation practice taking me in unexpected directions. For example, I became increasingly focused on the use of administrative data and reporting data in evaluation. Because the program was new, the team was still establishing its administrative and client management processes.
I sat with the social workers and worked with them directly to develop a series of administrative tools, including intake forms and forms to capture outcomes (which were required as part of their contracted reporting). I also worked with management to determine what the social workers needed to know, what our contracted reporting obligations were, and what we wanted to know for the evaluation.
Most of the time all the pieces of data overlapped. This meant we could streamline a lot of the reporting procedures and reduce the administrative burden for social workers.
At the same time, management decided to roll out a database for the social workers to use instead of the paper tools. The database was proprietary software and the fields did not match with the fields on the administrative forms.
I worked closely with the program coordinator and software distributors to change the database so it reflected our administrative process. We also developed a series of ‘client stages’ in the program, allowing us to keep track of where clients were at, how long they were staying in the program and the length of time between first contact and ‘first outcome’. Now we could get real-time data about what the client population looked like, what kinds of services were being provided to which clients, and the number of outcomes being attained.
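To make the ‘client stages’ idea concrete, here is a minimal sketch in Python of the kind of duration calculation the stages made possible. The field names, stage labels and dates are purely illustrative; they are not the actual database schema.

```python
# Illustrative only: field names, stage labels and dates are made up,
# not the actual database schema.
import pandas as pd

# One row per client per stage transition, as it might come out of the database
stages = pd.DataFrame({
    "client_id": [101, 101, 101, 102, 102],
    "stage": ["first_contact", "intake", "first_outcome",
              "first_contact", "intake"],
    "stage_date": pd.to_datetime([
        "2013-03-01", "2013-03-08", "2013-05-20",
        "2013-04-02", "2013-04-03",
    ]),
})

# Reshape to one row per client, one column per stage
timeline = stages.pivot(index="client_id", columns="stage", values="stage_date")

# Days from first contact to the first recorded outcome
timeline["days_to_first_outcome"] = (
    timeline["first_outcome"] - timeline["first_contact"]
).dt.days

print(timeline)
```

The same extract could be summarised to show how long clients were staying in the program and where they were up to at any point in time.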
As the social workers tested the process and tools, they would come back to me when certain questions did not work and we would update the database accordingly. It was a painstaking process.
Upscaling
Towards the end of my time with the program, the government announced that it intended to upscale the program to five sites and continue funding for another three years. This brought with it a much more detailed set of reporting requirements than in the previous contract.
Because of the work we had done on the database, it was a relatively easy process to redesign the system so that all contracted reporting could be generated by clicking a couple of buttons. To prepare for the upscaling, I worked with the program coordinator to formally document all of the administrative processes and ensure the online reporting was working. The new sites were ready to collect data and use this for efficient reporting.
For the evaluation data, I extracted raw data from the database as an Excel spreadsheet. I then coded the data for analysis in SPSS. The database held records for over 1,000 clients, and we were able to obtain rich data about the client demographics, what was happening for them at presentation (through the intake form), what their journey through the service looked like, and what happened to them after they left the service.
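The coding itself was done in SPSS, but as a rough illustration of the extract-and-code step, an equivalent in Python might look like the sketch below. The file name, column names and category codes are assumptions for the sake of the example, not the real variables.

```python
# Rough Python equivalent of the extract-and-code step (the real analysis
# was done in SPSS). File name, column names and codes are illustrative.
import pandas as pd

# Raw extract from the client database
clients = pd.read_excel("client_extract.xlsx")

# Code age into bands for analysis (1 = under 18 ... 5 = 65 and over)
clients["age_band"] = pd.cut(
    clients["age"],
    bins=[0, 17, 24, 44, 64, 120],
    labels=[1, 2, 3, 4, 5],
)

# Code referral source strings into numeric categories
referral_codes = {"self": 1, "gp": 2, "hospital": 3, "other": 9}
clients["referral_code"] = clients["referral_source"].str.lower().map(referral_codes)

# Save the coded file for import into the statistics package
clients.to_csv("coded_for_analysis.csv", index=False)
```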
Consent
At this stage, it is important to note the methods we put in place to ensure confidentiality and anonymity:
- The intake form included consent for the use of data for research. The social workers were instructed that they were not obliged to ask about consent at the point of intake and could ask at a later time.
- Social workers could ‘opt’ a person out of data extraction. This was designed so that I would only receive data from people who had provided their consent to participate in research (a sketch of how this filter worked is shown after this list).
- Social workers were encouraged to err towards opting a person out of research if they had any concerns about that person’s capacity to consent, or if the client looked the slightest bit uncomfortable about consenting to research.
- Clients were advised they could opt out of the research at any time without any impact on service delivery.
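As a small sketch of how the opt-out worked at extraction time: the consent field name and its values below are assumptions for illustration, not the real system.

```python
# Sketch of the opt-out filter applied when extracting evaluation data.
# The consent field name and its values are assumptions for illustration.
import pandas as pd

extract = pd.read_excel("client_extract.xlsx")

# Only clients with a recorded 'yes' are passed to the evaluator; anyone a
# social worker opted out, or whose consent was never recorded, is dropped.
consented = extract[extract["research_consent"] == "yes"]

consented.to_csv("evaluation_extract.csv", index=False)
```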
The future
Several years later, I am still in regular contact with the program manager and coordinator. The database, reporting, and administrative processes have undergone further improvements, but the processes and data collection systems we created remain in place.
The program coordinator has been particularly vigilant in ensuring a high standard of reporting. I was instrumental in monitoring data quality during my time there, and without her I am quite sure it would have declined significantly after I left.
The feedback I have received since I left is that this program has the best data collection systems and data quality in the entire organisation (and it is a large organisation!). The external consulting company that is now evaluating the program has access to high-quality client and service data, which I am sure is helping them immeasurably in their evaluation.