An illustration of an evaluation process suggesting a student success framework.

One of the key challenges in effective Student Success work is establishing which activities reduce an institution's attainment, continuation, and completion gaps. The Office for Students (OfS) expects causal evaluation where possible, but randomised controlled trials (RCTs) are labour- and cost-intensive as well as potentially problematic ethically. At the University of Kent, we have designed a Student Success Evaluation Framework that combines a Theory of Change model, mathematical analysis, and contribution analysis to construct causal chains for styles of intervention. This allows us to establish which interventions have strong links to causality without the burden of running an RCT.

For this framework to work, we have developed strong monitoring processes within departments and academic schools that operate on an individual student basis. This has replaced the previous headcount methodology and qualitative feedback which, while it served its purpose, made it impossible to determine causality. By tracking which individual students engage, and by following activities from the planning stage through delivery to the post-activity feedback stage, we can establish whether the activities (see the sketch after this list):

  1. Reached the students they had planned to reach.

  2. Had the planned impact on attainment or attendance.

  3. Obtained qualitative feedback relating specifically to the planned outcomes.
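
As a minimal sketch of the first of these checks, assuming pandas-style data and hypothetical column names such as student_id and target_group (neither drawn from Kent's actual systems), establishing reach reduces to joining an activity's attendance list against institutional student records:

```python
import pandas as pd

# Hypothetical institutional student records: one row per student.
students = pd.DataFrame({
    "student_id": ["s001", "s002", "s003", "s004"],
    "target_group": [True, False, True, False],  # e.g. an APP protected group
})

# Unique identifiers captured on the activity's attendance list.
attendance = pd.DataFrame({"student_id": ["s001", "s004"]})

# Join the attendance list to student records to see who the activity reached.
reached = attendance.merge(students, on="student_id", how="left")

planned = students["target_group"].sum()  # target-group students in the cohort
actual = reached["target_group"].sum()    # target-group students who attended
print(f"Reached {actual} of {planned} target-group students")
```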

Formalising the process and providing thorough impact evaluation back to departments focuses staff time and effort on impactful activities, which in turn increases their engagement. Since so much of Student Success work depends on the support and effort of academic and professional services staff alike, this is a critical factor in getting the work done.

It is important to note that this framework currently applies only to student activities; this is just one part of the puzzle in our overall Student Success work, aimed at supporting students while structural change takes place. Future work includes extending the framework to incorporate staff interventions and, further still, significant structural changes to the format of education.

Theory of Change 

By applying a Theory of Change approach, we can embed evaluation thinking at the beginning of the activity design process. This supports staff to assess critically what they hope their activity will achieve and to ensure that the design is likely to achieve it, given our current knowledge. It also enables local targets to be set for a department to deem the activity successful, which may differ from the institutional targets we set centrally. Such a methodology encourages collaboration between departments and the Central Student Success Team, as we can incorporate local knowledge and experience into the central integrated approach.
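
As a purely illustrative sketch (the field names below are our own invention, not Kent's actual template), a Theory of Change record for one activity might capture the intended outcome, the mechanism, and both local and institutional targets at design time:

```python
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    """Hypothetical record capturing evaluation thinking at design time."""
    activity: str
    intended_outcome: str       # what the activity hopes to achieve
    mechanism: str              # why we believe the design will achieve it
    target_group: str           # which students it aims to reach
    local_target: str           # department's own definition of success
    institutional_target: str   # centrally set target it contributes to

toc = TheoryOfChange(
    activity="Peer-assisted study sessions",
    intended_outcome="Raise average module marks for attendees",
    mechanism="Structured revision led by trained student leaders",
    target_group="Students from IMD quintiles 1-2",
    local_target="+3 percentage points on the module average",
    institutional_target="Reduce the departmental attainment gap",
)
```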


Mathematical Testing 

The mathematical testing is a comprehensive step that models the changes in attainment and attendance from one year to the next for each stage and department. By considering the changes rather than the absolute quantities, we can make arguments for activities increasing a student's attainment or attendance. This establishes where we have value-added activities; in later stages of the evaluation framework, we ascertain whether these activities reached the students in our target groups.
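
A minimal sketch of this value-added view, assuming per-student attainment records with hypothetical year and attainment columns, models the year-on-year delta rather than the absolute mark:

```python
import pandas as pd

# Hypothetical attainment data for the same students across two years.
marks = pd.DataFrame({
    "student_id": ["s001", "s001", "s002", "s002"],
    "year":       [2022, 2023, 2022, 2023],
    "attainment": [58.0, 63.0, 61.0, 60.0],
})

# Model the change from one year to the next rather than the absolute mark.
wide = marks.pivot(index="student_id", columns="year", values="attainment")
wide["delta"] = wide[2023] - wide[2022]
print(wide["delta"])  # positive deltas suggest value added
```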

Within this step, we also combine similar activities that happen across different areas of the institution, thus enlarging the population we are examining and lending greater strength to our arguments.

We run multiple tests to establish whether styles of activity are impactful for certain stages, departments, the protected groups identified in our Access and Participation Plan, and all combinations of these. It is useful to note when an activity style supports, for example, increased attainment across a whole department, but we only progress styles that are impactful for one of our protected groups. The students who self-select not to engage in activities form a natural control group, and those who elect to engage form the natural treatment group. Some activities are targeted at certain groups, but it is exceedingly rare to exclude specific student groups from engaging in anything.
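
One such test might look like the following sketch, assuming we already hold per-student attainment deltas and an engagement flag. The data and the choice of a Welch two-sample t-test are illustrative; in practice the battery of tests runs over every combination of stage, department, and protected group:

```python
import numpy as np
from scipy import stats

# Hypothetical year-on-year attainment deltas.
engaged = np.array([4.1, 2.3, 5.0, 1.8, 3.6])        # natural treatment group
not_engaged = np.array([0.4, -1.2, 1.1, 0.0, -0.5])  # natural control group

# Two-sample test: did engaged students improve more than non-engaged ones?
t_stat, p_value = stats.ttest_ind(engaged, not_engaged, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# Repeat per stage, department, and protected group, and for combinations.
```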

Contribution Analysis 

Activity styles that come through as statistically significant then undergo contribution analysis. In this stage we explore all the plans, departmental reports, additional evidence, and feedback aligned with any activity that falls under a given style; one such style may be, for instance, Academic Advising. Scoring each activity on its certainty, robustness, prevalence, and range (all terms used in the sense of Contribution Analysis), we find an average score for that activity style. These scores relate to the strength of our causal chain. The strongest causal chains belong to activities that are mathematically impactful (that is, they show positive correlations between participation in the style of activity and increased attainment or attendance), were planned to increase the attainment and attendance of a protected group, reached that group, were used to support other activities, and obtained good qualitative feedback relating specifically to supporting that group.
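
A hedged sketch of this scoring step, with invented scores on an assumed 1-to-4 scale, averages the four criteria per activity and then across the activities in a style:

```python
# Hypothetical contribution-analysis scores for each activity within the
# "Academic Advising" style (scale and values are illustrative).
criteria = ("certainty", "robustness", "prevalence", "range")
activities = {
    "Advising drop-ins":      {"certainty": 3, "robustness": 2, "prevalence": 4, "range": 3},
    "Scheduled 1:1 advising": {"certainty": 4, "robustness": 3, "prevalence": 3, "range": 2},
}

# Average across criteria per activity, then across activities for the style.
per_activity = [sum(s[c] for c in criteria) / len(criteria) for s in activities.values()]
style_score = sum(per_activity) / len(per_activity)
print(f"Academic Advising style score: {style_score:.2f}")
```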

We further strengthen the causal chain by multiplying the average score of the intervention style by the statistical power of the test that we ran. With higher-powered tests, we can have more confidence in the findings of the mathematical testing.
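
Continuing the sketch above, the causal-chain strength could then be the style's average score weighted by the test's power; the use of statsmodels' TTestIndPower and the effect-size and sample-size values here are illustrative assumptions, not the framework's actual parameters:

```python
from statsmodels.stats.power import TTestIndPower

# Power of a two-sample test given an assumed effect size and sample size.
power = TTestIndPower().power(effect_size=0.5, nobs1=120, ratio=1.0, alpha=0.05)

style_score = 3.0                     # average contribution-analysis score
chain_strength = style_score * power  # higher power -> more confidence
print(f"power = {power:.2f}, causal-chain strength = {chain_strength:.2f}")
```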

Reflections 

This framework does require some structural amendments to how an institution may previously have run student activities. However, one major strength is that, once you have unique-identifier data in an attendance list for an activity, you can leverage the wealth of student data your institution holds to reduce structural inequalities quickly and efficiently. Given the time constraints everyone works under, we also feel it essential to direct staff within departments towards activities already found to have positive impacts, rather than designing activities solely in the hope that they will bring improvement. We then have a core group of activities recommended to departments, supplemented by departments' own innovative activities. As we iterate the evaluation over the years, our core of impactful activities will expand as we bring those new initiatives into the fold.

For further information about the University of Kent’s Student Success Evaluation Framework, you can read the authors’ paper: Student Success Evaluation Framework: Determining Causality in Activities to Improve Attendance and Attainment.
