Evaluation of ‘Future Foundation Summer Programme’ 2013: Efficacy Trial

Introduction

The project to be evaluated is a randomised controlled trial of a 4-week summer school programme from 29th July to 30th August 2013 on three sites. The programme is loosely based on the US BELL Summer School and BELL Accelerated Learning programmes, and is more closely associated with a pilot study of a programme conducted by Future Foundation in the UK in 2012. All three have evaluated their work as a success, and there are indeed many indicators of success in terms of satisfaction and attitudinal measures. However, none has yet convincingly demonstrated a beneficial impact on student learning for the year 5 and 6 age group, as assessed by formal testing. There is near equipoise in relation to the primary outcome of attainment measures. It is therefore appropriate to conduct a definitive test in order to determine whether there is merit in such programmes.

Impact evaluation

Design

The outline for the intervention proposes a relatively simple individually randomised controlled trial of two groups, without placebo. One group will receive the treatment over summer 2013. The pre- and post-tests will be administered in whole-class groups in the schools the pupils attend, so demoralisation and consequent dropout after the allocation to groups is revealed is not a special concern here.

Sample size

The project outline proposes an individual-level randomisation of 1,000 year 5 and 6 pupils, 50% from each year. The project team will recruit 1,000 pupils from families agreeing to be randomised either to the summer school in 2013, or to a comparison group providing only pre- and post-test scores in 2013. We will use a pseudo-random number generator to allocate each pupil to the treatment or comparison group, after the pre-test has been administered to both groups. All schools, students and families will agree to be part of the evaluation as an integral part of joining the programme.
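The allocation mechanism is not specified beyond the use of a pseudo-random number generator; the following is a minimal sketch of one way it could be done, assuming an anonymised list of pupil identifiers and an illustrative seed (neither is fixed by the protocol).

```python
import random

def allocate(pupil_ids, seed=2013):
    """Allocate pupils 1:1 to the summer school or the comparison group.

    A pseudo-random number generator with a recorded seed is used so the
    allocation can be reproduced and audited. The seed and group labels
    here are illustrative only.
    """
    rng = random.Random(seed)
    ids = list(pupil_ids)
    rng.shuffle(ids)                       # random ordering of recruited pupils
    treatment = set(ids[: len(ids) // 2])  # first half -> summer school 2013
    return {pid: ("summer school" if pid in treatment else "comparison")
            for pid in pupil_ids}

# Example: 1,000 recruited pupils allocated after the pre-test
allocation = allocate(range(1, 1001))
```

Shuffling the list and splitting it in half guarantees the intended 500/500 split, rather than tossing an independent coin for each pupil.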

The pilot for this intervention did not produce a consistent or substantial ‘effect’ size benefitting students who attended the summer school. Nor have studies in the US shown clear advantages in terms of attainment for the age group involved here. We therefore present our effect size calculation the other way around from normal: rather than fixing a target effect size and deriving the sample needed, we ask what effect the planned sample can detect. Using Lehr’s approximation (n per arm ≈ 16/d²) for an 80% chance of detection with 5% alpha, a sample size of 500 cases per trial arm should be able to detect an effect size as small as about 0.18 (since 16/0.18² ≈ 500). This means that the trial should be able to detect any effect if it is of practical significance. It has sufficient power in the circumstances.
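As a check on this calculation, Lehr’s shortcut can be inverted and compared with an exact two-sample power computation. The sketch below assumes the statsmodels library is available; it is illustrative rather than part of the analysis plan.

```python
from math import sqrt
from statsmodels.stats.power import TTestIndPower

# Lehr's approximation: n per arm ~ 16 / d**2 for 80% power at alpha = 0.05.
# With 500 pupils per arm, the smallest detectable effect size is therefore:
d_lehr = sqrt(16 / 500)  # about 0.18

# The same question answered with an exact two-sample power calculation:
d_exact = TTestIndPower().solve_power(nobs1=500, alpha=0.05, power=0.80, ratio=1.0)

print(round(d_lehr, 2), round(d_exact, 2))  # both come out at roughly 0.18
```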

Tests

The pre-test scores for both groups will be the GL Progress in English Test (version 9 or 10 depending on year group), and GL Progress in Mathematics Test (9 or 10), administered in class groups in the feeder primary schools in each area.

The pre-test will be administered by the pupils’ initial schools, assisted by the project team and advised by the evaluators. Because this will take place before randomisation, the process will be ‘blind’ as to treatment group. In addition, we will have KS1 assessment results in literacy and maths as secondary pre-test scores.

The post-test scores for both groups will be the GL Progress in English Test (version 10 or 11 depending on year group), and GL Progress in Mathematics Test (10 or 11), administered in class groups in the feeder primary schools or year 7 secondary schools. The tests will be administered by members of the evaluation team and their temporary employees (such as doctoral researchers) who will not know which group each pupil is in. Schools will be instructed not to disclose pupils’ allocation to the evaluators. This is to help ensure that the process is ‘blind’ as to treatment group. In addition, we will have the most up-to-date teacher assessment results in literacy and maths as secondary post-test scores.

Other data

The intervention team will prepare a template for data to be uploaded for all relevant pupils at the outset of the trial. The template includes prior attainment plus background characteristics such as FSM, sex and ethnicity.
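A minimal sketch of what such a template might look like is given below; the column names are assumptions for illustration only, since the protocol does not fix the field names, and pandas is assumed to be available.

```python
import pandas as pd

# Illustrative pupil-level template for the upload at the outset of the trial.
# Column names are hypothetical; the actual template will be agreed with the
# intervention team.
template = pd.DataFrame(columns=[
    "pupil_id",         # anonymised identifier used for randomisation and matching
    "school",           # feeder primary school
    "year_group",       # year 5 or year 6
    "sex",
    "ethnicity",
    "fsm",              # free school meals eligibility
    "ks1_literacy",     # KS1 assessment result (secondary pre-test)
    "ks1_maths",
    "pretest_english",  # GL Progress in English score
    "pretest_maths",    # GL Progress in Mathematics score
])

template.to_csv("pupil_template.csv", index=False)  # empty template for schools to complete
```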


Analysis

The primary outcome measure will be the difference in the gain score between the arms of the trial, expressed as an effect size, where the gain is the average difference between individual scores on tests 9 and 10 (or 10 and 11) for both English and maths. A secondary outcome measure will be the average residuals between the post-test scores and the predicted (modelled) scores based on prior KS1 assessments and pupil background data for each subject. The analysis will also consider FSM (or pupil premium) students separately.
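A minimal sketch of the primary outcome calculation is given below. It assumes the hypothetical column names from the template above and the allocation labels from the earlier sketch, and it uses the overall standard deviation of the gain scores as the denominator; this is one reasonable choice of standardiser, not necessarily the one the final analysis will adopt.

```python
import pandas as pd

def gain_effect_size(df, pre="pretest_english", post="posttest_english",
                     group="allocation"):
    """Effect size for the difference in gain scores between the two arms.

    Each pupil's gain is post-test minus pre-test; the effect size is the
    difference in mean gains between the arms divided by the overall
    standard deviation of the gains. Column names are illustrative.
    """
    gains = df[post] - df[pre]
    treated = gains[df[group] == "summer school"]
    control = gains[df[group] == "comparison"]
    return (treated.mean() - control.mean()) / gains.std(ddof=1)
```

The secondary outcome would follow the same pattern, with the gain replaced by the residual from a model predicting the post-test score from KS1 results and background characteristics.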

Process evaluation

This intervention has already been piloted, and is developed from work already implemented in the US. The fieldwork forming the light-touch process evaluation therefore aims to provide further formative evidence on all aspects of the intervention, from the selection and retention of schools, through the training of teachers, to the evaluation of outcomes. This can be used to help assess fidelity to treatment, and the perceptions of participants, including any resentment or resistance. However, the main purposes will be to consider the fidelity and quality of delivery of the treatment, and to advise on issues for any future scaling up if the results permit.

This will necessitate the generation of some additional data from observation and interviews with staff and families, focus groups of pupils, plus observation of training, delivery and testing. These will all be as simple and integrated as possible.

Timeline

December 2012

Recruitment of sites and schools
Further development of intervention and curriculum
Order tests
Train schools for testing

May 2013

Recruitment of potential pupil participants
Observation of staff training
Pre-tests delivered by schools
Upload KS1 and background data
Randomisation of potential pupil participants to two groups

July 2013

Observation of summer school
Interviews with all parties

September 2013

Blind administration of post-tests in all schools
Update background data
Analyse outcome data
Synthesise with process evaluation data

October 2013

Complete full report.

On 7th February 2014 the complete report was published by the Education Endowment Foundation. Read the report here.
