How to assess the effectiveness of the government’s Prevent counter-extremism programme

In the aftermath of the Manchester bombing, many questions were raised regarding the effectiveness of the government’s counter-extremism programmes, with particular focus on the Prevent arm of that strategy. For example, the FT reported that Salman Abedi had been referred to Prevent but that the referral was not followed up (although note that the police state they cannot find any record of such a referral). More generally, the Prevent programme has been called “toxic” by the now-Mayor of Manchester, Andy Burnham, and by the Home Affairs Select Committee.

However, there are also a number of examples of the Prevent programme having done a great deal of good. For example, two teenagers were stopped from travelling to Syria after being referred to Prevent by their parents in 2015, and the programme is credited with helping to stop 150 people (including 50 children) from going to fight in Syria in 2016.

Importantly, much of the discussion regarding Prevent’s effectiveness has focused on anecdotal evidence: the odd stylised fact here and there, a couple of case studies. Most criticism or praise of Prevent rests on a handful of examples in which it has failed, been implemented badly, or succeeded. There are even calls to expand or to shut down Prevent without any evidence as to whether or not it is actually an effective programme.

Indeed, as far as I’m aware, there has been no (publicly available) rigorous or systematic assessment of Prevent’s effectiveness. (Note that although the recently-launched book “De-Radicalisation in the UK Prevent Strategy: Security, Identity and Religion” by M. S. Elshimi claims to constitute such an assessment, its results are based on an absurdly small sample of only 27 people and therefore cannot be considered a systematic analysis.) However, conducting a systematic assessment could be a relatively simple procedure.

In particular, if data are available on the number of extremist / terror convictions and/or number of people successfully and unsuccessfully “treated” by Prevent at the level of individual local authorities, then it would be possible to use variations across those local authorities to assess Prevent’s effectiveness.

Put simply, the “outcome” variable (i.e. metric that assesses Prevent’s success) could be the number of extremist / terror convictions or the proportion of people referred to Prevent that are successfully treated. (Obviously if the number of convictions is used, it would be important to allocate those convictions to the local authority in which the extremist grew up and/or resided rather than where the extremist activity was carried out.) Of course, there would also be technical considerations regarding whether the outcome variable is a “count” variable, is bounded due to being expressed in percentage terms etc., but those can be dealt with relatively easily.
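To make this concrete, below is a minimal sketch in Python (using entirely simulated data, and hypothetical column names such as prevent_budget, convictions and treated_share) of how each candidate outcome could be modelled: a Poisson regression for the count of convictions, and a binomial (“fractional logit”) GLM for the bounded proportion of successfully treated referrals.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy panel: 400 local authorities observed over 10 years.
# Every number here is simulated purely to illustrate the model set-up.
rng = np.random.default_rng(0)
n_la, n_years = 400, 10
df = pd.DataFrame({
    "la": np.repeat(np.arange(n_la), n_years),
    "year": np.tile(np.arange(2006, 2006 + n_years), n_la),
    "prevent_budget": rng.gamma(2.0, 50.0, n_la * n_years),      # hypothetical £000s
    "population": rng.integers(50_000, 500_000, n_la * n_years),
})
df["convictions"] = rng.poisson(0.5 + 0.01 * df["population"] / 100_000)
df["treated_share"] = rng.beta(5, 2, len(df))  # share of referrals successfully treated

# Outcome 1: conviction counts -> Poisson regression, controlling for population.
count_model = smf.poisson(
    "convictions ~ prevent_budget + np.log(population)", data=df
).fit()

# Outcome 2: proportion successfully treated (bounded between 0 and 1)
# -> binomial GLM, the standard "fractional logit" workaround for a bounded outcome.
share_model = smf.glm(
    "treated_share ~ prevent_budget + np.log(population)",
    data=df, family=sm.families.Binomial(),
).fit()

print(count_model.params["prevent_budget"], share_model.params["prevent_budget"])
```

The point is simply that both versions of the outcome variable have standard, off-the-shelf estimators, so the count/bounded issue really is a technical detail rather than an obstacle.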

The explanatory “variable of interest” that would then measure the actual effectiveness of spending on the Prevent strategy would be each local authority’s annual budget for Prevent. If Prevent were effective, one would expect this variable to be negatively related to the number of convictions (since a successful Prevent would stop people before they committed a crime) and positively related to the proportion of people successfully treated. Alternative variables of interest could include the number of Prevent-dedicated personnel in each local authority or the amount of Prevent training provided to practitioners – each of these could be investigated to try to identify the most effective/important aspects of the Prevent strategy.

Note that it is unlikely that there would be any simultaneity between the Prevent budget (or other variable of interest) and the outcome variable – although it is plausible that current Prevent spending would be based on past extremist activity in a local area (i.e. local areas with higher extremist activity get more money for Prevent), it is unlikely that current Prevent spending reacts quickly enough to be affected by current extremist activity. Nonetheless, this could be investigated by using lags of the Prevent budget variable either as instruments or as the variables of interest themselves (since it could well be the case that Prevent takes time to have an impact).
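As an illustration, and again using made-up numbers and hypothetical column names, the lags could be constructed along the following lines; they could then enter the regression directly, or serve as instruments for the current budget in a two-stage least squares estimation.

```python
import numpy as np
import pandas as pd

# Toy panel of hypothetical local-authority Prevent budgets (made-up numbers).
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "la": np.repeat(["A", "B", "C"], 5),
    "year": np.tile(range(2010, 2015), 3),
    "prevent_budget": rng.gamma(2.0, 50.0, 15),
}).sort_values(["la", "year"])

# One- and two-year lags of the budget, computed within each local authority.
# These could be used as regressors (if Prevent takes time to bite) or as
# instruments for the current budget if simultaneity is a concern.
df["budget_lag1"] = df.groupby("la")["prevent_budget"].shift(1)
df["budget_lag2"] = df.groupby("la")["prevent_budget"].shift(2)

print(df.head(8))
```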

As Prevent has been running since 2003, and there are roughly 400 local authorities in the UK, that should give a sizeable panel of data on which to conduct some relatively simple regression analyses. Of course, a number of other factors would need to be taken into account – for example, the population of each local authority, the average income within it, any changes to Prevent guidelines and/or the introduction or suspension of other counter-extremism strategies. The “identification” of the impact of Prevent would therefore come through variation in Prevent spending (or other Prevent-related variables of interest) and outcomes across local authorities and across time.
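A sketch of that two-way fixed-effects specification might look something like the following (simulated data and hypothetical variable names again; the count nature of the outcome is ignored here for simplicity, although the Poisson set-up sketched earlier could be substituted). Standard errors are clustered by local authority.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated panel purely for illustration: 400 local authorities x 2003-2016.
rng = np.random.default_rng(2)
n_la, years = 400, list(range(2003, 2017))
df = pd.DataFrame({
    "la": np.repeat(np.arange(n_la), len(years)),
    "year": np.tile(years, n_la),
})
df["prevent_budget"] = rng.gamma(2.0, 50.0, len(df))
df["population"] = rng.integers(50_000, 500_000, len(df))
df["avg_income"] = rng.normal(25_000, 4_000, len(df))
df["convictions"] = rng.poisson(2.0, len(df))

# Convictions on the Prevent budget plus controls, with local-authority and
# year dummies: identification comes from variation in budgets and outcomes
# within local authorities over time.
fe_model = smf.ols(
    "convictions ~ prevent_budget + np.log(population) + avg_income"
    " + C(la) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["la"]})

print(fe_model.params["prevent_budget"], fe_model.bse["prevent_budget"])
```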

Now, I don’t have the data to conduct this analysis. However, I suspect that the data are out there – I understand some organisations have created databases containing details of all extremist-related convictions over a reasonably lengthy period of time (for example, the Henry Jackson Society has a dataset on all Islamist-related convictions from 1998–2015, although this would need to be supplemented with data on the other forms of extremism covered by Prevent). Moreover, local authorities / the Home Office / the relevant government authority no doubt have records of the amounts spent on Prevent by each local authority on an annual basis (as well as the number of Prevent-related personnel etc.). As such, combining the two (along with the various controls) should yield a usable dataset fairly easily.
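For what it is worth, the data-assembly step is also straightforward: something along the following lines (with invented records and column names) would aggregate a conviction-level dataset to local-authority-year level and join it onto the spending records.

```python
import pandas as pd

# Hypothetical inputs (in practice these would be read from the underlying
# datasets; tiny invented examples are constructed inline here):
#  - one row per conviction, with the offender's home local authority and year
#  - annual Prevent budget per local authority
convictions = pd.DataFrame({
    "la": ["A", "A", "B"],
    "year": [2014, 2015, 2015],
})
spending = pd.DataFrame({
    "la": ["A", "A", "B", "B"],
    "year": [2014, 2015, 2014, 2015],
    "prevent_budget": [120.0, 150.0, 80.0, 90.0],
})

# Count convictions per local authority and year, then join onto the spending
# panel so that authority-years with no convictions are kept (filled with 0).
counts = (convictions.groupby(["la", "year"]).size()
          .rename("convictions").reset_index())
panel = spending.merge(counts, on=["la", "year"], how="left")
panel["convictions"] = panel["convictions"].fillna(0).astype(int)

print(panel)
```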

Hence, if the government and/or organisations with an interest in Prevent really do want to assess how effective the Prevent strategy is, it actually isn’t very difficult to do so.

Evidentiary standards are slipping

Over the past month, there have been a number of instances in which a politician or journalist has made a bold claim and then ignored requests for, or been unable to provide, any evidence to support that claim.

For example, Fraser Nelson claimed that being in the EU had been a net detriment to the UK’s trade, and that the evidence he had seen supported that view. However, when presented with evidence that contradicted his claim, and when challenged to produce the evidence to which he referred, Nelson did not respond. Likewise, Michael Gove claimed that there was evidence to indicate that leaving the EU would provide the UK with a “net dividend”. However, when pressed to provide that evidence, Gove did not do so; nor did he respond to the provision of evidence that contradicted his view.

This is not just a problem for right-leaning opinion makers either; it affects left-leaning ones just as much. For example, despite copious evidence (from the Low Pay Commission) that raising the minimum wage too far would be detrimental to the employment prospects of low-income earners, Jeremy Corbyn claimed that increasing the minimum wage to £10 per hour would raise their living standards. Again, Corbyn provided no evidence to support his claim.

This seems to be part of a wider, and long-running, malaise in which policymakers can make a bold claim without any evidence to support it, yet that claim is taken at face value and isn’t challenged by the media nearly as often as it should be. Even worse (a point made by Jonathan Portes in his recent discussion with Michael Gove), when challenged to provide evidence to support their views, many in the media and political sphere tend to rely on a single statistic or anecdote, even when copious evidence exists that contradicts their claim.

That’s assuming the personalities concerned respond at all. Much of the time, they simply remain silent, letting their original claim stand as though it had never been challenged.

This isn’t just a point of pedantry – quite clearly, claims made by those covering and participating in campaigns have real implications. For example, Vote Leave’s claim that Turkey would join the EU (despite all evidence to the contrary) likely played on some voters’ desire to reduce immigration (according to Ashcroft, immigration was a major concern for roughly one third of voters), despite immigration having repeatedly been shown to benefit the UK as a whole. Similar points can be levelled at various claims that the current level of trade between the EU and the UK could easily be replaced by trade with Commonwealth countries (despite the well-established gravity model of trade directly contradicting this). And it seems likely that the upcoming election will be rife with claims and counter-claims that are (un)supported by evidence to varying degrees.

In essence, it is at least plausible that false claims made by opinion formers were taken to be true by some members of the voting public who based their decisions accordingly, and might have voted differently had they been informed of the actual evidence.

Now, what can be done to ensure that voters (and the general public as a whole) have actual evidence available rather than simply the claims of journalists and politicians?

Well, for a start, the press regulators (IPSO and Impress), the Electoral Commission, and the likes of the Office for National Statistics need to take on a much more proactive role. They should not wait for complaints to be submitted by the general public, but should take it upon themselves to investigate and penalise those in the public eye who make misleading or unsupported claims, with punishments far more severe than those currently used (for example, newspapers cannot continue to be allowed to get away with publishing retractions in the bottom corner of a page in the middle of the publication).

Second, political programmes like Newsnight, Question Time, and the Daily Politics should do far more to challenge politicians and journalists to support any claims they make with sufficient evidence (i.e. more than just a single anecdote or statistic). In other words, any journalist or politician appearing on such shows should be able to demonstrate that their claims are valid, and the presenters should spend far more effort researching the actual evidence and questioning their guests about any claims they make.

Third, the Parliamentary Standards Committee needs to recognise that its role in holding MPs accountable extends to claims made by MPs that are not supported by any evidence. Such claims are in violation of the MPs’ Code of Conduct and should be treated as such, with the punishments for these violations being far more than the usual slap on the wrist.

Finally, as a much more long-term remedy, the general public should be given far greater training in the use and abuse of statistics. This should start from an early age and should not only teach people how to calculate various (simple) statistics, but also how to spot when a commentator is using misleading figures or relying solely on anecdotes to try to substantiate their points.

If these suggestions were implemented, the ability of journalists and politicians to deliberately obfuscate and mislead would be markedly reduced. That can only be a good thing.

Art and Economics

Art and economics probably aren’t the most natural of bedfellows. In my latest attempt at pretending to be sophisticated, and as part of a summer trip gallivanting around Barcelona, I ended up visiting (among all the other wonderful culinary and cultural delights) the city’s Museum of Contemporary Art (MACBA).

The ground floor of MACBA is taken up with an exhibit by Andrea Fraser, called “L’1%, c’est moi”, which presents information and musings about the art world, with particular focus on individuals who have obtained and developed large personal collections of art. Unsurprisingly, Fraser’s angle is that those individuals are, for want of a better word, dubious (both in terms of their ethics and in terms of how they have been able to afford their art collections).

Indeed, one of the more wide-reaching points that Fraser tries to make is that an increase in inequality (it is unclear whether Fraser is referring to wealth or income inequality) has enabled those individuals to build their collections. To that end, one of the “artworks” included in the exhibit is a short report, presumably put together by Fraser herself, that purports to demonstrate that rising inequality has benefited art collectors – in other words, that increasing inequality has enabled art collectors to benefit from increases in the value of their art collections.

However, Fraser’s “analysis” is pitiful at best. For a start, it is widely acknowledged that (income) inequality now is at roughly the same level as it was about 200 years ago (see, for example, here), yet Fraser chooses to focus solely on the past 50 years to try to bolster her claim that inequality is exceptionally high. Fraser does not extend her analysis far enough back in time for her conclusions to be supported by the evidence. In fact, this is borne out by the graph on page 3 of Fraser’s report (reproduced below), which shows that income inequality has been pretty much constant over the past 50 years – hardly a marked increase in inequality at all.

[Figure: income inequality over the past 50 years, reproduced from page 3 of Fraser’s report]

Moreover, the graphs Fraser included in the MACBA exhibit indicate that her understanding of statistical analysis does not extend even as far as the well-known maxim that “correlation does not imply causation”. To be fair to Fraser, a few economic researchers also don’t understand this concept particularly well. Nonetheless, in using the graph shown below, Fraser tries to support her claim that increases in inequality are leading to increases in the value of art.

[Figure: Fraser’s graph plotting the value of art against the share of income going to the top 0.01%]

She does not, it seems, realise that there are plenty of alternative explanations for the observed relationship – for example, it could be that the increase in the value of art is itself causing the increase in inequality, or that both the increase in the value of art and the share of income obtained by the top 0.01% are driven by a common third factor (such as, for example, the rate of return on other investments).

The potentially absurd inferences that can be obtained by relying just on correlations can be seen even better in the graph below. The black dashed lines show the growth in the number of prisons and museums in the US over time, while the solid red line shows the US prison population. If one were to rely on correlations to make inferences, one would draw the conclusion that one way to reduce the US’ prison population would be to decrease the number of museums in the US. This shows the sheer ridiculousness of drawing conclusions from simple correlations alone.

[Figure: growth in the number of US prisons and museums over time (black dashed lines) against the US prison population (solid red line)]
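For anyone who wants to see how easily a shared trend manufactures a correlation, here is a small simulated example (the numbers are made up and merely mimic two upward-trending series): the levels are almost perfectly correlated, while the year-on-year changes are essentially uncorrelated.

```python
import numpy as np

# Two completely unrelated series that both happen to trend upwards over time
# (think "number of museums" and "prison population") -- simulated here.
rng = np.random.default_rng(3)
years = np.arange(1980, 2016)
museums = 500 + 12 * (years - 1980) + rng.normal(0, 20, years.size)
prisoners = 300_000 + 40_000 * (years - 1980) + rng.normal(0, 50_000, years.size)

# The shared trend alone produces a correlation close to 1 ...
print(np.corrcoef(museums, prisoners)[0, 1])

# ... which largely disappears once the trend is removed (i.e. correlating the
# year-on-year changes rather than the levels).
print(np.corrcoef(np.diff(museums), np.diff(prisoners))[0, 1])
```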

Hence, it’s clear that Andrea Fraser really should have put a bit more thought/work into the “analyses” she included as part of this exhibition.

PS. As a bonus piece of artsy mumbo-jumbo economics, here is a description of an artwork by Adrian Melis. Enjoy.

[Figure: exhibit text describing an artwork by Adrian Melis]

Why the difference between correlation and causation matters

A blog post by a member of the Economic Policy Institute (a US think-tank) has claimed that the decline in Trade Union membership is the cause of the increase in (a single measure of) inequality in the USA.

The blog post looks at how membership of Trade Unions and the share of income going to the top 10% changed over the period 1917–2012, notices that the two appear to be negatively correlated, and therefore concludes that the decline in trade union membership is responsible for the increase in inequality.

First, it is important to note that inequality has actually decreased over that time, rather than increased as the article claims (see, for example, here and here). Moreover, the blog post uses a very specific measure of inequality, focusing solely on the income share of the top 10%. It does not take into account any other factor that determines the level of inequality within a country – for example, whether income within the top 10% is distributed evenly across that group, or is concentrated in the top 1% or even the top 0.1%.

Indeed, a more comprehensive measure of inequality (such as the Gini coefficient) takes into account the distribution of income across the entire spectrum rather than merely focusing on a subset of that distribution. When looking at such measures over time, it becomes apparent that inequality across the entire distribution of incomes has barely increased since the 1960s, despite the measure used in the blog post having increased since that time.

(And that is notwithstanding other ways in which inequality might arise, such as via the distribution of wealth, access to healthcare, and/or access to education.)
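For reference, the Gini coefficient mentioned above is straightforward to compute once the full distribution of incomes is available. Below is a minimal sketch using a made-up income sample, shown alongside the top-10% share that the blog post relies on.

```python
import numpy as np

def gini(incomes) -> float:
    """Gini coefficient of a sample of non-negative incomes.

    Defined as the mean absolute difference between all pairs of incomes,
    divided by twice the mean income; computed here via the equivalent
    formula based on the sorted values.
    """
    x = np.sort(np.asarray(incomes, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    return float((n + 1 - 2 * (cum / cum[-1]).sum()) / n)

rng = np.random.default_rng(4)
incomes = rng.lognormal(mean=10, sigma=0.8, size=10_000)  # made-up income sample

print(round(gini(incomes), 3))

# For contrast, the single measure used in the blog post: the top-10% share.
top10_share = incomes[incomes >= np.quantile(incomes, 0.9)].sum() / incomes.sum()
print(round(top10_share, 3))
```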

Second, the author’s evidence to support the argument that a decline in trade union membership is responsible for the increase in inequality consists solely of the fact that the chosen measure of US inequality is negatively correlated with US Trade Union membership.

There are plenty of examples of two series being correlated over time despite there being no way in which a causal relationship could exist between them. For example, Tyler Vigen presents cases such as arcade revenues being correlated with the number of Computer Science PhDs awarded, and a strong correlation between Maine’s divorce rate and per capita consumption of margarine.

Moreover, the article’s “analysis” fails to account for the countless other factors that could have affected inequality over the course of the almost 100 years covered. For example, demographic changes, changes in industrial composition, technological developments, new infrastructure, and changing societal attitudes are all, individually and together, likely to have contributed to the changes in inequality. Indeed, the correlation between US Trade Union membership and the chosen measure of inequality appears to be driven largely by the large increase in TU membership and the large decrease in inequality between 1936 and 1945. And it’s not as though there was anything else going on during that period at all! Despite this, the article attributes the changes in inequality solely to changes in Trade Union membership.

Finally, the article does not even try to establish a mechanism by which Trade Union membership could affect inequality, beyond a vague description of how trade unions increase bargaining power. There are no doubt plenty of other things that are negatively correlated with the measure of inequality used in the article and that, by the author’s logic, would be just as likely a cause of changes in inequality as Trade Union membership (US military strength might well be one, as could the number of black-and-white television sets in use).

I look forward to the Economic Policy Institute writing about those in due course.