How to assess the effectiveness of the government’s Prevent counter-extremism programme

In the aftermath of the Manchester bombing, many questions were raised regarding the effectiveness of the government’s counter-extremism programmes, with particular focus on the Prevent arm of that strategy. For example, the FT reported that Salman Abedi had been referred to Prevent but that the referral was not followed up (although note that the police state that they cannot find any record of Abedi’s referral to Prevent). More generally, the Prevent programme has been called “toxic” by Andy Burnham, now Mayor of Greater Manchester, and by the Home Affairs Select Committee.

However, there are also a number of examples of the Prevent programme doing a great deal of good. For example, two teenagers were stopped from travelling to Syria after being referred to Prevent by their parents in 2015, and the programme is credited with helping to stop 150 people (including 50 children) from going to fight in Syria in 2016.

Importantly, much of the discussion regarding Prevent’s effectiveness has focused on anecdotal evidence: the odd stylised fact here and there, a couple of case studies. Most criticism or praise of Prevent focuses on a handful of examples where it has not worked, has been implemented badly, or has succeeded. There are even calls to expand or to shut down Prevent without any evidence as to whether or not it is actually an effective programme.

Indeed, as far as I’m aware, there has been no (publicly available) rigorous or systematic assessment of Prevent’s effectiveness. (Note that although the recently-launched book “De-Radicalisation in the UK Prevent Strategy: Security, Identity and Religion” by M. S. Elshimi claims to constitute such an assessment, its results are based on an absurdly small sample of only 27 people and therefore cannot be considered a systematic analysis.) However, conducting a systematic assessment could be a relatively simple procedure.

In particular, if data are available on the number of extremist / terror convictions and/or number of people successfully and unsuccessfully “treated” by Prevent at the level of individual local authorities, then it would be possible to use variations across those local authorities to assess Prevent’s effectiveness.

Put simply, the “outcome” variable (i.e. the metric that assesses Prevent’s success) could be the number of extremist / terror convictions or the proportion of people referred to Prevent who are successfully treated. (Obviously, if the number of convictions is used, it would be important to allocate those convictions to the local authority in which the extremist grew up and/or resided, rather than to the one in which the extremist activity was carried out.) Of course, there would also be technical considerations regarding whether the outcome variable is a “count” variable, or is bounded because it is expressed in percentage terms, etc., but those can be dealt with relatively easily – see the sketch below.
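For instance, here is a minimal sketch (in Python, with made-up column names for a hypothetical local-authority-by-year dataset) of how a count outcome and a bounded proportion outcome could each be handled – this is purely illustrative, not a claim about what data actually exist:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per local authority per year (column names are illustrative).
df = pd.read_csv("prevent_panel.csv")

# Count outcome: number of extremist / terror convictions, modelled as a Poisson regression.
count_model = smf.poisson(
    "convictions ~ prevent_budget + population + avg_income", data=df
).fit()

# Bounded outcome: proportion of Prevent referrals successfully "treated" (between 0 and 1),
# modelled as a fractional logit (binomial GLM) with robust standard errors.
frac_model = smf.glm(
    "prop_treated ~ prevent_budget + population + avg_income",
    data=df, family=sm.families.Binomial(),
).fit(cov_type="HC1")

print(count_model.summary())
print(frac_model.summary())
```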

The explanatory “variable of interest” – the variable that would measure the effectiveness of spending on the Prevent strategy – would be each local authority’s annual Prevent budget. If Prevent were effective, one would expect this variable to be negatively related to the number of convictions (since a successful Prevent would stop people before they committed a crime) and positively related to the proportion of people successfully treated. Alternative variables of interest could include the number of Prevent-dedicated personnel in each local authority or the amount of Prevent training provided to practitioners – each of these could be investigated to try to identify the most effective/important aspects of the Prevent strategy.

Note that it is unlikely that there would be any simultaneity between the Prevent budget (or other variable of interest) and the outcome variable – although it is plausible that current Prevent spending is based on past extremist activity in a local area (i.e. areas with more extremist activity get more money for Prevent), it is unlikely that current Prevent spending reacts quickly enough to be affected by current extremist activity. Nonetheless, this could be investigated by using lags of the Prevent budget either as instruments or as the variables of interest themselves (since it could well be the case that Prevent takes time to have an impact), as sketched below.
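As a rough sketch of that idea (again using the hypothetical dataset and column names from the example above), the lagged budget can be constructed within each local authority and then used either directly or as an instrument:

```python
# Construct a one-year lag of the Prevent budget within each local authority.
df = df.sort_values(["local_authority", "year"])
df["prevent_budget_lag1"] = df.groupby("local_authority")["prevent_budget"].shift(1)

# Option 1: use the lag itself as the variable of interest (Prevent may take time to bite).
lagged_model = smf.poisson(
    "convictions ~ prevent_budget_lag1 + population + avg_income", data=df.dropna()
).fit()

# Option 2: use the lag as an instrument for the current budget, e.g. with linearmodels:
# from linearmodels.iv import IV2SLS
# iv_model = IV2SLS.from_formula(
#     "convictions ~ 1 + population + avg_income + [prevent_budget ~ prevent_budget_lag1]",
#     data=df.dropna(),
# ).fit()
```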

As Prevent has been running since 2003, and there are roughly 400 local authorities in the UK, that should give a sizeable panel of data on which to conduct some relatively simple regression analyses. Of course, a number of other factors would need to be taken into account – for example, the population of each local authority, the average income within it, any changes to Prevent guidelines, and the introduction or suspension of other counter-extremism strategies. The “identification” of the impact of Prevent would therefore come from variation in Prevent spending (or other Prevent-related variables of interest) and outcomes across local authorities and across time, along the lines sketched below.
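Putting the pieces together, a minimal sketch of the kind of panel regression this implies might look like the following (using the linearmodels package; as before, the dataset and column names are hypothetical):

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Hypothetical panel: one row per local authority per year, indexed by (entity, time).
panel = pd.read_csv("prevent_panel.csv").set_index(["local_authority", "year"])

# Local-authority and year fixed effects absorb time-invariant differences between areas
# and nationwide shocks (e.g. changes to Prevent guidelines or other counter-extremism
# policies), so identification comes from within-authority variation over time.
fe_model = PanelOLS.from_formula(
    "convictions ~ prevent_budget + population + avg_income + EntityEffects + TimeEffects",
    data=panel,
).fit(cov_type="clustered", cluster_entity=True)

print(fe_model.summary)
```

If Prevent were effective, the coefficient on prevent_budget in a regression like this would be expected to be negative – fewer convictions, all else equal, in authorities that spend more.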

Now, I don’t have the data to conduct this analysis. However, I suspect that the data are out there – some organisations have created databases containing details of all extremism-related convictions over a reasonably lengthy period (for example, The Henry Jackson Society has a dataset on all Islamist-related convictions from 1998 to 2015, although this would need to be supplemented with data on the other forms of extremism covered by Prevent). Moreover, local authorities, the Home Office, and/or the relevant government body no doubt have annual records of the amounts spent on Prevent by each local authority (as well as the number of Prevent-related personnel, etc.). As such, combining the two (along with the various controls) should yield a usable dataset fairly easily.

Hence, if the government and/or organisations with an interest in Prevent really do want to assess how effective the Prevent strategy is, then it actually isn’t very difficult to do so.

Fact-checking a few claims about the NHS

With campaigning for the general election having got into full swing last week, many claims have been made regarding which Party would be better for security, the economy, education, and so on. One particular video regarding the NHS started doing the rounds on Facebook a few days ago. The video makes a number of claims regarding the supposed impact that the recent Coalition and Conservative governments have had on the NHS, and goes on to suggest that a Conservative government would be bad for the NHS. For a bit of excitement, here is said video:

[Embedded video]

The claims made in that video are many. Some are valid, whereas others are not. Let’s take each of them in turn.

Claim 1: We are experiencing the largest sustained drop in NHS funding as a percentage of GDP since the NHS was founded.

Reality: This claim is false. As per the graph below (from the Institute for Fiscal Studies), NHS spending as a proportion of GDP has been stable over the past couple of years, and the decrease between 2009 and 2012 was no larger or longer than the decreases in the mid-to-late 1970s or the mid-1990s.

[Figure: UK public spending on health as a share of GDP over time (IFS, bn201_fig1)]

Moreover, the more relevant metric of NHS spending per capita continues to increase – in other words, more is spent per person on the NHS than ever before, although the rate of that increase has slowed in recent years.

[Figure: UK public spending on health per capita over time (IFS, bn201_fig2)]

Claim 2: If the internal market was abolished we [i.e. the NHS] could save billions.

Reality: This claim is also false. The internal market actually creates savings and is not “wasteful” as is claimed in the video. On the contrary, it promotes competition and stimulates the NHS to provide better services – importantly, the benefits of competition in healthcare are well established. Furthermore, it is actually the refusal of many within the NHS to accept the proven benefits of competition that is causing some harm to the NHS – indeed one of NHS Improvement’s main aims is to promote and encourage “buy-in” of competition among those in the NHS. Hence, abolishing the internal market would actually cost billions rather than save them.

Claim 3: Health tourism costs the NHS £200 million per year, which is insignificant in terms of the overall cost of the NHS.

Reality: This is generally true – although the costs to the NHS associated with people who are not ordinarily resident in the UK are of the order of £2 billion per year, that figure includes many people who did not come to the UK specifically and solely to use the NHS (i.e. it includes people who are not “health tourists”). Estimates put the upper bound of the costs associated with those who travel to the UK for the sole purpose of using the NHS at around £300 million per year. When compared to the total annual NHS budget of about £90 billion (i.e. roughly 0.3% of it), the costs associated with health tourism are indeed a trivial amount.

Claim 4: Immigrants are not ruining the NHS, they’re running the NHS.

Reality: True. Immigrants from within the EU currently account for about 10% of doctors and 4% of nurses; if non-EU immigrants are also included, the figures are likely to be somewhat (although probably not dramatically) higher. Given that there are already quite severe labour shortages within the NHS, it is clear that without the immigrants currently working in it, the functioning of the NHS would be severely hampered. Moreover, immigrants are net contributors in terms of taxes versus benefits, so they also contribute to the NHS in that way. Hence, the claim that immigrants are not ruining the NHS is clearly valid.

Claim 5: 1 in 10 nursing posts are vacant and the nursing bursary has been scrapped

Reality: True. The nursing bursary was indeed scrapped at the start of the year – this means there is a much-reduced incentive for people to train as nurses, as they will now have to pay £9,000 per year in tuition fees to do so. This is likely to lead to problems recruiting sufficient nurses in future. Notwithstanding that, there are also problems recruiting nurses now – the Royal College of Nursing suggests that 1 in 9 nursing posts are currently vacant. That figure is actually marginally worse than the one claimed (an 11% vacancy rate versus the 10% claimed).

Claim 6: Tens of thousands of sick patients waited on A&E trolleys this past winter

Reality: Likely to be true. Using data from Quality Watch (and a bit of approximation / extrapolation), roughly 6 million people attended A&E last winter. Of these, around 15% were not seen within the government target of four hours – i.e. about 900,000 people waited more than four hours in A&E. Now, it seems unlikely that all of these people waited on trolleys specifically, but even if only 10% of them did (i.e. 1.5% of all A&E attendances), the “tens of thousands” figure would be accurate. Hence, this claim seems plausible.
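As a quick back-of-the-envelope check of that arithmetic (the inputs are the rough figures quoted above, not precise statistics):

```python
attendances = 6_000_000          # approx. A&E attendances over the winter
share_over_4h = 0.15             # share not seen within the four-hour target
waited_over_4h = attendances * share_over_4h          # ~900,000 people
trolley_share = 0.10             # suppose only 10% of those were on trolleys
on_trolleys = waited_over_4h * trolley_share          # ~90,000, i.e. "tens of thousands"

print(waited_over_4h, on_trolleys, on_trolleys / attendances)  # 900000.0 90000.0 0.015
```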

Conclusion: As with most of these election-video-type things, the video contains some claims that are true, some that are likely to be true, and some that are demonstrably false. Does this mean that the Conservatives are the worst Party for the NHS? Who knows?! That’s for you to decide and take into account (if you want to) when you vote. But at least you’ll now have a more complete set of facts when you do.


Evidentiary standards are slipping

Over the past month, there have been a number of instances in which a politician or journalist has made a bold claim and then ignored requests for, or been unable to provide, any evidence to support it.

For example, Fraser Nelson claimed that being in the EU had been a net detriment to the UK’s trade, and that the evidence he had seen supported that view. However, when provided with evidence that contradicted his claim, and when challenged to provide the evidence to which he had referred, Nelson did not respond. Likewise, Michael Gove claimed that there was evidence to indicate that leaving the EU would provide the UK with a “net dividend”. However, when pressed to provide the evidence that he claimed existed, Gove did not do so; nor did he respond when presented with evidence that contradicted his view.

This is not just a problem for right-leaning opinion formers either; it affects left-leaning ones just as much. For example, despite copious evidence (from the Low Pay Commission) that increasing the minimum wage too far would be detrimental to the employment rate of low-income earners, Jeremy Corbyn claimed that increasing the minimum wage to £10 per hour would raise their living standards. Again, Corbyn provided no evidence to support his claim.

This seems to be part of a wider, and long-running, malaise in which policymakers can make a bold claim without any evidence to support it, yet the claim is taken at face value and isn’t challenged by the media nearly as often as it should be. Even worse (a point made by Jonathan Portes in his recent discussion with Michael Gove), when challenged to provide evidence to support their views, many in the media and political sphere tend to rely on a single statistic or anecdote even when copious evidence exists that contradicts their claim.

That’s assuming the personalities concerned respond at all. Much of the time, they simply remain silent, letting their original claim stand as though it had never been challenged.

This isn’t just a point of pedantry – quite clearly, claims made by those covering and participating in campaign trails have real implications. For example, Vote Leave’s claim that Turkey would join the EU (despite all evidence to the contrary) likely played on some voters’ desire to reduce immigration (according to Ashcroft’s polling, immigration was a major concern for roughly one third of voters), despite the fact that immigration has consistently been shown to benefit the UK and everyone in it. Similar points can be levelled against various claims that the current level of trade between the EU and the UK could easily be replaced by trade with Commonwealth countries (despite the fact that the well-established gravity model of trade directly contradicts this). And it seems likely that the upcoming election will be rife with claims and counter-claims that are (un)supported by evidence to varying degrees.

In essence, it is at least plausible that false claims made by opinion formers were taken to be true by some members of the voting public who based their decisions accordingly, and might have voted differently had they been informed of the actual evidence.

Now, what can be done to ensure that voters (and the general public as a whole) have actual evidence available rather than simply the claims of journalists and politicians?

Well, for a start, the press regulators (IPSO and Impress), the Electoral Commission, and the likes of the Office for National Statistics need to take on a much more proactive role. They should not wait for complaints to be submitted by the general public, but should take it upon themselves to investigate and penalise those in the public eye who make misleading or unsupported claims, with the punishments being far more severe than those currently used (for example, newspapers cannot continue to be allowed to get away with publishing retractions in the bottom corner of a page in the middle of the publication).

Second, political programmes like Newsnight, Question Time, and the Daily Politics should do far more to challenge politicians and journalists to support any claims they make with sufficient evidence (i.e. more than just a single anecdote or statistic). In other words, any journalist or politician appearing on such shows should be expected to demonstrate that their claims are valid. The presenters of such shows should spend far more effort researching the actual evidence, as well as questioning their guests on the basis of any claims that they make.

Third, the Parliamentary Standards Committee needs to recognise that its role in holding MPs to account extends to claims made by MPs that are not supported by any evidence. Such claims are in violation of the MPs’ Code of Conduct and should be treated as such, with the punishments for these violations being far more than the usual slap on the wrist.

Finally, and as a much longer-term remedy, the general public should be given far greater training in the use and abuse of statistics. This should start from an early age and not only train people in how to calculate various (simple) statistics, but also show them how to spot when a commentator is using misleading figures or is relying solely on anecdotes to try to substantiate their points.

If these suggestions were implemented, the ability of journalists and politicians to deliberately obfuscate and mislead would be markedly reduced. That can only be a good thing.

What a surprise! Taking in refugees isn’t detrimental to society

Back in 2015, Germany, recognising the humanitarian crisis in Syria, agreed to allow any refugee who had made it to another EU country to claim asylum in Germany. Inevitably, this resulted in a large influx of refugees – roughly one million were registered at German borders in 2015, with a further 400,000 or so registering in 2016.

Equally inevitably, this was met with howls of protest from those who wanted to “protect their borders”. For example, claims regarding the level of crime committed by refugees and immigrants littered the likes of the Daily Mail and the Express, despite the fact that such crimes accounted for an infinitesimal proportion of all crimes in Germany during that period.

However, until now there hasn’t been a systematic study of the impact of Germany’s decision to accept large numbers of refugees – a paper by Gehrsitz and Ungerer fills this gap. The paper looks at the impact of the number of refugees on local crime rates; domestic and refugee success in the job market; and domestic attitudes to refugees and immigration. There have been some studies that find that immigration / refugees are detrimental, but those studies are either methodologically flawed (e.g. the one by Piopiunik & Ruhose) or written by biased fools such as Borjas.

Given that large numbers of refugees were accepted into Germany, if accepting refugees were detrimental in any of these areas, then those effects would be almost certain to show up in this analysis. However, the study indicates that accepting large numbers of refugees is not detrimental to the local population.

To identify these effects, the study makes use of the fact that refugees were allocated to different German states simply on the basis of what accommodation space was available, creating a pseudo-random distribution of refugees across the different states. In essence, provided that these allocations were not correlated with factors such as the initial level (and trend) of income, unemployment, etc. across the different states, this provides a natural experiment through which the impact of the number of refugees on the domestic population can be estimated. Importantly, the paper finds no correlation between refugee allocations and a state’s initial labour market conditions, demographics, crime rates, etc., such that the inferences resulting from the analysis are highly likely to be valid.

The paper then looks at the impact of the number of refugees that entered a particular state during the 2015–2016 period on: 1) the change in the state’s crime rate between 2013 and 2015; 2) the change in its unemployment rate between 2013Q1 and 2016Q1; and 3) the change in the share of the vote obtained by the anti-immigration “Alternative für Deutschland” party between the 2013 federal election and the 2016 state elections, while also controlling for other factors (such as state GDP per capita, demographics, etc.) that vary across the German states – broadly the kind of design sketched below.
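To make the design concrete, here is a stylised sketch (not the authors’ actual code or data; the file and column names are illustrative) of a cross-state regression of changes in outcomes on refugee inflows, with state-level controls and robust standard errors:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per German state, with changes in outcomes and the
# 2015-16 refugee inflow (per capita), plus state-level controls.
states = pd.read_csv("state_outcomes.csv")

for outcome in ["d_crime_rate", "d_unemployment_rate", "d_afd_vote_share"]:
    model = smf.ols(
        f"{outcome} ~ refugee_inflow_pc + gdp_per_capita + share_young + share_foreign_born",
        data=states,
    ).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    print(outcome,
          round(model.params["refugee_inflow_pc"], 3),
          round(model.bse["refugee_inflow_pc"], 3))
```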

Perhaps unsurprisingly, the paper finds that refugee inflows have:

  • no negative impact on the rate of domestic unemployment in a state (in fact, the results suggest that an increase in refugees actually decreases domestic unemployment, but slightly increases unemployment among non-German workers, likely because the refugees themselves start to show up in the unemployment figures);
  • only a tiny impact on crime rates – a large increase in refugees does not lead to an “explosion” in crime, but increases reported crimes by only around 1.5%, with the majority of this appearing to come from an increase in fare-dodging on public transport;
  • no impact on support for the anti-immigration party – in other words, having more refugees in an area does not seem to lead to people in those areas voting in favour of decreasing immigration.

Now, one potential issue with studies that find “no effect” of a variable is that the finding is driven by the coefficients being estimated imprecisely – this is usually indicated by standard errors that are implausibly large. However, in the case of this study, the standard errors do not appear to be overly large, so there is no reason to believe that the findings of no effect are due to imprecise estimates of the coefficients. (A simple check along these lines is sketched below.)
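Continuing the illustrative regression sketch above, one simple way to check that a “no effect” result is a reasonably precise zero, rather than just a noisy estimate, is to compare the confidence interval with an effect size that would be considered meaningful (the benchmark value here is, of course, made up):

```python
# For the last model fitted in the sketch above: is the confidence interval tight around zero?
estimate = model.params["refugee_inflow_pc"]
ci_low, ci_high = model.conf_int().loc["refugee_inflow_pc"]
meaningful_effect = 0.05   # hypothetical benchmark for what would count as a "large" effect

print(f"estimate = {estimate:.3f}, 95% CI = [{ci_low:.3f}, {ci_high:.3f}]")
if max(abs(ci_low), abs(ci_high)) < meaningful_effect:
    print("A precisely estimated (near-)zero effect")
else:
    print("Too imprecise to distinguish 'no effect' from a meaningful effect")
```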

Hence, there is strong reason to believe that accepting even a large number of refugees is not detrimental to the local population in terms of crime or unemployment (or other factors that might drive local people to vote for an anti-immigration political party). Although these results do only refer to short-term effects (i.e. those occurring within 6-12 months of a large influx of refugees), there is no reason to believe that the long-term effect would be any different. Indeed, many studies (e.g. Foged & Peri, the IMF) find that the domestic population actually benefits from taking in refugees and immigrants in the long-run.

In other words, arguments that taking in refugees will harm (or be at the expense of) the domestic population are highly likely to be false.

Grammar Schools: Sam Freedman really should know better

Over the past few days there has been quite a bit written about whether or not selective schooling (i.e. allocating children to schools at age 11 based on ability) is beneficial, either in terms of social mobility, educational outcomes, or other areas. This stems from rumours that Theresa May is reviewing the current ban on new grammar schools.

A number of commentators have claimed that re-introducing academic selection at age 11 is a bad idea. For example, Sam Freedman, an executive director at Teach First and someone who really should know better, has claimed that selective education is bad for social mobility, societal integration, the accuracy of assessing ability, and/or parental choice of school.

However, none of Freedman’s supposed criticisms is supported by the evidence.

First, there is strong evidence to support the idea that grammar schools actually improve social mobility, and countries with selective systems tend to be no less integrated than those without. In making his claim that grammar schools harm social mobility and lead to decreased integration, Freedman cites this webpage. However, the results displayed on that webpage rely solely on correlations and do not try to control for any other factors that might account for the apparent relationship between deprivation and performance. For example, the difference in wages between grammar- and comprehensive-educated people could simply reflect the fact that grammar schools select those who are more likely to obtain a better wage anyway and enable them to reach their full potential, whereas those students would be held back if they were forced to attend a comprehensive. The webpage also does nothing to account for different demographics beyond an entirely arbitrary and undefined measure of “deprivation”.

Indeed, the webpage cited by Freedman seems to view social mobility as being achieved by “preventing the gifted from reaching their full potential” rather than by “allowing everyone to reach their maximum”. However, there is a substantial weight of evidence indicating that selective schools not only enable the most-skilled to achieve their full potential, but also substantially improve outcomes for the less-skilled. For example, Dale & Krueger state that “students who attended more selective colleges earned about the same as students of seemingly comparable ability who attended less selective schools. Children from low-income families, however, earned more if they attended selective colleges.”

Similarly, Galindo-Rueda & Vignoles find that “the most able pupils in the selective school system did do somewhat better than those of similar ability in mixed ability school systems. Thus the grammar system was advantageous for the most able pupils in the system, i.e. highly able students who managed to get into grammar schools.”

In other words, selective schools incontrovertibly enable the highly-skilled to achieve their full potential as well as benefiting children from low-income families. This result is also supported by a study commissioned by the Sutton Trust – despite their avidly anti-selective school bias leading them to try to weasel their way out of the positive grammar school effect, the study finds that grammar schools tend to increase student performance by roughly two grades per subject taken at GCSE.

Second, Freedman’s claim that the 11-plus is poor at assessing ability does not stand up to scrutiny. Freedman claims that 70,000 students are wrongly classified by the 11-plus test – it is not clear whether he means 70,000 over the entire span of grammar schools’ existence, or 70,000 “mistakes” every year. If the former, then the proportion of mistakes is clearly tiny, as millions of people have taken the 11-plus since it was first used. If the latter, then assuming that all 700,000 11-year-olds take the 11-plus (not an unreasonable assumption), that gives a “failure rate” of just 10%. Clearly this is not very large. And those who suggest that even a single failure is unacceptable when it comes to a child’s education are being completely impractical, since no educational system exists that can completely eradicate failures.

Finally, Freedman claims that grammar schools are “anti-choice”. However, this is clearly false – there is an obvious mechanism by which grammar schools promote choice of school. Specifically, the presence of an 11-plus test gets parents thinking about what will happen after the test, and encourages them to research different schools and to think about which school(s) would be best for their child. In other words, the 11-plus exam incentivises parental involvement in school choice, thereby promoting it.

Hence, Freedman is incorrect on every single point he mentions about selective schools. From someone that high up in Teach First, that is simply unforgivable.

George Osborne: A solid, but not spectacular Chancellor

As announced last night, George Osborne is no longer Chancellor of the Exchequer. Plenty of articles have already been written regarding how he’ll be remembered and whatnot (see, for example, here), but what really matters in an evaluation of his performance as Chancellor is focusing on the long-term impact of his main policies.

Of course, the main focus of Osborne’s term as Chancellor was “austerity” (or, in more technical terms, a “fiscal consolidation”). There is much debate as to whether austerity is harmful or beneficial to growth in the short run – for example, Alesina & Ardagna, and some parts of the IMF, find that fiscal consolidations actually increase short-term growth, whereas the likes of Guajardo et al. and other parts of the IMF find that fiscal consolidations harm short-term growth.

However, what really matters in evaluating the impact of austerity is its likely effect on long-term growth. Here, none of the aforementioned studies has anything to say, but there are good reasons to believe that austerity is beneficial for long-term growth. For example, it seems plausible that the amount of time required for a country to re-establish any credibility (either with taxpayers or with the central bank) lost by running continually large fiscal deficits could be relatively long – convincing people that a country is now fiscally responsible is unlikely to be a matter of a few years’ work.

In other words, it is plausible that it could take longer than just a few years for people to change their opinion regarding a country’s fiscal responsibility, such that the full impact of a fiscal consolidation is only likely to be felt far into the future. Moreover, even though a recent working paper (by Fatás & Summers) suggests that fiscal consolidations hamper long-run growth, that paper is based on a methodology that is fundamentally flawed. Hence, austerity per se could have been a good policy of Osborne’s.

However, Osborne erred when he cut government spending on investments and infrastructure. At a time of incredibly low interest rates, it would have made sense to borrow to invest in projects that would have reaped a return in the future – the costs of borrowing are low, while the expected future benefits of such investments are likely to be high (in terms of their impact on future growth and on future tax revenues). Therefore, Osborne’s focus on cutting all, rather than just day-to-day, spending was misguided. Just as misguided (for the same reasons, since it prevented Osborne from borrowing to invest in infrastructure) was his Fiscal Charter.

Similarly, protecting spending on the NHS and on international development meant that there was little incentive for those departments to find savings, despite the fact that they, and the NHS in particular, are bloated and full of inefficiencies (witness the large NHS deficits). If those departments had not had their budgets protected, a more efficient and equitable distribution of the cuts to day-to-day spending could have been achieved (since if the NHS or development budgets had been cut slightly, other departments’ budgets would not have needed to be decreased as much). The same goes for the triple lock on pensions. So, another negative point for Osborne there.

On the other hand, Osborne did set up the Office for Budget Responsibility (OBR), which was undoubtedly a very good thing. Although not quite as dramatic as Labour granting the Bank of England (instrument) independence in 1997, this step was important since it enabled and promoted independent oversight of government forecasts and spending plans. Moreover, it added much-needed rigour to Treasury analysis and to the evaluation of government performance against fiscal targets, since those working in the Treasury know that people at the OBR will review and evaluate any plans and forecasts.

Getting on to some of the smaller issues, the pasty-tax debacle was also a negative point. The introduction of the tax was actually a decent idea – it removed some of the myriad exemptions that apply to VAT, thereby simplifying the tax system – but the subsequent reversal of the policy in the face of a (relatively small) public backlash was weak and disappointing to see. Likewise, the introduction of the National Living Wage was a good idea, but restricting it to the over-25s seems rather a cop-out; instead, the minimum wage should (and easily could) have been increased to the level of the NLW, thereby benefiting more people without substantially increasing businesses’ costs.

There are also things that Osborne couldn’t really do much about, but for which some might blame him anyway. The lack of productivity growth might be one, but that’s more the responsibility of other departments than it is the Treasury. Failing to meet, or continually adjusting, his fiscal targets could be another – but Osborne was hampered in meeting those because of sluggish growth in the global economy.

Overall, then, it seems as though there are plenty of things over which Osborne can be criticised (e.g. refusing to borrow to invest, protecting certain departments’ budgets), but equally there are plenty of policies he introduced that are worthy of praise (e.g. the OBR, consolidating day-to-day fiscal spending). As such, Osborne will most likely go down in history as fairly middle of the road – some good bits, some bad bits, but generally not outstanding in either category.

Glassdoor’s “contribution” to gender wage gap research

In a whirlwind of publicity and self-promotion, Glassdoor recently released the results of a “study” that claimed to prove the existence of a gender pay gap even when potential differences in areas such as “personal characteristics, job title, company, industry and other factors” are accounted for. On that basis, Glassdoor boldly claims that men are paid about 5% more than women.

However, the approach used by Glassdoor suffers from a major problem. In particular, it relies on Glassdoor being able to fully control for all other factors (such as experience, qualifications, etc.) that might determine someone’s wage. Although Glassdoor notes this itself (in a single paragraph relegated to the back of the report), it does not qualify any of its headline results accordingly.

In fact, Glassdoor only includes controls for a few personal characteristics (such as age, qualifications, and experience) and some factors relating to a person’s occupation and industry (such as job title and company name). In other words, Glassdoor excludes a number of factors that are likely to be relevant when it comes to explaining someone’s wage.

Indeed, other studies have found that factors such as ethnicity, whether or not someone is a member of a trade union, a person’s mental and physical health, and even language skills can be important determinants of a person’s wage. The Glassdoor study does not account for any of these, and thereby erroneously attributes differences in wages that could be due to these (or other) factors to the gender pay gap.

In addition, the Glassdoor study does not seem to account for whether someone works part-time or full-time – as part-time workers are likely to be paid less than full-time workers, even on an hourly basis, Glassdoor’s apparent failure to include such a distinction in its analysis could bias its results substantially (a simple illustration of this kind of omitted-variable bias is sketched below). Similarly, the Glassdoor study does not even try to account for potentially unobservable differences (such as personal preferences regarding careers), and this failure further biases Glassdoor’s estimate of the gender pay gap.
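To illustrate why this matters, here is a small simulated example (entirely made-up numbers, not Glassdoor’s data) in which gender has no direct effect on pay, but part-time work is both more common among women and less well paid; omitting the part-time control produces a spurious “gender pay gap”:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 10_000

female = rng.integers(0, 2, n)
# Part-time work is assumed (for illustration) to be more common among women...
part_time = (rng.random(n) < 0.15 + 0.25 * female).astype(int)
# ...and to carry a lower hourly wage; gender itself has no direct effect here.
log_wage = 2.5 - 0.10 * part_time + rng.normal(0, 0.2, n)

df = pd.DataFrame({"log_wage": log_wage, "female": female, "part_time": part_time})

naive = smf.ols("log_wage ~ female", data=df).fit()
controlled = smf.ols("log_wage ~ female + part_time", data=df).fit()

print(naive.params["female"])       # negative: an apparent "gender pay gap"
print(controlled.params["female"])  # roughly zero once part-time status is controlled for
```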

Finally, the data used by Glassdoor are from self-reported salaries and characteristics that are recorded by members of the Glassdoor website. There are plenty of reasons to suspect that these data are unreliable – at the very least, it is widely recognised that figures that are self-reported are likely to be subject to considerable bias, such that relying on them for a study such as this is nonsensical.

Therefore, it is clear that Glassdoor’s “study” into the gender wage gap is merely an exercise in self-promotion rather than a useful contribution to the substantial amount of past research that has been conducted on this issue.