How to assess the effectiveness of the government’s Prevent counter-extremism programme

In the aftermath of the Manchester bombing, many questions were raised regarding the effectiveness of the government’s counter-extremism programmes, with particular focus on the Prevent arm of that strategy. For example, the FT reported that Salman Abedi was referred to Prevent but that the report was not followed up (although note that the police state they cannot find any record of Abedi’s referral to Prevent). More generally, the Prevent programme has been called “toxic” by Andy Burnham, now Mayor of Greater Manchester, and by the Home Affairs Select Committee.

However, there are also a number of examples of the Prevent programme having done a lot of good. For example, two teenagers were stopped from travelling to Syria after being referred to Prevent by their parents in 2015, and the programme is credited with helping to stop 150 people (including 50 children) from going to fight in Syria in 2016.

Importantly, much of the discussion regarding Prevent’s effectiveness has focused on anecdotal evidence: the odd stylised fact here and there, a couple of case studies. Most criticism or praise of Prevent focuses on a few examples where it has not worked, has been implemented badly, or has succeeded. There are even calls to expand or to shut down Prevent without any evidence as to whether or not it is actually an effective programme.

Indeed, as far as I’m aware, there has been no (publicly available) rigorous or systematic assessment of Prevent’s effectiveness. (Although the recently launched book “De-Radicalisation in the UK Prevent Strategy: Security, Identity and Religion” by M. S. Elshimi claims to constitute such an assessment, its results are based on an absurdly small sample of only 27 people and therefore cannot be considered a systematic analysis.) However, conducting a systematic assessment could be a relatively simple exercise.

In particular, if data are available on the number of extremist / terror convictions and/or number of people successfully and unsuccessfully “treated” by Prevent at the level of individual local authorities, then it would be possible to use variations across those local authorities to assess Prevent’s effectiveness.

Put simply, the “outcome” variable (i.e. the metric that assesses Prevent’s success) could be the number of extremist / terror convictions or the proportion of people referred to Prevent who are successfully treated. (Obviously, if the number of convictions is used, it would be important to allocate those convictions to the local authority in which the extremist grew up and/or resided, rather than to the one in which the extremist activity was carried out.) Of course, there would also be technical considerations regarding whether the outcome variable is a “count” variable, is bounded due to being expressed in percentage terms, and so on, but those can be dealt with relatively easily.
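As a concrete (and entirely hypothetical) sketch of how those two outcome variables could be constructed, the snippet below counts convictions by the offender’s home local authority and computes the proportion of referrals successfully treated. All names and figures are invented for illustration:

```python
# Hypothetical sketch: building the two candidate outcome variables per
# local authority and year. All names and figures are invented.

from collections import defaultdict

# Each conviction record carries both where the offence took place and
# where the offender grew up / resided; we count by the latter.
convictions = [
    {"year": 2014, "offence_authority": "Westminster", "home_authority": "Birmingham"},
    {"year": 2014, "offence_authority": "Manchester", "home_authority": "Manchester"},
    {"year": 2015, "offence_authority": "Westminster", "home_authority": "Manchester"},
]

convictions_by_home = defaultdict(int)
for c in convictions:
    convictions_by_home[(c["home_authority"], c["year"])] += 1

# Alternative outcome: proportion of Prevent referrals successfully "treated".
referrals = {("Manchester", 2014): {"referred": 40, "treated": 25}}
success_rate = {
    key: r["treated"] / r["referred"] for key, r in referrals.items()
}

print(convictions_by_home[("Manchester", 2014)])  # 1
print(success_rate[("Manchester", 2014)])         # 0.625
```

The count outcome would then call for a Poisson-type model, while the proportion outcome would need to respect its bounds, as noted above.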

The explanatory “variable of interest” that would measure the actual effectiveness of spending on the Prevent strategy would be each local authority’s annual budget for Prevent. If Prevent were effective, one would expect this variable to be negatively related to the number of convictions (since a successful Prevent would stop people before they committed a crime) and positively related to the proportion of successfully treated people. Alternative variables of interest could include the number of Prevent-dedicated personnel in each local authority or the amount of Prevent training provided to practitioners – each of these could be investigated to try to identify the most effective/important aspect of the Prevent strategy.

Note that it is unlikely that there would be any simultaneity between the Prevent budget (or other variable of interest) and the outcome variable. Although it is plausible that current Prevent spending is based on past extremist activity in a local area (i.e. local areas with higher extremist activity get more money for Prevent), it is unlikely that current Prevent spending reacts quickly enough to be affected by current extremist activity. Nonetheless, this could be investigated by using lags of the Prevent budget variable as instruments, or as the variables of interest themselves (since it could well be the case that Prevent takes time to have an impact).
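To illustrate the instrumenting idea with invented numbers: with a single instrument (the lagged budget) and a single endogenous regressor (the current budget), the instrumental-variables slope estimate reduces to cov(z, y) / cov(z, x). This is only a toy sketch, not an analysis of real Prevent data:

```python
# Illustrative sketch (hypothetical numbers): using the lagged Prevent
# budget as an instrument for the current budget. With one instrument and
# one endogenous regressor, the IV slope is cov(z, y) / cov(z, x).

def mean(v):
    return sum(v) / len(v)

def cov(a, b):
    ma, mb = mean(a), mean(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

budget = [1.0, 1.5, 2.0, 2.5, 3.0]          # current Prevent spend (x)
lagged_budget = [0.8, 1.2, 1.9, 2.4, 2.9]   # instrument (z)
convictions = [9.0, 8.0, 6.0, 5.0, 3.0]     # outcome (y)

beta_iv = cov(lagged_budget, convictions) / cov(lagged_budget, budget)
print(round(beta_iv, 2))  # → -3.01 on these made-up numbers
```

A negative estimate here is what a successful Prevent would look like: more spending, fewer convictions.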

As Prevent has been running since 2003, and there are roughly 400 local authorities in the UK, that should give a sizeable panel of data on which to conduct some relatively simple regression analyses. Of course, a number of other factors would need to be taken into account – for example, the population of each local authority, the average income within it, any changes to Prevent guidelines and/or the introduction or suspension of other counter-extremism strategies. The “identification” of the impact of Prevent would therefore come through variation in Prevent spending (or other Prevent-related variables of interest) and outcomes across local authorities and across time.
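The panel approach described above can be sketched with the “within” (fixed-effects) estimator: demean spending and convictions within each local authority, then regress the demeaned outcome on the demeaned spend. The data below are entirely made up, and a real analysis would add year effects and the controls listed above:

```python
# Minimal sketch of the within (fixed-effects) estimator with one
# regressor, on invented panel data. Real analysis would add year
# effects, controls and proper standard errors.

from collections import defaultdict

# (authority, year, prevent_spend, convictions) -- all figures invented
panel = [
    ("A", 2014, 1.0, 8.0), ("A", 2015, 2.0, 6.0), ("A", 2016, 3.0, 4.0),
    ("B", 2014, 2.0, 5.0), ("B", 2015, 3.0, 4.0), ("B", 2016, 4.0, 2.0),
]

groups = defaultdict(list)
for auth, year, x, y in panel:
    groups[auth].append((x, y))

# Demean within each local authority to sweep out authority fixed effects.
xd, yd = [], []
for auth, rows in groups.items():
    mx = sum(x for x, _ in rows) / len(rows)
    my = sum(y for _, y in rows) / len(rows)
    for x, y in rows:
        xd.append(x - mx)
        yd.append(y - my)

# OLS slope on the demeaned data = within estimator.
beta = sum(a * b for a, b in zip(xd, yd)) / sum(a * a for a in xd)
print(round(beta, 2))  # → -1.75 on these made-up numbers
```

With roughly 400 authorities and well over a decade of data, the same calculation (with year effects and controls added) would be straightforward in any standard statistics package.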

Now, I don’t have the data to conduct this analysis. However, I suspect that the data are out there – some organisations have created databases containing details of all extremism-related convictions over a reasonably lengthy period (for example, the Henry Jackson Society has a dataset of all Islamist-related convictions from 1998 to 2015, although this would need to be supplemented with data on other forms of extremism covered by Prevent). Moreover, local authorities, the Home Office, or the relevant government body no doubt have annual records of the amounts spent on Prevent by local authorities (as well as the number of Prevent-related personnel etc.). As such, combining the two (along with the various controls) should yield a usable dataset fairly easily.

Hence, if the government and/or organisations with an interest in Prevent really do want to assess how effective the Prevent strategy is, then doing so actually isn’t very difficult.

Evidentiary standards are slipping

Over the past month, there have been a number of instances in which a politician or journalist has made a bold claim and then ignored requests for, or been unable to provide, any evidence to support it.

For example, Fraser Nelson claimed that being in the EU had been a net detriment to the UK’s trade, and that the evidence he had seen supports that view. However, when provided with evidence that contradicted his claim, and when challenged to provide the evidence to which he referred, Nelson did not provide any sort of response. Likewise, Michael Gove claimed that there was evidence to indicate that leaving the EU would provide the UK with a “net dividend”. However, when pressed to provide the evidence that he claimed existed, Gove did not do so; nor did he respond to the provision of evidence that contradicted his view.

This is not just a problem for right-leaning opinion makers either; it affects left-leaning ones just as much. For example, despite copious evidence (from the Low Pay Commission) that raising the minimum wage too far would be detrimental to the employment rate of low-income earners, Jeremy Corbyn claimed that increasing the minimum wage to £10 per hour would raise their living standards. Again, Corbyn provided no evidence to support his claim.

This seems to be part of a wider, and long-running, malaise, in which policymakers can make a bold claim without any evidence to support it, yet said claim is taken at face value and isn’t challenged by the media nearly as often as it should be. Even worse (and a point made by Jonathan Portes in his recent discussion with Michael Gove), when challenged to provide evidence to support their views many in the media and political sphere tend to rely on a single statistic or anecdote even if copious evidence exists that contradicts their claim.

That’s assuming that the personalities concerned respond at all. Much of the time, they simply remain silent, letting their original claim stand as though it had not been challenged at all.

This isn’t just a point of pedantry – quite clearly, claims made by those covering and participating in campaign trails have real implications. For example, Vote Leave’s claim that Turkey would join the EU (despite all evidence to the contrary) likely played on some voters’ desire to reduce immigration (according to Ashcroft, immigration was a major concern for roughly one third of voters), despite the fact that immigration has repeatedly been shown to benefit the UK and everyone in it. Similar points can be levelled against various claims that the current level of trade between the EU and the UK could easily be replaced by trade with Commonwealth countries (despite the fact that the well-established gravity model of trade directly contradicts this). And it seems likely that the upcoming election will be rife with claims and counter-claims that are (un)supported by evidence to varying degrees.

In essence, it is at least plausible that false claims made by opinion formers were taken to be true by some members of the voting public who based their decisions accordingly, and might have voted differently had they been informed of the actual evidence.

Now, what can be done to ensure that voters (and the general public as a whole) have actual evidence available rather than simply the claims of journalists and politicians?

Well, for a start, the press regulators (IPSO and Impress), the Electoral Commission, and the likes of the Office for National Statistics need to take on a much more proactive role. They should not wait for complaints to be submitted by the general public, but should take it upon themselves to investigate and penalise those in the public eye who make misleading or unsupported claims. The punishments should also be far more severe than those currently used – for example, newspapers cannot continue to be allowed to get away with publishing retractions in the bottom corner of some page in the middle of their publication.

Second, political programmes like Newsnight, Question Time, and the Daily Politics should do far more to challenge politicians and journalists to support any claims they make with sufficient evidence (i.e. more than just a single anecdote or statistic). In other words, any journalist or politician appearing on such shows must be able to demonstrate that their claims are valid. The presenters of such shows should spend far more effort researching the actual evidence, as well as questioning their guests about any claims they make.

Third, the Parliamentary Standards Committee needs to realise that its role in holding MPs accountable extends to claims made by MPs that are not supported by any evidence. Such claims are in violation of the MPs’ Code of Conduct and should be treated as such, with the punishments for these violations being far more than the usual slap on the wrist.

Finally, and as a much more long-term remedy, the general public should be given far greater training in the use and abuse of statistics. This should start from an early age and not only train people in how to calculate various (simple) statistics, but also provide information on how to spot when a commentator is using misleading figures or is relying solely on anecdotes to try to substantiate their points.

Once these suggestions have been implemented, the ability of journalists and politicians to deliberately obfuscate and mislead would be markedly reduced. That can only be a good thing.

Immigration benefits us all – now the IMF gets in on the act

Only a short time after the Foged & Peri paper (summarised here) found that an “influx” of immigrants to Denmark benefited both high-skilled and low-skilled workers in the local population, the IMF has examined whether those results apply to other advanced economies.

And, guess what? They do! Unsurprisingly.

The study uses a fairly nifty approach to account for potential reverse causation between migration and GDP per capita (since migrants might prefer to move to countries with higher GDP per capita in the first place): a “gravity model” is used to instrument for the share of migrants in a country, incorporating various “push” factors (such as growth in the origin country, demographic variables etc.) and other controls. This proves once again that describing something as “gobbledygook” just because you don’t understand it isn’t a particularly sensible thing to do.
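A stylised illustration of the two-stage logic, with invented figures (this is my sketch of the general method, not the IMF’s actual model or data): “push” factors from origin countries predict the migrant share in the first stage, and the fitted shares – purged of reverse causation – then explain GDP per capita in the second stage:

```python
# Toy two-stage least squares sketch of a gravity-style instrument.
# All numbers are invented for illustration.

def ols_slope_intercept(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return b, my - b * mx

push = [1.0, 2.0, 3.0, 4.0, 5.0]            # push-factor index (instrument)
migrant_share = [2.0, 3.5, 5.0, 6.5, 8.0]   # % of population
gdp_pc = [30.0, 33.0, 36.5, 39.0, 42.5]     # GDP per capita, $000s

# First stage: migrant share explained by push factors alone.
b1, a1 = ols_slope_intercept(push, migrant_share)
fitted_share = [a1 + b1 * p for p in push]

# Second stage: GDP per capita on the fitted (exogenous) migrant share.
b2, _ = ols_slope_intercept(fitted_share, gdp_pc)
print(round(b2, 2))  # → 2.07 on these made-up numbers
```

Because the fitted shares are driven only by origin-country conditions, the second-stage slope is free of the “migrants move to rich countries” feedback.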

The paper’s main findings are threefold. First, a 1 percentage point increase in the proportion of the population made up of migrants increases GDP per capita by 2%. Interestingly, this benefit arises via an increase in labour productivity, rather than via an increase in the proportion of the population that is of working age.

For example, high-skilled immigrants can increase productivity through innovation and positive spillovers on native wages, while low-skilled workers can increase productivity by enabling native workers to re-train and move into more complex occupations (exactly as was found by Foged & Peri). An alternative mechanism cited by the IMF study suggests that the presence of low-skilled female immigrants increases the provision of household and child-care services, thereby increasing the labour supply of high-skilled native women. This result is robust to controlling for technology, trade openness, demographics, and country development.

Second, these benefits arise from both low-skilled and high-skilled migrants. As above, both skill-types affect GDP per capita through increasing labour productivity, rather than via increasing the proportion of the population that is of working age. However, the effect does appear to be more statistically significant for migration by low-skilled workers than it is for high-skilled migrants.

The study suggests that this difference could reflect differences in the impact of high-skilled migrants across different countries, but this seems unlikely to be sufficient to render the impact insignificant. More likely is the second reason posited by the study – namely, that high-skilled migrants initially might have to obtain jobs for which they are over-qualified, thereby meaning that their impact on the incentives of high-skilled native workers to retrain etc. is limited at first.

Third, the benefits to native workers arise across the entire income distribution. Both low-skilled and high-skilled immigration increase the GDP per capita of those in the bottom 90% of the income distribution by roughly the same amount, while high-skilled immigration increases the GDP per capita of those in the top 10% of the income distribution by roughly twice as much as does low-skilled immigration.

However, the study does not examine the distribution within the bottom 90% particularly closely – it simply looks at the estimated impact of immigration on the Gini coefficient and concludes that the distribution within the bottom 90% would not change significantly. It would have been better to look at the impact of immigration on each decile or quintile of the income distribution separately, so as to give a more complete picture of the impact of immigration across the income distribution.
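To illustrate the kind of decile-level check suggested here, the toy sketch below estimates the immigration effect separately for each income group rather than relying on a single summary statistic. The income series and group labels are invented:

```python
# Toy sketch: estimate the immigration effect separately per income
# group instead of a single Gini-based summary. Data are invented.

def slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

migrant_share = [1.0, 2.0, 3.0, 4.0]  # % of population over time

# Income per head by group (invented), one series per group.
income_by_decile = {
    "bottom 10%": [10.0, 10.4, 10.8, 11.2],
    "median":     [20.0, 20.9, 21.8, 22.7],
    "top 10%":    [50.0, 52.0, 54.0, 56.0],
}

effects = {d: round(slope(migrant_share, inc), 2) for d, inc in income_by_decile.items()}
print(effects)
```

Running the same regression decile by decile would reveal any redistribution hidden inside the bottom 90% that a Gini coefficient smooths over.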

The paper (and particularly the blog post linked to above) ends by getting somewhat more political. In particular, the study suggests that there is a need to improve support for native workers who want to re-train, find a new job and so on. However, these policy suggestions are made without taking into account the fact that some countries already have plenty of such schemes in place, to the extent that increasing their provision might not be efficient. Of course, that’s not to say that some countries wouldn’t benefit from increasing the provision of such schemes.

Grammar Schools: Sam Freedman really should know better

Over the past few days there has been quite a bit written about whether or not selective schools (i.e. allocating children to schools at age 11 based on ability) are beneficial, in terms of social mobility, educational outcomes or other areas. This stems from rumours that Theresa May is reviewing the current ban on grammar schools.

A number of commentators have claimed that re-introducing academic selection at age 11 is a bad idea. For example, Sam Freedman, an executive director of Teach First and someone who really should know better, has claimed that selective education is bad for social mobility, societal integration, accuracy of assessing ability, and/or promoting parental choice of school.

However, none of Freedman’s supposed criticisms is supported by the evidence.

First, there is strong evidence to support the idea that grammar schools actually improve social mobility, and countries with selective systems tend to be no less integrated than those without. In making his claim that grammar schools harm social mobility and lead to decreased integration, Freedman cites this webpage. However, the results displayed on that webpage rely solely on correlations and do not control for any other factor that might account for the apparent relationship between deprivation and performance. For example, the difference in wages between grammar- and comprehensive-educated people could simply reflect the fact that grammar schools select those who are more likely to obtain a better wage anyway and enable them to reach their full potential, whereas those students would be held back if they were forced to attend a comprehensive. The webpage also does nothing to account for different demographics beyond an entirely arbitrary and undefined measure of “deprivation”.

Indeed, the webpage cited by Freedman seems to view social mobility as something achieved by “preventing the gifted from reaching their full potential” rather than by “allowing everyone to reach their maximum”. However, there is a substantial weight of evidence indicating that selective schools not only enable the most-skilled to achieve their full potential, but also substantially improve outcomes for the less-skilled. For example, Dale & Krueger state that “students who attended more selective colleges earned about the same as students of seemingly comparable ability who attended less selective schools. Children from low-income families, however, earned more if they attended selective colleges.”

Similarly, Galindo-Rueda & Vignoles find that “the most able pupils in the selective school system did do somewhat better than those of similar ability in mixed ability school systems. Thus the grammar system was advantageous for the most able pupils in the system, i.e. highly able students who managed to get into grammar schools.”

In other words, selective schools incontrovertibly enable the highly-skilled to achieve their full potential as well as benefiting children from low-income families. This result is also supported by a study commissioned by the Sutton Trust – despite their avidly anti-selective school bias leading them to try to weasel their way out of the positive grammar school effect, the study finds that grammar schools tend to increase student performance by roughly two grades per subject taken at GCSE.

Second, Freedman’s claim that the 11-plus is poor at assessing ability does not stand up to scrutiny. Freedman claims that 70,000 students are wrongly classified by the 11-plus test, but it is not clear whether he means 70,000 over the entire span of grammar schools’ existence or 70,000 “mistakes” every year. If the former, the proportion of mistakes is clearly tiny, as millions of people have taken the 11-plus since it was first used. If the latter, then assuming that all 700,000 11-year-olds take the 11-plus (not an unreasonable assumption), that gives a “failure rate” of just 10%. Clearly this is not very large. And those who suggest that even a single failure is unacceptable when it comes to a child’s education are being completely impractical, since no educational system exists that can completely eradicate failures.

Finally, Freedman claims that grammar schools are “anti-choice”. However, this is clearly false – there is an obvious mechanism by which grammar schools promote choice of school. Specifically, the presence of an 11-plus test gets parents thinking about what will happen after the test, and encourages them to research different schools and think about which school(s) would be best for their child. In other words, the 11-plus exam incentivises parental involvement in school choice, thereby promoting it.

Hence, Freedman is incorrect on every single point he mentions about selective schools. From someone that high up in Teach First, that is simply unforgivable.

George Osborne: A solid, but not spectacular Chancellor

As announced last night, George Osborne is no longer Chancellor of the Exchequer. Plenty of articles have already been written regarding how he’ll be remembered and whatnot (see, for example, here), but what really matters in an evaluation of his performance as Chancellor is focusing on the long-term impact of his main policies.

Of course, the main focus of Osborne’s term as Chancellor was “austerity” (or, as it is described in technical terms, a “fiscal consolidation”). There is much debate as to whether austerity is harmful or beneficial to growth in the short run – for example, Alesina & Ardagna, and some parts of the IMF, find that fiscal consolidations actually increase short-term growth, whereas the likes of Guajardo et al. and other parts of the IMF conclude that fiscal consolidations harm short-term growth.

However, what really matters in evaluating the impact of austerity is its likely effect on long-term growth. Here, none of the aforementioned studies has anything to say, but there are good reasons to believe that austerity is beneficial for long-term growth. For example, it seems plausible that the amount of time required for a country to re-establish any lost credibility (whether with taxpayers or the central bank) arising from continually running large fiscal deficits could be relatively high – convincing people that a country is now fiscally responsible is unlikely to be the matter of a few years’ work.

In other words, it is plausible that it could take longer than just a few years for people to change their opinion regarding a country’s fiscal responsibility, such that the full impact of fiscal consolidations is only likely to be felt far into the future. Moreover, even though a recent working paper (by Fatás & Summers) suggests that fiscal consolidations hamper long-run growth, that paper is based on a methodology that is fundamentally flawed. Hence, austerity per se could have been a good policy of Osborne’s.

However, Osborne erred when he cut government spending on investments and infrastructure. At a time of incredibly low interest rates, it would have made sense to borrow to invest in projects that would have reaped a return in the future – the costs of borrowing are low, while the expected future benefits of such investments are likely to be high (in terms of their impact on future growth and on future tax revenues). Therefore, Osborne’s focus on cutting all, rather than just day-to-day, spending was misguided. Just as misguided (for the same reasons, since it prevented Osborne from borrowing to invest in infrastructure) was his Fiscal Charter.

Similarly, protecting spending on the NHS and on international development meant that there was little incentive for those departments to find savings, despite the fact that they, and the NHS in particular, are bloated and full of inefficiencies (witness the large NHS deficits). If those departments had not had their budgets protected, a more efficient and equitable distribution of the cuts to day-to-day spending could have been achieved (since if the NHS or development budgets had been cut slightly, other departments’ budgets would not have needed to decrease as much). The same goes for the triple lock on pensions. So, another negative point for Osborne there.

On the other hand, Osborne did set up the Office for Budget Responsibility (OBR), which was undoubtedly a very good thing. Although not quite as dramatic as Labour granting the Bank of England (instrument) independence in 1997, this step was important because it enabled and promoted independent oversight of government forecasts and spending plans. Moreover, it added much-needed rigour to Treasury analysis and to the evaluation of government performance against fiscal targets, since those working in the Treasury know that people at the OBR will review and evaluate any plans and forecasts.

Getting on to some of the smaller issues, the pasty-tax debacle was also a negative point. The introduction of the tax was actually a decent idea – it removed some of the myriad exemptions that apply to VAT, thereby simplifying the tax system – but the subsequent reversal of the policy in the face of a (relatively small) public backlash was weak and disappointing to see. Likewise, the introduction of the National Living Wage was a good idea, but restricting it to over-25s seems rather a cop-out; instead, the minimum wage should (and could easily) have been increased to the level of the NLW, thereby benefiting more people without substantially increasing businesses’ costs.

There are also things that Osborne couldn’t really do much about, but for which some might blame him anyway. The lack of productivity growth might be one, but that’s more the responsibility of other departments than it is the Treasury. Failing to meet, or continually adjusting, his fiscal targets could be another – but Osborne was hampered in meeting those because of sluggish growth in the global economy.

Overall, then, it seems as though there are plenty of things over which Osborne can be criticised (e.g. refusing to borrow to invest, protecting certain departments’ budgets), but equally there are plenty of policies he introduced that are worthy of praise (e.g. the OBR, consolidating day-to-day fiscal spending). As such, Osborne will most likely go down in history as fairly middle of the road – some good bits, some bad bits, but generally not outstanding in either category.

The cost of Brexit (part 2 of who knows how many)

In response to the Treasury’s report on the costs of Brexit (and, obviously, to my blog post covering that report), a group calling itself “Economists for Brexit” published a pamphlet which it claims contains a more reasonable estimate of the impact of Brexit on the UK economy.

Unsurprisingly, they find, contrary to the Treasury’s report (and, indeed, the vast majority of economic reports published on this issue), that Brexit would benefit the UK economy by increasing GDP growth by about 0.5 percentage points per year on average (with the majority of this increase coming in 2020, the final year of their forecast).

Equally unsurprisingly, their estimate is fundamentally flawed. In an impressive attempt to hide these flaws, the report contains only a two-page summary of the model used to obtain the results, but even then the numerous flaws are apparent.

First, the report assumes that leaving the EU would mean that the UK could remove EU-set trade barriers against non-EU countries while still keeping the same terms of trade it currently has with EU countries. Moreover, it assumes that all trade barriers will fall by half over the next five years. These assumptions drive the report’s “finding” that Brexit would increase UK living standards by 3.2% by 2020. However, the report does not provide any evidence to support the validity of either assumption. Indeed, there is plenty of evidence to suggest that they are not valid – for example, they assume a rate of decrease in trade barriers not seen since the 1960s.

Second, the report assumes that the 0.8% of GDP net saving from the UK not having to contribute to the EU budget would be passed on entirely to taxpayers in the form of an income tax cut. This is extremely unlikely to happen – given the current government’s austerity policies, any savings from Brexit are likely to be used to reduce the government deficit rather than to hand out a (potentially politically damaging) tax cut.

Third, not only does the report assume that there would be a reduction in regulation if the UK were to leave the EU (an unproven assumption), it then assumes that this reduction in regulation would have exactly the same effect as a 2 percentage point decrease in the employer rate of National Insurance. One hopes that those writing the report realised how barmy such an assumption is – the report doesn’t contain even a passing attempt to justify how a decrease in regulation would have exactly the same impact as a reduction in employer NI. Indeed, it is barely possible to conceive how anyone could think this was a reasonable assumption.

Anyway, moving on. Finally, the report assumes that the government deficit is unchanged, on the basis that the aforementioned assumptions leave the government’s revenues unchanged. However, this fails to recognise the possibility that some of the money previously spent on EU goods and services could now be spent on UK goods and services, thereby potentially increasing tax receipts. Conversely, the report also assumes that non-UK people and businesses won’t decide to move away from the UK, which would result in a decrease in tax revenues.

And all of this is to say nothing of the fact that the report has excluded countless other factors that could be detrimental to the UK. For example, the report does not even mention the potential impact Brexit could have on immigration (note that the vast majority of studies find that immigration is beneficial for the country to which immigrants relocate and this is even true for low-skilled workers in that country). Nor does it cover the costs associated with the uncertainty that would be created and persist for a number of years regarding exactly what form of agreement between the UK and the EU would be put in place post-Brexit.

In essence, the study published by the “Economists for Brexit” group is so full of holes it is no surprise that they were only able to find eight professional economists to support it. Contrast this to the almost 200 economists (including yours truly) that are signatories to a letter in the Times stating that “[l]eaving would entail significant long-term costs.” That in itself should be damning enough.

(Not) Eating for two: Fasting and educational attainment

Recent news regarding the possibility of moving the dates on which core GCSE exams are held forward so as to avoid Ramadan has highlighted the impact that fasting and nutrition can have on exam results and educational attainment.

The impact of fasting / poor nutrition during exam periods and/or while growing up has been the subject of numerous studies – see, for example, Anderson et al. and Glewwe & Miguel.

However, what has received less attention (at least until now) is the impact of a mother’s fasting during pregnancy on a child’s educational attainment. That is why a paper by Douglas Almond, Bhashkar Mazumder, and Reyn van Ewijk in the latest issue of The Economic Journal is rather interesting – it looks at precisely this issue.

Specifically, the paper looks at the impact of Ramadan falling during pregnancy (i.e. the impact of fasting during the gestation period) on the educational attainment at age seven of Pakistani and Bangladeshi children (i.e. those most likely to be of Muslim descent and, hence, whose mothers are most likely to have fasted during Ramadan), compared to that of other groups of children. It finds a small, but significant, decrease in the educational attainment of the Pakistani and Bangladeshi children. The authors use this result to suggest that “brief prenatal investments may be more cost effective than traditional educational intervention in improving academic performance.”

On the whole, the paper is a good one, following a clearly set out method, and the results provide useful policy indications. However, that is not to say that the paper does not have some flaws.

First, the paper uses the educational attainment of children of Caribbean descent as the “control group” against which the educational attainment of Pakistani and Bangladeshi children is tested. The authors justify this on the grounds that Caribbean families are unlikely to fast during Ramadan, such that the control group is unaffected by Ramadan. Although this might seem reasonable from a statistical perspective (although I’d argue that it still introduces unacceptable biases into the analysis), from a policy perspective it is less desirable.

Specifically, the relevant control group to determine whether a policy would be worthwhile is “the average student” – the authors do not provide any evidence to indicate that Caribbean students represent the average attainment in the UK. Indeed, the paper actually suggests that Caribbean students’ educational attainment tends to be below the UK average. In other words, the effect that the paper finds is likely to be understated relative to the average student, such that the paper’s conclusions could be much stronger if the average educational attainment was used. (Although the paper conducts a “robustness check” using White British students as an alternative control group, this still fails to get at the effect compared to the average student.)

Second, the paper assumes a “standard” gestation period of xx days, but does not investigate the extent to which changing this assumed length affects the results. A slight change in the assumed gestation period could alter the estimated effect, which matters given that the effect is relatively small (albeit statistically significant). Hence, the rigour of the paper could have been improved by including this sensitivity check.
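The kind of sensitivity check meant here can be sketched as follows, with invented dates and scores and a deliberately simplified exposure rule (this is not the paper’s actual classification): vary the assumed gestation length and see whether the estimated attainment gap moves:

```python
# Toy sensitivity check: vary the assumed gestation length and recompute
# the exposed-vs-unexposed attainment gap. All dates, scores and the
# exposure rule are invented for illustration.

from datetime import date, timedelta

ramadan_start, ramadan_end = date(2005, 10, 5), date(2005, 11, 3)

# (birth date, attainment score) for a toy sample of children.
children = [
    (date(2006, 2, 1), 95.0),
    (date(2006, 6, 1), 100.0),
    (date(2005, 9, 1), 101.0),
]

def exposed(birth, gestation_days):
    # "Exposed" if the assumed gestation window overlaps Ramadan.
    conception = birth - timedelta(days=gestation_days)
    return conception <= ramadan_end and birth >= ramadan_start

for gestation_days in (259, 266, 273):  # vary the assumption by +/- one week
    e = [s for b, s in children if exposed(b, gestation_days)]
    u = [s for b, s in children if not exposed(b, gestation_days)]
    gap = sum(e) / len(e) - sum(u) / len(u) if e and u else None
    print(gestation_days, gap)
```

If the estimated gap jumped around as the assumed length changed, that would be a warning sign about the robustness of the headline result.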

Third, the authors fail to draw sufficient inferences from the results presented in the paper. In particular, the results indicate that the impact of fasting on educational attainment differs according to the stage of gestation at which the fasting occurs – the impact is largest when fasting happens during the third and fourth months of gestation and is almost negligible when it occurs after the seventh month. In other words, the paper could have highlighted the policy importance of nutritional interventions earlier in gestation, but failed to do so.

Nonetheless, despite these flaws, the paper is an interesting one, with some important policy implications.