Margins and monopoly

Originally posted on the Adam Smith Institute blog.

A while ago, a new working paper was released that purports to provide evidence supporting the idea that the overall market power exhibited by firms in the US has increased over the past 30-40 years.  This was then picked up by a few blogs / media outlets.  In short, the paper claims that there has been an increase in overall profit margins (or mark-ups) since 1980 and, therefore, that overall competition between firms has decreased and that antitrust enforcement is not working.

The initial basis for the paper’s argument is that under “perfect competition”, firms are supposed to charge a price equal to their cost. In standard economic theory, “perfect competition” is a rather idealised scenario in which there are many buyers and sellers, each of which is a price-taker (i.e. no individual buyer or seller has any influence over the price of the product). Under these, and a few other, conditions, in a “perfectly competitive” market the price of each unit ends up being equal to the cost of producing that unit – i.e. each producer in such a market makes zero economic profit (the “economic” nature of the profit, as compared to accounting profit, is a crucial distinction).  The paper then compares this to the standard economic theory of monopoly pricing, in which one firm is the sole producer of a product and can therefore charge a price above cost – i.e. the monopolist obtains a positive profit margin on its sales. The paper then claims that the fact that its data indicate an increase in margins over time means that the US economy has (in the aggregate) moved away from the “perfectly competitive” scenario and towards the monopolistic scenario, thereby implying a reduction in the overall level of competition in the US economy.

Unfortunately, the paper fails to take into account a number of factors. For example, and at a rather basic level that the paper’s authors should really be getting right, the “costs” in the theoretical perfectly competitive market do not coincide with the measures of cost calculated in companies’ accounts.  In particular, economic costs include the “opportunity cost” of the resources used – the value of their next best alternative use – i.e. economic costs include an element, beyond the balance-sheet cost of using/purchasing the input, that is not usually picked up in accounting measures of cost. Under the perfectly competitive model, therefore, although there is no difference between the price of a product and the economic cost of producing it, there is likely to be a difference between that price and an accounting measure of cost – in other words, even under the perfectly competitive model, individual firms are likely to make some positive accounting profit.

Despite this, the paper goes ahead and calculates margins (and draws inferences from them) using accounting costs – specifically, the accounts of publicly-traded firms in the US over the period 1950-2014.  In other words, the paper fails to measure the margin that is actually relevant for economic theory. Hence, any attempt to link an increase in the margins measured in this paper to the competitive landscape suggested by economic theory is flawed.

Even if the observation that margins have increased were valid (i.e. if the margin were calculated appropriately, including the economic cost of production), that would still be insufficient to support the paper’s claim that overall competition has decreased.  In essence, by making such a claim based on the path of margins, the paper is asserting that the entirety (or, at least, the vast majority) of the increase in margins was due to a decrease in competition, thereby ignoring any other factors that could have caused margins to rise over time.  Although the paper looks at one other factor that could explain the change in margins (changes in the average size of firms), it ignores factors such as changes in the types and nature of industries over time (e.g. some industries have much higher up-front costs and lower marginal costs, and if those types of industries grew over time, then that could explain the increase in average margin without any change in competition levels).

(On a more technical note, margins can also be related to price elasticities of demand via the “Lerner condition” – a mathematical relationship stating that a firm’s margin is inversely proportional to the absolute value of its own-price elasticity of demand. Obviously, different industries can have different demand elasticities regardless of the level of competition in each industry and, as such, margins can differ for that reason as well. This is particularly relevant if, as seems likely, the composition of the economy (in terms of which industries are most prevalent) has changed over time.)
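For reference, here is a minimal statement of the Lerner condition for a profit-maximising firm, in standard notation (price P, marginal cost MC, own-price elasticity of demand ε):

```latex
% Lerner condition: the price-cost margin equals the inverse of the
% absolute own-price elasticity of demand.
\[
  \frac{P - MC}{P} \;=\; \frac{1}{\lvert \varepsilon \rvert},
  \qquad
  \varepsilon \;=\; \frac{\partial Q}{\partial P}\,\frac{P}{Q}.
\]
% The less elastic demand is (the smaller |eps|), the higher the margin,
% irrespective of how intense competition happens to be in that industry.
```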

Worse still, the paper’s finding that margins have increased over time is likely to be affected by a “survivorship bias”.  Specifically, as the paper tracks firms over time, the more successful firms (the ones that survive and earn higher profits) will remain in business, while the less successful firms (the ones that go bust due to making lower profits) will exit the market. Consider a stylised example: suppose that, at the start, an industry consists of six firms of equal size in terms of revenues, with five firms each making a margin of 30% and the sixth making a margin of 1%. At the start, the average margin would be about 25%. Now suppose that the owner of the firm obtaining a margin of just 1% decides that they can do better in another industry, so decides to shut down – the average margin would then increase to 30% despite there not being any real decrease in the level of competition in the industry (as the five remaining firms would still compete against each other).
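As a quick back-of-the-envelope check of that stylised example (the numbers are purely illustrative):

```python
# Stylised survivorship-bias example: six equally sized firms,
# five earning a 30% margin and one earning a 1% margin.
margins_before = [0.30] * 5 + [0.01]
avg_before = sum(margins_before) / len(margins_before)

# The 1%-margin firm exits; only the five 30%-margin firms remain.
margins_after = [0.30] * 5
avg_after = sum(margins_after) / len(margins_after)

print(f"Average margin before exit: {avg_before:.1%}")  # ~25.2%
print(f"Average margin after exit:  {avg_after:.1%}")   # 30.0%
```

The measured average margin rises by roughly five percentage points even though nothing about the intensity of competition among the surviving firms has changed.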

Hence, over time, one would expect the sample of firms over which the margin has been calculated to contain mainly successful firms and to lose the less successful firms, thereby resulting in an increase in average margin over time. The paper does not seem to have tried to account for this. (Note, too, that firms going bust is a sign of healthy competition – a more efficient firm is able to outcompete a less efficient firm such that the less efficient one stops trading.)

Overall, therefore, although the paper claims that 1) there has been an increase in margins over time; and 2) this implies that industries in the US have become more monopolistic over time, those claims do not stand up to scrutiny. Indeed, the paper’s approach to demonstrating such claims is flawed at the most basic level.


Is Amazon’s takeover of Whole Foods anti-competitive? Probably not.

This originally appeared as a guest post on the Adam Smith Institute’s blog last week.

A few days ago, Amazon announced its plans to purchase the predominantly USA-based grocery retail chain Whole Foods for almost $14bn. Although both companies operate in many countries, the main competition issues (if any) are likely to arise in the US, where both companies have a non-negligible presence.

Indeed, this announcement has resulted in a number of people claiming that the proposed merger will be anti-competitive. Specifically, there are some claims that the merger would result in 1) bundling and foreclosure of rivals; and/or 2) predatory pricing. In short, the first theory of harm posits that Amazon would force customers that wanted to purchase its distribution (or other) services to also purchase from Whole Foods (or vice versa), while the second theory of harm suggests that the merged entity would price below cost in order to drive out rival grocery firms before increasing prices once those rivals exited.

Importantly, both of these theories of harm require that the merged entity have some form of “market power” (i.e. the ability to charge a price above the competitive level and to act independently of its rivals). Typically, this is most likely to arise when a firm has a share of sales in a particular market of over 40%. However, these theories of harm gloss over the fact that Amazon and Whole Foods’ combined share of US grocery sales is tiny – less than 5%. As such, it is difficult to see how the merged entity could have any market power in grocery sales at the point of the merger.

Bundling

However, others might argue that Amazon does have a sufficiently high share of sales of “online retail” to be classed as dominant in that area. As such, they argue that Amazon could “leverage” its power there into grocery retail by bundling some of its other services with groceries. However, as the merged entity will be active at the retail level of groceries, it is not obvious which of Amazon’s other services could viably be bundled with them – for a bundling strategy to work, consumers would still have to want at least one of the items in the bundle, and they could continue to purchase the items separately from Amazon or elsewhere anyway. Hence, there does not appear to be a viable mechanism through which this bundling theory of harm could arise.

Predatory Pricing

Moreover, for the predatory pricing theory of harm to be valid, there must be strong evidence that 1) the merged entity would price its groceries below some measure of cost that represents the extra cost that would be incurred by supplying one extra unit of output (usually measured as average variable cost or long-run average incremental cost); and 2) it would have an incentive to do so.

The first condition is notoriously difficult to prove – one first has to decide which costs should be included in / excluded from the measure (which really isn’t as easy as one might think – e.g. should advertising spend that applies to brand-related marketing, but isn’t specifically related to groceries, be included?), as well as the relevant time-frame over which costs are assessed.

The second condition requires proving that the merged entity would become dominant (and therefore be able to recoup the losses it had made by pricing below cost) in the future. This is where the theory of harm becomes incredibly speculative – it assumes that enough sales would switch from rival grocery firms to the merged entity for the latter to become dominant. In other words, it assumes that pricing below cost would be sufficient in and of itself to persuade consumers to switch (regardless of, e.g., the quality of service provided) and that rival grocery firms would not respond in any way to the merged entity’s actions. Clearly, both of these assumptions are likely to be violated in practice and, as such, the predatory pricing theory of harm seems unlikely.

Summary

Given that the merged entity is unlikely to have the incentive or ability either to bundle its products together or recoup any losses made from pricing below costs, both of the theories of harm currently being bandied about are unlikely to be valid. As such, it is difficult to see how the cries that the proposed merger is anti-competitive are anything more than “a big firm is buying someone so they have to be stopped”. That should not be a basis on which a merger can be prevented.

How to assess the effectiveness of the government’s Prevent counter-extremism programme

In the aftermath of the Manchester bombing, many questions were raised regarding the effectiveness of the government’s counter-extremism programmes, with particular focus on the Prevent arm of that strategy. For example, the FT reported that Salman Abedi was referred to Prevent, but that report was not followed up (although note that the police state that they cannot find any record of Abedi’s referral to Prevent). More generally, the Prevent programme has been called “toxic” by the now mayor of Manchester Andy Burnham and the Home Affairs Select Committee.

However, there are a number of examples of the Prevent programme also having done a lot of good. For example, two teenagers were stopped from travelling to Syria after being referred to Prevent by their parents in 2015, and it is credited with helping to stop 150 people (including 50 children) from going to fight in Syria in 2016.

Importantly, much of the discussion regarding Prevent’s effectiveness has focused on anecdotal evidence: the odd stylised fact here and there, a couple of case studies. Most criticism or praise of Prevent focuses on a few examples where it has failed, been implemented badly, or succeeded. There are even calls to expand or to shut down Prevent without any evidence of whether or not it is actually an effective programme.

Indeed, as far as I’m aware, there has been no (publicly available) rigorous or systematic assessment of Prevent’s effectiveness. (Note that although the recently-launched book “De-Radicalisation in the UK Prevent Strategy: Security, Identity and Religion” by M. S. Elshimi claims to constitute such an assessment, its results are based on an absurdly small sample of only 27 people and therefore cannot be considered a systematic analysis.) However, conducting a systematic assessment could be a relatively simple procedure.

In particular, if data are available on the number of extremist / terror convictions and/or number of people successfully and unsuccessfully “treated” by Prevent at the level of individual local authorities, then it would be possible to use variations across those local authorities to assess Prevent’s effectiveness.

Put simply, the “outcome” variable (i.e. metric that assesses Prevent’s success) could be the number of extremist / terror convictions or the proportion of people referred to Prevent that are successfully treated. (Obviously if the number of convictions is used, it would be important to allocate those convictions to the local authority in which the extremist grew up and/or resided rather than where the extremist activity was carried out.) Of course, there would also be technical considerations regarding whether the outcome variable is a “count” variable, is bounded due to being expressed in percentage terms etc., but those can be dealt with relatively easily.

The explanatory “variable of interest” that would then measure the actual effectiveness of spending on the Prevent strategy would be each local authority’s annual budget for Prevent. If Prevent was effective, one would expect this variable to be negatively related to the number of convictions (since a successful Prevent would stop people before they committed a crime) and positively related to the proportion of successfully treated people. Alternative variables of interest could include the number of Prevent-dedicated personnel in each local authority or the amount of Prevent training that is provided to practitioners – each of these could be investigated to try to identify the most effective/important aspect of the Prevent strategy.

Note that it is unlikely that there would be any simultaneity between the Prevent budget (or other variable of interest) and the outcome variable – although it is plausible that current Prevent spending would be based on past extremist activity in a local area (i.e. local areas with higher extremist activity get more money for Prevent), it is unlikely to be the case that current Prevent spending reacts quickly enough to be affected by current extremist activity. Nonetheless, this could be investigated by using lags of the Prevent budget variable as instruments or as the variables of interest themselves (since it could well be the case that Prevent takes time to have an impact).

As Prevent has been running since 2003, and there are roughly 400 local authorities in the UK, that should give a sizeable panel of data on which to conduct some relatively simple regression analyses. Of course, a number of other factors would need to be taken into account – for example, the population of each local authority, the average income within it, any changes to Prevent guidelines and/or the introduction or suspension of other counter-extremism strategies. The “identification” of the impact of Prevent would therefore come through variation in Prevent spending (or other Prevent-related variables of interest) and outcomes across local authorities and across time.
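To make the proposal concrete, here is a minimal sketch of what such a panel regression might look like, assuming a hypothetical dataset with one row per local authority per year; the file name and column names are purely illustrative, and a count model (e.g. Poisson or negative binomial) may be more appropriate if the number of convictions is used as the outcome:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per local authority per year, with columns
# (names illustrative only): convictions, prevent_budget, population,
# avg_income, local_authority, year.
df = pd.read_csv("prevent_panel.csv")  # hypothetical file

# Lag the budget by one year, since Prevent spending may take time to have an
# effect and to reduce any simultaneity with current extremist activity.
df = df.sort_values(["local_authority", "year"])
df["prevent_budget_lag1"] = df.groupby("local_authority")["prevent_budget"].shift(1)
df = df.dropna(subset=["prevent_budget_lag1"])

# Two-way fixed effects: local-authority dummies absorb time-invariant local
# factors, year dummies absorb nationwide shocks (e.g. changes to Prevent guidelines).
model = smf.ols(
    "convictions ~ prevent_budget_lag1 + population + avg_income"
    " + C(local_authority) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["local_authority"]})

# A negative coefficient on the lagged budget would be consistent with
# Prevent spending reducing convictions.
print(model.params["prevent_budget_lag1"])
```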

Now, I don’t have the data to conduct this analysis. However, I suspect that the data are out there – I understand some organisations have created databases containing details of all extremist-related convictions over a reasonably lengthy period of time (for example, The Henry Jackson Society has a dataset on all Islamist-related convictions from 1998-2015, but this would also need to be supplemented with data on other forms of extremism covered by Prevent).  Moreover, local authorities / the Home Office / the relevant government authority no doubt have records of the amounts that were spent on Prevent by local authorities (as well as the number of Prevent-related personnel etc.) on an annual basis. As such, combining the two (along with the various controls) should provide a usable dataset fairly easily.

Hence, if the government and/or organisations with an interest in Prevent really do want to assess how effective the Prevent strategy is, then it actually isn’t very difficult to do so.

Fact-checking a few claims about the NHS

What with the campaigning for the general election having gotten into full swing last week, many claims have been made regarding which Party would be better for which aspect of security, the economy, education etc. One particular video regarding the NHS started doing the rounds on Facebook a few days ago. This video makes a number of claims regarding the supposed impact that the recent Coalition and Conservative governments have had on the NHS, with the video then going on to suggest that a Conservative government would be bad for the NHS. For a bit of excitement, here is said video:

[Embedded video]

The claims made in that video are many. Some are valid, whereas others are not. Let’s take each of them in turn.

Claim 1: We are experiencing the largest sustained drop in NHS funding as a percentage of GDP since the NHS was founded.

Reality: This claim is false. As per the graph below (from the Institute for Fiscal Studies), NHS spending as a proportion of GDP has been stable over the past couple of years, and the decrease between 2009 and 2012 was no larger or longer than the decreases in the mid-to-late 1970s or mid-1990s.

[Figure: NHS spending as a share of GDP over time (Institute for Fiscal Studies, bn201_fig1)]

Moreover, the more relevant metric of NHS spending per capita continues to increase – in other words, more is spent per person on the NHS than ever before, although the rate of that increase has slowed in recent years.

[Figure: NHS spending per capita over time (Institute for Fiscal Studies, bn201_fig2)]

Claim 2: If the internal market was abolished we [i.e. the NHS] could save billions.

Reality: This claim is also false. The internal market actually creates savings and is not “wasteful” as is claimed in the video. On the contrary, it promotes competition and stimulates the NHS to provide better services – importantly, the benefits of competition in healthcare are well established. Furthermore, it is actually the refusal of many within the NHS to accept the proven benefits of competition that is causing some harm to the NHS – indeed one of NHS Improvement’s main aims is to promote and encourage “buy-in” of competition among those in the NHS. Hence, abolishing the internal market would actually cost billions rather than save them.

Claim 3: Health tourism costs the NHS £200 million per year, which is insignificant in terms of the overall cost of the NHS.

Reality: This is generally true – although the costs to the NHS associated with people who are not ordinarily resident in the UK are of the order of £2 billion per year, that includes many people who did not come to the UK specifically and solely to use the NHS (i.e. it includes people who are not “health tourists”). Instead, estimates put the upper bound of the costs associated with those who travel to the UK for the sole purpose of using the NHS at around £300 million per year. When compared to the total annual NHS budget of about £90 billion, the costs associated with health tourism are indeed a trivial amount.

Claim 4: Immigrants are not ruining the NHS, they’re running the NHS.

Reality: True. Immigrants from within the EU currently represent about 10% of doctors and 4% of nurses. If non-EU immigrants are included, the figures are likely to be higher still (although probably not by a huge amount). Given that there are already quite severe labour shortages within the NHS, it is clear that, without the immigrants currently working in it, the functioning of the NHS would be severely hampered. Moreover, immigrants are net contributors in terms of taxes vs benefits, so also contribute to the NHS in that way. Hence, the claim that immigrants are not ruining the NHS is clearly valid.

Claim 5: 1 in 10 nursing posts are vacant and the nursing bursary has been scrapped

Reality: True. The nursing bursary was indeed scrapped at the start of the year – this means that there is a much-reduced incentive for people to train to become nurses as they will now have to pay £9,000 in tuition fees per year in order to do so. This is likely to lead to problems recruiting sufficient nurses in future. Notwithstanding that, there are also problems recruiting nurses now – the Royal College of Nursing suggests that 1 in 9 nursing posts are now vacant. This figure is actually marginally worse than that claimed (11% vacancy rate vs the 10% claimed).

Claim 6: Tens of thousands of sick patients waited on A&E trolleys this past winter

Reality: Likely to be true. Using data from Quality Watch (and a bit of approximation / extrapolation), roughly 6 million people attended A&E last winter. Of these, around 15% were not seen within the government target of four hours – i.e. about 900,000 people waited more than four hours in A&E. Now, it seems unlikely that all of these people waited on trolleys specifically, but even if only 10% of them (i.e. 1.5% of all attendances at A&E) did, then the “tens of thousands” figure would be accurate. Hence, this claim seems plausible.

Conclusion: As with most of these election video type things, the video contains some claims that are true, some that are likely to be true, and some that are demonstrably false. Does this mean that the Conservatives are the worst Party for the NHS? Who knows?! That’s for you to decide and take into account (if you want to) when you vote. But at least you’ll now have a more complete set of facts when you do.

Evidentiary standards are slipping

Over the past month, there have been a number of instances in which a politician or journalist has made a bold claim, and then ignored or been unable to provide any evidence to support those claims.

For example, Fraser Nelson claimed that being in the EU had been a net detriment to the UK’s trade, and that the evidence he had seen supports that view. However, when provided with evidence that contradicted his claim, and when challenged to provide the evidence to which he referred, Nelson did not provide any sort of response. Likewise, Michael Gove claimed that there was evidence to indicate that leaving the EU would provide the UK with a “net dividend”. However, when pressed to provide the evidence that he claimed existed, Gove did not do so; nor did he respond to the provision of evidence that contradicted his view.

This is not just a problem for right-leaning opinion makers either; it affects left-leaning ones just as much. For example, despite copious evidence (from the Low Pay Commission) that raising the minimum wage too high would be detrimental to the employment rate of low-income earners, Jeremy Corbyn claimed that increasing the minimum wage to £10 per hour would raise their living standards. Again, Corbyn provided no evidence to support his claim.

This seems to be part of a wider, and long-running, malaise, in which policymakers can make a bold claim without any evidence to support it, yet said claim is taken at face value and isn’t challenged by the media nearly as often as it should be. Even worse (and a point made by Jonathan Portes in his recent discussion with Michael Gove), when challenged to provide evidence to support their views many in the media and political sphere tend to rely on a single statistic or anecdote even if copious evidence exists that contradicts their claim.

That’s assuming that the personalities concerned respond at all. Much of the time, they remain meekly silent, failing to respond, yet letting their original claim stand as though it hadn’t been challenged at all.

This isn’t just a point of pedantry – quite clearly, claims made by those covering and participating in campaign trails have real implications. For example, Vote Leave’s claim that Turkey would join the EU (despite all evidence to the contrary) likely played on some voters’ desires to reduce immigration (according to Ashcroft, immigration was a major concern for roughly one third of voters), despite the fact that immigration has consistently been proven to benefit the UK and everyone in it.  Similar points can be levied against various claims that the current level of trade between the EU and the UK could easily be replaced by trade with Commonwealth countries (despite the fact that the well-proven gravity model of trade directly contradicts this). And it seems likely that the upcoming election will be rife with claims and counter-claims that are (un)supported by evidence to varying degrees.

In essence, it is at least plausible that false claims made by opinion formers were taken to be true by some members of the voting public who based their decisions accordingly, and might have voted differently had they been informed of the actual evidence.

Now, what can be done to ensure that voters (and the general public as a whole) have actual evidence available rather than simply the claims of journalists and politicians?

Well, for a start, the press regulators (IPSO and Impress), the Electoral Commission, and the likes of the Office for National Statistics need to take on a much more proactive role. They should not wait for complaints to be submitted to them by the general public, but should take it upon themselves to investigate and penalise those in the public eye that make misleading or unsupported claims, with those punishments being far more severe than those currently used (for example, newspapers cannot continue to be allowed to get away with publishing retractions in the bottom corner of some page in the middle of their publication).

Second, political programmes like Newsnight, Question Time, and the Daily Politics should do far more to challenge politicians and journalists to support any claims they might make with sufficient evidence (i.e. more than just a single anecdote or statistic).  In other words, any journalist or politician appearing on such shows must be able to demonstrate that their claims are valid. The presenters on such shows should spend far more effort researching the actual evidence as well as questioning their guests on the basis of any claims that they might make.

Third, the Parliamentary Standards Committee needs to realise that their role in holding MPs accountable extends to claims made by MPs that are not supported by any evidence. Such claims are in violation of the MPs’ Code of Conduct and should be treated as such, with the necessary punishments for these violations being far more than the usual slap on the wrist.

Finally, and a much more long-term remedy, the general public should be provided with far greater training in the use and abuse of statistics. This should start from an early age and not only train people in how to calculate various (simple) statistics, but also provide information concerning how to spot when a commenter is using misleading figures or is relying solely on anecdotes to try to substantiate their points.

Once these suggestions have been implemented, the ability of journalists and politicians to deliberately obfuscate and mislead would be markedly reduced. That can only be a good thing.

What a surprise! Taking in refugees isn’t detrimental to society

Back in 2015, Germany, recognising the humanitarian crisis in Syria, agreed to allow any refugee that had made it to another EU country to claim asylum in Germany. Inevitably, this resulted in a large influx of refugees – roughly one million were registered at German borders in 2015, with a further 400,000 or so registering in 2016.

Equally inevitably, this was met with howls of protests from those who wanted to “protect their borders”. For example, claims regarding the level of crimes committed by refugees and immigrants littered the likes of the Daily Mail and the Express, despite the fact that said crimes accounted for an infinitesimal proportion of all crimes in Germany during that period.

However, until now there hasn’t been a systematic study of the impact of Germany’s decision to accept large numbers of refugees – a paper by Gehrsitz and Ungerer fills this gap. The paper looks at the impact of the number of refugees on local crime rates; domestic and refugee success in the job market; and domestic attitudes to refugees and immigration. There have been some studies that find that immigration / refugees are detrimental, but those studies are either methodologically flawed (e.g. the one by Piopiunik & Ruhose) or written by biased fools such as Borjas.

Given the fact that large numbers of refugees were accepted into Germany, if accepting refugees was detrimental to any of these areas, then those effects would be almost certain to show up in this analysis. However, the study indicates that accepting large numbers of refugees is not detrimental to the local population.

In order to do so, the study makes use of the fact that refugees were allocated to different German states simply based on what accommodation spaces were available, creating a pseudo-random distribution of the number of refugees across the different German states. In essence, provided that these allocations were not correlated with factors such as the initial (and trend) income, unemployment etc. across the different states, this provides a natural experiment by which the impact of the number of refugees on the domestic population can be estimated. Importantly, the paper finds that there is no correlation between the refugee allocations and a state’s initial labour market conditions, demographics, crime rates etc., such that the inferences resulting from the analysis are highly likely to be valid.

The paper then looks at the impact of the number of refugees that entered a particular state during the 2015-2016 period on a state’s 1) change in crime between 2013 and 2015; 2) change in unemployment rate between 2013Q1 and 2016Q1; and 3) change in share of the vote obtained by the anti-immigration “Alternative für Deutschland” party between the federal election in 2013 and the state elections in 2016, while also controlling for other factors (such as state GDP per capita, demographics etc.) that vary across the German states.

Perhaps unsurprisingly, the paper finds that refugee inflows have:

  • no negative impact on the rate of domestic unemployment in a state (in fact, the results suggest that an increase in refugees actually decreases domestic unemployment, but slightly increases unemployment among non-German workers, likely because the refugees themselves start to show up in the unemployment figures);
  • a tiny impact on crime rates – a large increase in refugees does not lead to an “explosion” in crime, but merely increases reported crimes by only 1.5%, with the majority of this appearing to come from an increase in fare dodging on public transport;
  • no impact on support for the anti-immigration political party – in other words, having more refugees in an area does not seem to lead to people in those areas voting in favour of decreasing immigration.

Now, one potential issue with some studies that find “no effect” of a variable is that this finding of no effect is driven by the coefficients being estimated imprecisely – this is usually indicated by standard errors that are improbably large. However, in the case of this study, the standard errors do not appear to be overly large, such that there is no reason to believe that the findings of no effect are due to imprecise estimates of the coefficient.

Hence, there is strong reason to believe that accepting even a large number of refugees is not detrimental to the local population in terms of crime or unemployment (or other factors that might drive local people to vote for an anti-immigration political party). Although these results do only refer to short-term effects (i.e. those occurring within 6-12 months of a large influx of refugees), there is no reason to believe that the long-term effect would be any different. Indeed, many studies (e.g. Foged & Peri, the IMF) find that the domestic population actually benefits from taking in refugees and immigrants in the long-run.

In other words, arguments that taking in refugees will harm (or be at the expense of) the domestic population are highly likely to be false.

Immigration benefits us all – now the IMF gets in on the act

Only a short time after the Foged & Peri paper (summarised here) found that an “influx” of immigrants to Denmark benefited both high-skilled and low-skilled workers in the local population, the IMF has examined whether or not those results apply to other advanced economies.

And, guess what? They do! Unsurprisingly.

To do so, the study uses a fairly nifty approach to account for potential reverse causation between migration and GDP per capita (since migrants might prefer moving to countries with higher GDP per capita in the first place). The study uses a “gravity model” to instrument for the share of migrants in a country, incorporating various “push” factors (such as growth in the origin country, demographic variables etc.) and other controls, proving once again that describing something as “gobbledygook” just because you don’t understand it isn’t a particularly sensible thing to do.
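As a purely illustrative sketch of what that instrumenting step amounts to (this is not the IMF’s actual code, and all file and variable names are hypothetical), the idea can be expressed as a two-stage procedure; note that running the two stages by hand gives the right point estimate, but the second-stage standard errors would need correcting, e.g. with a dedicated IV routine:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical country-level dataset (column names illustrative only):
# log_gdp_pc, migrant_share, predicted_migration (the gravity-model instrument,
# built from origin-country "push" factors, distance, demographics etc.),
# plus exogenous controls such as trade_openness and working_age_share.
df = pd.read_csv("migration_data.csv").dropna()  # hypothetical file

# First stage: predict the migrant share from the gravity-based instrument
# and the exogenous controls.
first_stage = smf.ols(
    "migrant_share ~ predicted_migration + trade_openness + working_age_share",
    data=df,
).fit()
df["migrant_share_hat"] = first_stage.fittedvalues

# Second stage: regress GDP per capita on the predicted (instrumented) migrant
# share, which strips out the part of migration driven by destination-country
# income itself (the reverse-causation channel).
second_stage = smf.ols(
    "log_gdp_pc ~ migrant_share_hat + trade_openness + working_age_share",
    data=df,
).fit()

print(second_stage.params["migrant_share_hat"])
```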

The paper’s main findings are threefold. First, a 1 percentage point increase in the proportion of the population made up of migrants increases GDP per capita by 2%. Interestingly, this benefit arises via an increase in labour productivity, rather than an increase in the proportion of the population that is of working age.

For example, high-skilled immigrants can increase productivity through innovation and positive spillovers on native wages, while low-skilled workers can increase productivity by enabling native workers to re-train and move into more complex occupations (exactly as was found by Foged & Peri). An alternative mechanism cited by the IMF study suggests that the presence of low-skilled female immigrants increases the provision of household and child-care services, thereby increasing the labour supply of high-skilled native women. This result is robust to controlling for technology, trade openness, demographics, and country development.

Second, these benefits arise from both low-skilled and high-skilled migrants. As above, both skill-types affect GDP per capita through increasing labour productivity, rather than via increasing the proportion of the population that is of working age. However, the effect does appear to be more statistically significant for migration by low-skilled workers than it is for high-skilled migrants.

The study suggests that this difference could reflect differences in the impact of high-skilled migrants across different countries, but this seems unlikely to be sufficient to render the impact insignificant. More likely is the second reason posited by the study – namely, that high-skilled migrants initially might have to obtain jobs for which they are over-qualified, thereby meaning that their impact on the incentives of high-skilled native workers to retrain etc. is limited at first.

Third, the benefits to native workers arise across the entire income distribution. Both low-skilled and high-skilled immigration increase the GDP per capita of those in the bottom 90% of the income distribution by roughly the same amount, while high-skilled immigration increases the GDP per capita of those in the top 10% of the income distribution by roughly twice as much as does low-skilled immigration.

However, the study does not really examine the distribution within the bottom 90% particularly closely – the study just looks at the estimated impact of immigration on the Gini coefficient to conclude that the distribution within the bottom 90% would not be changed significantly. The study should, instead, have looked at, say, the impact of immigration on each decile or quintile of the income distribution separately so as to give a more complete picture of the impact of immigration across the income distribution.

The paper (and particularly the blog post linked to above) ends by getting somewhat more political. In particular, the study suggests that there is a need for improvement in terms of providing support for native workers who want to re-train, find a new job etc. However, these policy suggestions are made without taking into account the fact that some countries already have plenty of such schemes in place, to the extent that increasing their provision in those countries might not be efficient. Of course, that’s not to say that no countries would benefit from increasing the provision of such schemes.