The IMF and gender/income inequality

Last week the IMF published a paper that, it claims, shows a “strong association between gender inequality and income inequality”, along with an accompanying blog post and tweet. However, the study presented in the paper in no way supports the claims the IMF makes. Indeed, the paper is so shoddy and full of holes that it would shame even a first-year undergrad had they submitted it as part of their coursework.

The paper published by the IMF uses data regarding income inequality and gender inequality (both measured as Gini coefficients) across a group of countries for the past two decades, controls for other factors that can affect income inequality, and finds a statistically significant impact of gender inequality on income inequality. That is to say, the paper’s results suggest that reducing gender inequality would reduce income inequality.

However, this result is based on a methodology that is fundamentally flawed. In particular, although their main result is based on a regression analysis that does control for factors other than gender inequality that might affect income inequality, one of their main pieces of “evidence” is the supposed simple correlation between income and gender inequality. Indeed, they claim that the fact that this correlation is positive supports the idea that gender inequality is associated with income inequality (see the first graph in the IMF’s blog post). However, the study fails to acknowledge that this correlation is weak at best – the fitted line in that graph is barely above horizontal.

Even worse is the fact that their claimed “strong association” between gender and income inequality is based on a regression analysis that finds that gender inequality is only statistically significant at the 10% level. In other words, even if gender inequality had no actual impact on income inequality, there would be a 5-10% chance of observing a result at least this large purely by random chance. That can hardly be the basis for a claim of a “strong association” between the two, yet the IMF claims exactly that.

Furthermore, their treatment of the obvious simultaneity between income and gender inequality is not valid. Instead of using the accepted methods of two-stage least squares or GMM, the study instead just uses its instruments directly in the main regression and yet still claims that those results represent the impact of gender inequality on income inequality. This is patently false. In order to properly account for the simultaneity between income and gender inequality while still estimating the impact of gender inequality on income inequality, the study should include the instruments in a first-stage regression in which gender inequality is the dependent variable, before using the fitted values from that regression in the main second-stage regression in which income inequality is the dependent variable. The fact that the study has not done this, yet still claims that “[t]he estimation results are robust to concerns about the direction of causality” is, at best, disingenuous. (And, as an aside, the instruments the study uses are unlikely to be valid – it is easy to see how, despite the study’s claims, variables such as legal rights and labour force participation are likely to affect income inequality independent of their impact on gender inequality.)
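The two-stage procedure described above can be sketched with synthetic data. Everything here is hypothetical: `z` stands in for an instrument such as a legal-rights index, and `u` is an unobserved confounder that makes the naive regression biased.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: z is the instrument, gender_ineq is endogenous,
# income_ineq depends on gender_ineq plus the confounder u.
z = rng.normal(size=n)                       # instrument (hypothetical)
u = rng.normal(size=n)                       # unobserved confounder
gender_ineq = 0.8 * z + 0.5 * u + rng.normal(size=n)
income_ineq = 1.5 * gender_ineq + 2.0 * u + rng.normal(size=n)

def ols(y, x):
    """Least-squares coefficients with an intercept prepended."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: regress the endogenous regressor on the instrument
# and keep the fitted values.
a = ols(gender_ineq, z)
gender_hat = a[0] + a[1] * z

# Stage 2: regress the outcome on those fitted values.
b = ols(income_ineq, gender_hat)

beta_ols = ols(income_ineq, gender_ineq)[1]  # biased upward by u
beta_2sls = b[1]                             # consistent for the true 1.5
```

With the confounder pushing in the same direction for both variables, the single-equation estimate overstates the effect, while the two-stage estimate recovers something close to the true coefficient (in a full application the second-stage standard errors would also need correcting, which this sketch omits).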

Moreover, there are a number of substantial criticisms regarding the use of Gini coefficients to measure inequality, but these are too numerous to detail here. Suffice it to say, however, that the IMF’s study does not even attempt to ameliorate any of these problems.
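Since the whole exercise leans on Gini coefficients, it may help to see how one is computed from individual incomes. A minimal sketch, using the standard cumulative-sum shortcut for the pairwise-difference definition (the income vectors are purely illustrative):

```python
import numpy as np

def gini(incomes):
    """Gini coefficient: the mean absolute difference between all pairs
    of incomes, normalised by twice the mean income.  Computed via the
    equivalent cumulative-sum formula on the sorted incomes."""
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

print(gini([1, 1, 1, 1]))      # perfect equality -> 0.0
print(gini([0, 0, 0, 100]))    # one person holds everything -> 0.75
```

Note that with a finite sample the maximum is (n-1)/n rather than 1, which is one small example of the measurement subtleties alluded to above.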

It is baffling why the IMF thinks that a study so incomplete and open to justified criticism is worthy of publication. If this paper reflects the standards the IMF requires for publication, then this is a very dark day indeed.


The neglect of the “dismal science” in SF

Having attended (and thoroughly enjoyed) a couple of Worldcons a few years ago, I was struck by the amount of effort many authors expend in trying to ensure that the aspects of physics/chemistry/biology included in their stories are accurate or, at the very least, conceptually feasible. Notably, even those writers without a scientific background often make such efforts, usually by discussing their story ideas/concepts with those working in the sciences.

However, speaking as a professional (self-interested?) economist, one notable absence from these efforts to explain/rationalise the science underpinning authors’ writings is economics – quite often, a story includes a stark change in economic conditions compared to the present day, but does not include an explanation/description of how such changes occurred. Although it might be something of an exaggeration, this is almost akin to including faster-than-light space travel without an explanation of how such travel is achieved.

This appears to be something of an oversight. Clearly there is no need for a story to include pages and pages of economic (future) history relating to how a story’s economic situation manifested, but, similarly to the “hard” sciences, a story that incorporates a significant change in economic conditions compared to today could benefit from a (short) rationalisation of those changes.

As such, it might be interesting to briefly discuss the economic realities of what (in my experience) are the most common economic developments in SF writing.

The presence of mega-corporations

The concept of an all-encompassing corporation that controls the vast majority of goods and services available for consumption is a staple idea in SF. Examples that are (hopefully) well-known include MomCorp from Futurama and (my personal favourite) Weyland-Yutani from the Alien franchise. Common to (almost) all instances of mega-corporations is an implicit assumption that anti-trust/competition law has been ignored/removed or that the competition authorities have cleared mergers that formed such large companies.

Both possibilities are significant departures from the current state of affairs – competition laws are being continually introduced in more countries, and enforced more rigorously in those that already have them.

In particular, merger enforcement is a large part of the work conducted by competition authorities; under the current state of affairs, it is unlikely that any firm would be allowed to become so large as to be classed as a mega-corporation. Moreover, even if a mega-corporation should arise, the parts of competition law that prevent firms from abusing their market power would likely ensure that mega-corporations would not be able to get away with much of their behaviour that is depicted in SF.

The lack of competition law or the refusal of anti-trust authorities to prevent mega-corporations from forming does represent a major change from the current climate, and one that does not seem to have been considered in most works. One notable exception, however, is All the Marbles by Dusty Rainbolt – in a couple of sentences, the presence of a space-based mega-corporation is explained by no earth-based competition authority having jurisdiction in space or on other worlds; a simple, yet powerful, rationalisation of the presence of a mega-corporation.

Inter-stellar trade

In much the same way as the desire for trade contributed a large part to the exploration of Earth, it is likely that trade would also drive a desire to explore space, find other worlds, and engage with other species. It is conceivable, therefore, that, similar to present day Earth, certain planets might specialise in the production of certain products, in an inter-stellar embodiment of comparative advantage. Indeed, this appears to have been present in Frank Herbert’s Dune – Arrakis specialised solely in the production of spice (creating an interesting parallel with the spice trade between Asia and Europe).

Trade in Arrakis’ spice was tightly controlled by CHOAM – an analogue of trade barriers that restrict exports. More common barriers are those that restrict imports or subsidise exports, and these are often used by countries that are trying to develop their own industries. Would a planet that is trying to develop its own industries in the face of competition from inter-stellar trade use import/export tariffs, or would it allow unfettered free trade? Each of these approaches has different implications for the development of industries on that planet, potentially creating different story areas to explore. One could envisage a planet that had previously imposed import tariffs deciding to open itself up to imports and being flooded with a variety of products, with a realm of possibilities for mishaps etc.

Having said that, many SF stories seem to include free inter-stellar trade, which itself introduces another (neglected) issue – that of differing currencies and exchange rates. Obviously, present day Earth has a multitude of different currencies, and there is little reason to think that the situation would be any different on other planets. How then might an Earth-based UK company purchase products it needs from a company based on another planet? Would it need to pay in the other company’s currency? Could it pay in GBP? Or might there be a separate, common currency only used for inter-stellar trade?

Moreover, would the currencies used by the respective planets be fixed (i.e. a rate of £1 to 3 Arrakis Dollars for all time) or freely floating (like the exchange rate between GBP and USD)? Each of these choices has implications for the economic situation on a particular planet – if currencies are freely floating and one currency depreciates significantly, there might be considerable hardships on that planet (similar to various currency crises on Earth). However, no story of which I’m aware that includes inter-stellar trade seems to mention these issues, despite their potential dramatic effects on a story’s environment.

Societies in which “money” is no longer required

The final example of stories that neglect economics is those set in societies that no longer have money or some dollar/pound equivalent with which goods and services can be purchased. My favourite example of this is the episode of Star Trek TNG in which the Enterprise comes across a group of cryogenically frozen civilians from the late 20th century. Upon reviving them, it becomes apparent that one of the civilians was a stockbroker/financier; he is told by Picard that money has become obsolete, and the episode moves on with no explanation as to how this occurred.

Now, TNG is one of my favourite SF entities (institutions?), but the matter-of-fact statement that lacks explanation is somewhat striking (and the statement itself may in fact be incorrect, more on which below). Of course, the Star Trek franchise probably doesn’t hold up to scrutiny in the hard sciences either, but the same problem pervades other SF too.

It may well be an attractive idea that a society has dispensed with the need for money, but given that money is so ingrained in modern-day society, it would take a very large jump to attain such a society. Again, drawing comparisons to hard SF, it is akin to the jump from modern-day propulsion techniques to faster-than-light travel. One would expect there to be some explanation (even a cursory one) of the latter, so perhaps the former should be afforded the same treatment.

Returning to Picard’s statement that humans had moved beyond needing money – other events in the TNG series suggest that money may not have been rendered obsolete, but merely replaced by a different form of currency.

Fairly early on, a budding economist learns that money has three functions: 1) a measure of value; 2) a store of value; and 3) a medium of exchange. It is this latter function that is taken up by other “goods” within the TNG universe. In particular, there are instances when crewmembers trade holodeck hours or replicator rations for other goods/services. Moreover, presumably the amount of holodeck hours or replicator rations traded is an indication of the value of the product/service purchased, such that holodeck hours/replicator rations satisfy the first function of money in our list. Finally, it is not a wild leap to assume that the value of holodeck hours/replicator rations does not change significantly over time, suggesting that these also satisfy the second function of money.

In other words, money had not been rendered obsolete in TNG. It seems that even the great Picard can be wrong. Maybe he should have discussed things with an economist first.

Battle of the ex-MPC members: Blanchflower vs Sentance

On the off chance that none of you have been following twitter recently, there has been something of a running disagreement (not quite a spat, but certainly not a friendly discussion) between two former members of the Bank of England’s Monetary Policy Committee (the group of learned people that, among other things, set the Bank of England’s base rate). Specifically, Danny Blanchflower and Andrew Sentance have been airing their widely different opinions regarding the state of the UK economy, and its recovery (or lack thereof) since 2008/2009. (See, for example, Blanchflower’s tweet in response to Sentance – there are plenty of others, although they do verge on the childish at times.)

By way of background, it’s helpful to note that during their time on the MPC, Blanchflower was noted as an inflation “dove” (i.e. someone who is not overly concerned with inflation unless it reaches extreme levels), whereas Sentance was one of the most “hawkish” (the opposite of a dove – i.e. someone who treats inflation as (practically) the be-all-and-end-all) members.

This difference of opinion regarding the importance of inflation seems to have spilled over into their interpretation of the UK economy’s performance since 2008/2009. Blanchflower views the UK’s “recovery” since 2008/2009 as pitiful, and makes the (valid) point that it has taken over 60 months for the UK to return to its pre-2008 GDP level. Indeed, he uses the graph below to indicate that it has been the lengthiest recovery for over 100 years – each line represents the progression of GDP during each recession and recovery since 1920. The line representing the 2008-2013 recovery takes almost 12 months more than the next lengthiest recovery (1973-1976) to return to pre-recession levels, appearing to support Blanchflower’s claim. (In fact, Blanchflower makes the claim that it has been the lengthiest recovery for over 300 years, although the data to substantiate this claim have not yet been presented).

The picture is even more striking when looking at GDP per capita. Due to increases in population over time, a recovery measured in terms of GDP per capita takes longer than one measured in terms of GDP alone. The graph below shows the difference between GDP per capita and its pre-recession peak for each of the four most recent recessions presented in the previous graph. Due to limitations in the data available from the ONS, the series in the graph below are calculated using the ONS’ quarterly GDP data and their annual population data (assuming that quarterly population changes within a year are minimal).
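The calculation just described can be sketched as follows. The GDP and population figures below are made up purely for illustration – the real series are the ONS’ quarterly GDP and annual population data:

```python
# Hypothetical quarterly GDP (£bn) and annual population (millions);
# the actual figures would come from the ONS series mentioned above.
quarterly_gdp = [400, 405, 410, 415,   # year 1, Q1-Q4
                 418, 422, 426, 430]   # year 2, Q1-Q4
annual_population = {2012: 63.7, 2013: 64.1}
quarter_years = [2012] * 4 + [2013] * 4

# Assume the population is constant within each year, then convert
# £bn divided by millions of people into £ per person.
gdp_per_capita = [gdp * 1e9 / (annual_population[yr] * 1e6)
                  for gdp, yr in zip(quarterly_gdp, quarter_years)]

print([round(x) for x in gdp_per_capita])
```

The within-year constancy assumption is the same one flagged in the text; with annual population data there is little alternative short of interpolating between year-end figures.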

Nonetheless, the implications of the graph are clear – it took even longer for GDP per capita to return to its pre-2008 level than was the case for GDP alone: 7 years for GDP per capita versus a bit over 5 years for GDP on its own. Moreover, the difference between the current recovery and the next most lengthy is 13 quarters – i.e. just over 3 years.

GDP per capita

So, then, it appears as though Blanchflower is correct in terms of the length of the recovery. It is difficult to see how Sentance can disagree with Blanchflower on this issue.

Another matter is the reason for the lengthy recovery, but that’s for another blog post!

Anatomy of a competition authority’s merger review.

In this post, I plan to provide a brief overview of some of the sorts of economic evidence and analyses that a competition authority undertakes when it reviews a merger. I’ll skip over the boring legal and threshold-related details to focus on the main avenues of economics that are analysed. This overview is primarily based on the approach used by the European Commission and the CMA (formerly the OFT), but is generally applicable to all competition authorities globally.

Market definition and market shares.

The definition of the “relevant market” and the calculation of market shares are not the be-all-and-end-all of the assessment of a merger, but are instead used as a starting point. The first step is to define the relevant market so as to enable the appropriate market shares to be calculated. This requires a thought experiment termed the “Hypothetical Monopolist Test” – the test starts with the narrowest possible market (usually the combination of the merging parties’ products) and asks: could a hypothetical monopolist over those products profitably increase prices by a small but significant amount (typically 5-10%) on a permanent basis? If the answer is yes, then the candidate market already contains all products that provide a competitive constraint on those produced by the merging parties, and the test stops with the market defined as such.

If the answer is no, then the candidate market is widened to include the closest substitute products to those produced by the merging parties. The question is asked again, with the same approach taken when the result is obtained. The test stops when the result of conducting the thought experiment reaches the answer “yes”, such that all relevant substitute products are thereby included.
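One common way to operationalise this thought experiment is “critical loss” analysis: a price rise of X is profitable so long as the fraction of sales lost stays below X/(X+m), where m is the hypothetical monopolist’s percentage margin. A sketch, with hypothetical numbers and assuming constant margins:

```python
def critical_loss(price_rise, margin):
    """Fraction of sales the hypothetical monopolist can afford to lose
    before a price rise becomes unprofitable.  Both arguments are
    fractions (e.g. 0.05 for a 5% SSNIP, 0.40 for a 40% margin)."""
    return price_rise / (price_rise + margin)

def ssnip_profitable(price_rise, margin, actual_loss):
    """The price rise is profitable if fewer sales are lost than the
    critical loss -- i.e. the candidate market 'passes' the test."""
    return actual_loss < critical_loss(price_rise, margin)

# With a 5% price rise and a 40% margin, the critical loss is ~11%.
print(round(critical_loss(0.05, 0.40), 3))   # 0.111
# If only 8% of sales would be lost, the test stops here:
print(ssnip_profitable(0.05, 0.40, 0.08))    # True
# If 15% would be lost, the candidate market must be widened:
print(ssnip_profitable(0.05, 0.40, 0.15))    # False
```

The `actual_loss` figure would in practice come from demand evidence (surveys, switching data, elasticity estimates), not be assumed as it is here.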

This test is conducted for geographic areas as well as products. The latter uses evidence regarding price correlations, product characteristics, and consumer behaviour, while the former looks at price correlations across countries, transport costs relative to the price of the product, and the level of trade. Both are analysed in terms of demand-side substitution (where customers switch between different products/geographies) and supply-side substitution (where producers switch between different products/geographies).

Once the relevant market has been correctly defined, market shares are calculated and used as an indicator of whether the merger might lead to increases in prices (which is the usual way in which a merger might result in harm to consumers). If the combined market share of the merging parties is small, that tends to indicate that the merger is unlikely to be problematic (with some minor exceptions in differentiated markets where the merging firms are particularly close substitutes – more on which later). Conversely, if the parties’ combined market share is high, that could indicate that the merged entity would have market power. However, this is not always the case – if the market is a “bidding” one (i.e. one in which customers issue tenders for contracts), then the firm with the current highest share might not be in the best position to win future contracts. Hence, market shares are a useful first step, but not a definitive one.
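As an aside on how shares are summarised in practice, authorities often condense them into the Herfindahl-Hirschman Index (the sum of squared shares) and look at its level and the change the merger causes. A sketch with hypothetical shares:

```python
def hhi(shares):
    """Herfindahl-Hirschman Index: sum of squared market shares.
    With shares in percent, the index runs from near 0 up to 10,000."""
    return sum(s ** 2 for s in shares)

# Hypothetical market of five firms; the first two propose to merge.
pre = [30, 25, 20, 15, 10]
post = [30 + 25, 20, 15, 10]   # merged entity's share = sum of the parties'

delta = hhi(post) - hhi(pre)
print(hhi(pre), hhi(post), delta)   # 2250 3750 1500
```

Note that the delta equals twice the product of the merging parties’ shares, which is why it only depends on the parties themselves and not on the rest of the market.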

Closeness of competition.

This is the main bulk of the analysis conducted by a competition authority and can include a number of different analyses. The aim of each is to examine the extent to which the merging parties exert a competitive constraint on each other that would be eliminated as a result of a merger (with the end result being that the elimination of a stronger constraint leads to a higher potential for the merger to increase prices).

The sorts of analyses that can be used to assess the closeness of competition between two firms tend to focus on the degree to which the firms’ customers switch between the two firms, with a high degree of switching between the merging parties implying that the merger could eliminate a strong competitive constraint. Some examples of the analyses used to determine the closeness of competition between merging parties include:

  • Price concentration analysis – this analysis looks at the relationship between the number (and identity) of competitors and the price charged by each competitor, and tries to examine if a decrease in the number of competitors (or elimination of a particular rival) would result in price increases. If the analysis indicates that the elimination of one of the merging parties would result in a price increase by the other merging party, that could be evidence that the parties are close competitors. This analysis is often conducted via econometric techniques.
  • Diversion ratios – this analysis uses observed consumer switching to try to find out what proportion of customers that left one of the merging parties switched to the other merging party (and vice versa). If the proportion of switchers (i.e. diversion ratio) between the merging parties is high, that could indicate that the merging parties are close competitors. This analysis is often obtained from firms’ win/loss records or customer surveys.
  • Cross-elasticities of demand – in order to obtain cross-elasticities of demand, a complex demand-estimation procedure (an econometric exercise) is used (the details are too lengthy to include here, but I might cover them at a later date). A high cross-elasticity of demand between the merging parties’ products indicates that the products are close substitutes for each other – if the price of one party’s products increases, a large proportion of customers would switch to the products of the other merging party. In other words, a high cross-elasticity of demand between the merging parties could lead to a competition authority viewing the merger as problematic.
  • Upward Pricing Pressure (UPP) Indices – these tests are an attempt to combine diversion ratios and margins (i.e. prices minus costs) to obtain an indication of the incentive that the merged entity would have to increase prices. A high UPP index can occur if the parties have high diversion ratios between each other, high margins, or both, and indicates that the merged entity is likely to have a strong incentive to increase prices post-merger. As a result, a high UPP might lead to a competition authority viewing a merger as problematic.
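The UPP idea in the last bullet can be illustrated with the simplest such measure, the GUPPI (gross upward pricing pressure index) for one of the merging products: the diversion ratio to the other party’s product, times that product’s percentage margin, times the relative price. All numbers below are hypothetical:

```python
def guppi(diversion_12, margin_2, price_2, price_1):
    """GUPPI for product 1: the fraction of product 1's lost sales that
    divert to product 2, times product 2's percentage margin, times the
    price of product 2 relative to product 1."""
    return diversion_12 * margin_2 * (price_2 / price_1)

# Hypothetical merging parties: 30% of product 1's lost sales divert
# to product 2, which earns a 40% margin at the same price point.
g = guppi(diversion_12=0.30, margin_2=0.40, price_2=10.0, price_1=10.0)
print(round(g, 2))   # 0.12
```

A GUPPI of 12% combines exactly the two ingredients the bullet describes – high diversion and high margins – into a single number; the thresholds at which authorities start to worry vary, so the level here is illustrative only.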

Barriers to entry / expansion

Another important factor that competition authorities take into account is the extent to which firms already producing goods within the relevant market could increase their output, and whether new firms could start production. If either is true, then were the merged entity to try to increase prices, other firms could increase output and/or enter the market such that the customers of the merged entity could switch away from it. This would make the initial price increase less profitable, such that the merged entity would not have an incentive to increase prices in the first place. In this way, the presence of low barriers to entry and/or expansion would mean that a merger might be less likely to be of concern to a competition authority.

In its assessment of barriers to entry / expansion, a competition authority will take into account whether firms are able to enter/expand (since if they cannot enter, then the mechanism described above cannot constrain the merged entity); whether that entry/expansion would occur within 1-2 years (since any longer than that likely would make the merged entity’s initial price increase profitable regardless of the entry/expansion by rivals); and whether firms have an incentive to enter/expand (since without an incentive to do so, firms would not enter/expand and hence the aforementioned mechanism would not occur).

Countervailing buyer power

The final major factor that competition authorities tend to assess (although generally the assessment finds that there is little countervailing buyer power) is the extent to which the merged entity’s customers could themselves constrain the merged entity. If customers have sufficient power (usually in terms of being able to switch to different alternatives and/or bargain with the merged entity) to prevent any attempts by the merged entity to increase prices, then a competition authority might be less concerned by a merger. The assessment of buyer power tends to rely on examining the extent to which customers can switch to viable alternatives easily, the size of those customers, and whether the benefits that large customers might obtain through any power they might wield would also filter down to smaller customers.