In the United States, despite threats from presidential hopefuls Sarah Palin and Mike Huckabee to “hunt down” and even execute Julian Assange, the founder of Wikileaks, the government appears to be imitating him – and imitation, as we all know, is the sincerest form of flattery.
I refer to the US Federal Reserve’s US$3.3 trillion midweek data dump, showing which firms received payouts from the numerous support programmes put in place during the financial crisis.
Once you navigate beyond the alphabet soup of acronyms used to denote the different bailout mechanisms – AMLF, TALF, PDCF, CPFF, TSLF, TOP, TAF, etc. – you still face a serious case of large-number fatigue, both in the dollar sums involved and in the number of electronic files you have to master in order to interpret the data.
Ironically, despite the magnitude of its data release, the Fed is already being accused of withholding the most important figures that the Dodd-Frank Act required it to produce.
One effect of releasing such a volume of information may be to stun recipients and perhaps make them forget what they had asked for in the first place. If so, there are parallels with the June disclosure of documents by Goldman Sachs to investigators – in other words, perhaps a case of “if you can’t beat them, join them” on the Federal Reserve’s part.
Goldman famously dumped five terabytes of data (around 2.5 billion printed pages) onto the desks of the 50 staff of the United States Financial Crisis Inquiry Commission in June, in response to the commission’s repeated requests for details of the firm’s transactions in the CDO market. The FCIC has summarised its lengthy attempts to extract information from the bank; Goldman’s tactics in response might best be described as ‘stalling’, followed by ‘nuclear’.
The bank reached a US$550 million settlement over civil fraud charges with another federal agency, the Securities and Exchange Commission, just over a month later. Perhaps the SEC’s staff took a look at the creaking desks of their fellow bureaucrats over at the FCIC and decided that a settlement might be a better option, and certainly less work, than pursuing the case.
But what has all this got to do with ETFs?
Take the Euro Stoxx 50, Europe’s most popular index for exchange-traded funds, as an example.
Eighteen European ETFs track this index, with sixty individual listings between them. These funds trade on eleven stock exchanges, and that’s before mentioning other trading venues such as MTFs and dark pools. Two-thirds or more of trading activity also reportedly takes place away from official exchanges, meaning that turnover and price data are either completely unavailable to the public or only partially reported.
The net result is that it’s fiendishly difficult to collect all the relevant information and to produce usable Europe-wide data on factors such as turnover, market share, bid-offer spreads and liquidity. The valiant attempts made so far to aggregate and analyse trading data from across the region struggle, first of all, with the sheer number of data sources that need to be combined. According to ETF specialists, it is then difficult both to ensure ‘completeness’ and to avoid errors such as double counting.
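To make the double-counting problem concrete, here’s a minimal sketch – with hypothetical field names and a toy matching rule, not any vendor’s actual method – of how an aggregator might flag a trade that appears in two venues’ reports at once:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class Trade:
    fund: str          # identifier for the ETF, e.g. an ISIN
    venue: str         # venue that reported this record
    timestamp: datetime
    shares: int
    price: float

def is_probable_duplicate(a: Trade, b: Trade,
                          window: timedelta = timedelta(seconds=2)) -> bool:
    """Heuristic: the same fund, size and price reported by two different
    venues within a short window may be one trade reported twice."""
    return (a.fund == b.fund
            and a.venue != b.venue
            and a.shares == b.shares
            and abs(a.price - b.price) < 1e-9
            and abs(a.timestamp - b.timestamp) <= window)

def deduplicate(trades: list[Trade]) -> list[Trade]:
    """Drop later records that look like re-reports of earlier ones."""
    kept: list[Trade] = []
    for t in sorted(trades, key=lambda x: x.timestamp):
        if not any(is_probable_duplicate(t, k) for k in kept):
            kept.append(t)
    return kept

# Hypothetical example: the same trade reported by an exchange and an OTC venue
t1 = Trade("ETF_A", "Exchange_X", datetime(2010, 12, 3, 10, 0, 0), 10_000, 28.50)
t2 = Trade("ETF_A", "OTC_report", datetime(2010, 12, 3, 10, 0, 1), 10_000, 28.50)
print(len(deduplicate([t1, t2])))  # 1: the second report is dropped
```

Even this toy heuristic shows why ‘completeness’ is so hard to guarantee: tighten the matching window and you double count; loosen it and you risk discarding genuine trades.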
New regulations should help – great expectations are being placed on the second version of Europe’s MiFID rules, due to come into force by 2012. MiFID II aims to make post-trade reporting compulsory, and several data vendors are already working on their own versions of a pan-European “consolidated tape” of transactions.
Even if MiFID II performs as hoped, a great deal more will be needed to produce comprehensive and meaningful region-wide comparisons between different exchange-traded funds. What will be required of the analysts putting everything together? Product knowledge, certainly, plus strong quantitative and communication skills and, most importantly, computer proficiency – the ability to search and analyse all those gigabytes.
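By way of illustration, here’s a minimal sketch of the sort of output such an analyst would be working towards – assuming an already cleaned and deduplicated consolidated trade list, with made-up field names and numbers:

```python
from collections import defaultdict

def fund_statistics(trades):
    """Turn a consolidated list of (fund, shares, price) records into
    per-fund turnover and market-share figures."""
    turnover = defaultdict(float)
    for fund, shares, price in trades:
        turnover[fund] += shares * price
    total = sum(turnover.values())
    return {fund: {"turnover": value,
                   "market_share": value / total if total else 0.0}
            for fund, value in turnover.items()}

# Hypothetical trades in two Euro Stoxx 50 trackers:
sample = [("ETF_A", 1_000, 28.50), ("ETF_B", 500, 29.10),
          ("ETF_A", 2_000, 28.55)]
print(fund_statistics(sample))
```

The arithmetic itself is trivial; the hard part is everything upstream – assembling a trade list that is complete, deduplicated and comparable across eleven exchanges and a long tail of off-exchange venues.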
Clearly, it’s an enormous commercial opportunity for whoever gets this right. But as the frequency and volume of these colossal data dumps increase, I’m sure I’m not the only one suffering from information overload.