Does Bad Research Beat No Research? Durbin Amendment Data
Todd Zywicki, Geoff Manne and Julian Morris have an article on the effects of the Durbin Amendment. Sigh. No surprises here. Zywicki et al. are making claims beyond what their data can support, claims that are in fact directly contradicted by their own data, which shows that some of the "effects" of Durbin preceded both the enactment and the effective date of the Amendment.
The main data points for the first part of the article come from Bankrate.com's annual checking account survey. Zywicki et al. don't bother to explain the data or methodology in the Bankrate survey. Instead, they just act as if it is authoritative. But here's what the survey covers (for the 2012 survey):
The data come from surveying the five largest banks and five largest thrifts in 25 of the nation's biggest markets from July 24 to Aug. 10, 2012. We asked those institutions about terms on one generic noninterest account and one interest-bearing account for the general consumer.
This data is fine for what it is, but let's note that it covers perhaps 10-30 financial institutions (I can't tell whether it is the 5 largest banks and thrifts overall or the 5 largest in each market, but even if the latter, the lists are likely to overlap substantially). Durbin covers only around 100 institutions, and there are almost 8,000 banks total in the US. Within the 10-30 institutions surveyed, the data covers two generic types of accounts. The problem is that banks offer a whole range of accounts. So we've got a small and possibly unrepresentative sample of banks reporting on an unrepresentative sample of accounts. This isn't data on which one can bank, so to speak. Yet this is the data source for four of the first five charts in Zywicki et al.'s piece and for two of their five main summary bullet points, those regarding the availability of and minimum balance for fee-free accounts.
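To see why the overlap matters, here's a toy illustration in Python (the bank names and the split between national and local institutions are invented, not the actual Bankrate sample): if a handful of national giants top every market, 10 institutions times 25 markets collapses to far fewer unique institutions.

    # Hypothetical: 4 national giants appear in every market; the rest are local.
    national_giants = ["Giant A", "Giant B", "Giant C", "Giant D"]
    unique_institutions = set()
    for market in range(25):
        local_banks = [f"Local bank {market}-{i}" for i in range(6)]
        unique_institutions.update(national_giants + local_banks)
    print(len(unique_institutions))  # 154 unique institutions, not 250

And the more the large institutions repeat across markets, the more the sample skews toward precisely the biggest banks.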
How about the data Zywicki et al. rely upon for their third summary bullet point, a claim that Durbin resulted in more than a doubling of average monthly fees? The data source on this is MoneyRates.com. What is this data? Again, Zywicki et al. don't explain, but like Bankrate, this isn't a scientific sampling:
The data in each semi-annual MoneyRates.com Bank Fees Survey is based on the MoneyRates Index, a sampling of 100 banks that includes 50 of the nation's largest banks and an equal number of medium-sized banks. [A description of the MoneyRates Index is here.]
Even if we were to assume that the MoneyRates.com sample was representative in terms of institutions, there's no indication of what data is really being collected. Data on all accounts or on just a sampling of accounts? If it's anything like the Bankrate.com survey, which is my hunch, then it's not a representative sampling of accounts. And it sort of has to be that way because of the lack of standardization of checking accounts. Lots of accounts are "free" only conditionally: if one maintains a minimum balance, has direct deposit, does fewer than a certain number of transactions, etc. Is that a no-fee account or a fee account? That's a judgment call, and that makes it hard to survey.
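To make the judgment call concrete, consider a minimal sketch (the waiver terms here are invented): the very same account is fee-free for one customer and fee-bearing for another, so a survey has to decide which bucket it goes in.

    # Hypothetical account: the fee is waived if ANY waiver condition is met.
    def monthly_fee(balance, has_direct_deposit, transactions):
        if balance >= 1500 or has_direct_deposit or transactions <= 10:
            return 0.00
        return 12.00

    print(monthly_fee(balance=2000, has_direct_deposit=False, transactions=30))  # 0.0
    print(monthly_fee(balance=300, has_direct_deposit=False, transactions=30))   # 12.0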
Even more interesting, however, is how Zywicki et al. interpret the MoneyRates.com data. In their own words, Durbin Amendment-covered banks have "[d]oubled average monthly fees on (non-free) current accounts between 2009 and 2013, from around $6 to more than $12." That statement is a factually true description of what the MoneyRates.com data (for whatever it's worth) shows. But it is totally misleading. The jump from $6 to $12 occurred between 2009 and 2010, before Durbin was passed and before Durbin went into effect. Zywicki et al. surely know this, yet their article implies that the jump in monthly fees was due to Durbin, when the data they have shows exactly the opposite.
I haven't fisked the rest of this piece, but given what I've already found (and past experience with one of these authors on data points), I'm skeptical of any claims made by the authors. But perhaps we should keep in mind Todd Zywicki's recent dictum that "bad research beats no research." Q.E.D.
To be fair, I think what Todd meant by "bad research beats no research" must be that if one side has numbers and the other side doesn't, the side with the numbers wins. As an observed truth, I think that's correct. (It also looks like a plea for research funding from the financial services industry.)
I would add a corollary to the Zywicki proposition: "bad research" can offset "good research" by confusing non-expert bodies like Congress and the courts and setting up a he-said/she-said battle of the experts. This is something that is regularly on display in expert reports in litigation, but it also explains why banks won on BAPCPA (the "phantom $400") and cramdown (the MBA's 200-basis-point across-the-board increase in the cost of credit) but didn't on the CARD Act or Durbin (where they didn't make up numbers).
So this says what the anti-regulatory strategy should be: find (or buy off) some ideologically sympathetic academics (or other reasonably credible folks) and get them to produce studies indicating that regulation will result in all kinds of negative "unintended consequences." When dealing with courts (Business Roundtable v. SEC) or Congress, this will work. It's less clear whether this will work with the agencies, which are hopefully expert enough not to be fooled, but that seems to be the goal of the new Research Integrity Council.
Posted by: Adam | June 16, 2014 at 08:41 AM
The link to the article is not working for me. Could you check it? Thanks!
Posted by: Aurelya | June 25, 2014 at 02:54 PM
Adam:
As I understand it, you have two basic points. The first is that the Bankrate.com data is insufficiently comprehensive to be useful; the second is that the Moneyrates.com data on the increased monthly maintenance fees does not support our contention that the increase in bank fees is attributable to the Durbin Amendment. Let me respond to both.
First, on the Bankrate.com data—I’m not sure exactly what point you are trying to make here. You don’t seem to contest the claim that bank fees have been rising or that free checking has been retreating. You don’t offer any contrary evidence. So what exactly is the takeaway of your critique—that there is some reason to believe that the data are inaccurate?
Why do we think it is useful? First, it is reasonably broad—250 institutions across 25 markets. Second, it is the most comprehensive data set we’ve been able to find in terms of the length of time it reports, using a consistent methodology. Third, during the period preceding Dodd-Frank it shows a consistent upward trend in the availability of free checking, which comports with conventional understanding and which suggests that the study is not biased.
But more fundamentally, while the data set could be thicker, you provide no reason to believe that it is inaccurate. For example, do you think it is systematically biased for some reason? If so, does it overstate or understate the amount of free checking and the size of bank fees? And if it is systematically biased, is your position that it was also systematically biased pre-Durbin? The purpose for which we use it is to chart the trend, not the precise numbers—so unless you think the data is, and always has been, so flawed as to be useless, your point seems to be simply rhetorical rather than grounded in any real empirical critique. If you have some reason to believe that it is systematically biased, or so unrepresentative a sample as to be useless, that would be useful information. If we were relying on it to make claims about precise point estimates, that would be a useful critique. But we are using it for the limited purpose of drawing an apples-to-apples comparison over time, and I am afraid that nothing you say is actually relevant to that.
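To put the point concretely, here is a toy illustration (the numbers are invented, not the Bankrate data): a constant bias shifts the reported levels but leaves the period-to-period changes—that is, the trend—untouched.

    # Hypothetical true average fees (in cents) and a constant survey bias.
    true_fees = [400, 420, 450, 600, 640]
    surveyed = [f + 150 for f in true_fees]  # biased survey overstates every level
    true_changes = [b - a for a, b in zip(true_fees, true_fees[1:])]
    surveyed_changes = [b - a for a, b in zip(surveyed, surveyed[1:])]
    print(true_changes == surveyed_changes)  # True: the bias cancels out of the trend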
This can be contrasted, for example, with the terrible original payday loan study that the CFPB did, which quite obviously is systematically biased toward overstating the number and length of payday loan renewals (as the CFPB itself subsequently implicitly admitted). Or the credit card complaint database, which makes no effort even to pretend that it is a representative sample.
As to the Moneyrates.com data, the issue here is that you have to look at the underlying data (which we cite and which is freely available) to see what is happening. The Moneyrates.com data is collected on a semiannual basis, but we presented it in the chart (Figure 4) on an annual basis for consistency with the other charts (it didn’t occur to me that this might lead to misreading the chart if someone didn’t look at the underlying data).
Recall that Dodd-Frank, including Durbin, was enacted in July 2010—so almost perfectly at the midway point of the year. Moreover, the Durbin Amendment is a fairly clean test in that it came largely as a surprise to everyone: while there was some discussion of doing something on interchange, the proposals under discussion were much more moderate (such as an exemption from the antitrust laws). As a result, you wouldn’t expect to see much anticipatory movement prior to Durbin, but you might see large adjustments after. So while, in general, you’d expect to see movement in anticipation of a law being passed, that might not be expected here (I’m assuming you weren’t actually arguing that variables can’t move in anticipation of a law, just because the law happens to come later).
If you go through the various Moneyrates.com reports, here’s what you find:
EOY 2009 $ 5.90
Mid-2010 $ 5.85
EOY 2010 $12.23
Mid-2011 $11.75
EOY 2011 $11.28
Mid-2012 $12.08
EOY 2012 $12.26
Mid-2013 $12.43
EOY 2013 $12.54
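For anyone who doesn’t want to eyeball the table, a few lines of Python over those same figures locate the jump:

    fees = {"EOY 2009": 5.90, "Mid-2010": 5.85, "EOY 2010": 12.23,
            "Mid-2011": 11.75, "EOY 2011": 11.28, "Mid-2012": 12.08,
            "EOY 2012": 12.26, "Mid-2013": 12.43, "EOY 2013": 12.54}
    periods = list(fees)
    for prev, cur in zip(periods, periods[1:]):
        print(f"{prev} -> {cur}: {fees[cur] - fees[prev]:+.2f}")
    # The only large move (+6.38) is Mid-2010 -> EOY 2010, the half-year
    # immediately after Dodd-Frank's July 2010 enactment.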
Note that it is precisely in the second half of 2010—the period immediately following the enactment of Dodd-Frank—that the fees shoot up, consistent with the hypothesis that the increase was a response to the Durbin Amendment. (Also note the smaller jump in the first half of 2012, the first full period after the Federal Reserve’s regulations became effective.) To be sure, some of the increase in bank fees could be a result of Dodd-Frank’s other regulatory provisions and the creation of the CFPB; those factors, however, presumably would have been more anticipated, and as a result one might’ve seen an increase in fees prior to the second half of 2010 (although Dodd-Frank might’ve been more expensive than anticipated). Durbin, by contrast, was largely a surprise. I don’t think the big banks would’ve been so foolish as to wait until the actual Federal Reserve rules became final before they increased bank fees.
Having said that, we are going to revise Figure 4 to make it clear that the jump in costs occurred in the second half of 2010, so that others who do not access the underlying data are not confused (it didn’t occur to us at the time that we needed to take that precaution).
So if those are your two data critiques, I don’t think either criticism is really very much on point (and the second is incorrect on the actual data).
One larger point though on which I'm a bit confused--I thought that the informed position on this issue (as in Australia) was that the whole point of interchange fee price controls was to make payment card prices more "transparent" to bank customers by forcing them to pay more for cards. Shouldn't you be applauding the findings in the paper? Isn't this consistent with what Australia was trying to do and in fact claims to have done--increased the cost to consumers of using cards? When I was in Europe two weeks ago the folks in Brussels told me the same thing—that the point of this was to make costs for consumers “more transparent.” Are you disagreeing with that? If the costs don’t get transferred, then where do they go?
Posted by: Todd Zywicki | July 02, 2014 at 11:01 AM
Todd,
First the data, then the interpretation:
There are two problems with the Bankrate data. The first is representativeness; the second is what the data actually measures. The Bankrate data reports on 10 institutions in each of 25 markets. That is NOT 250 institutions across 25 markets, as you claim, because the arithmetic is not as simple as 10*25. The survey covers the 10 or so largest institutions in each of the 25 markets, and it's a pretty good wager that at least 4 of those institutions are the same in all or almost all of the markets. This representativeness problem particularly matters for Durbin because only around 100 banks and CUs nationwide, out of almost 16,000, are subject to Durbin. If the reporting is only on the largest banks in the various markets, it is reporting on banks subject to Durbin, which doesn't tell us anything about the banks not subject to Durbin. So there's a real problem in drawing any conclusions from this data.
The second problem is what is actually being reported. Bankrate reports on the fees and terms of a single type of account per institution. But institutions have lots of different types of accounts with lots of different terms. There is no reason to think that this sort of sampling is representative--or that it isn't; we simply can't tell. The point here isn't that the data is wrong. It's that it's not reliable in any way. If one is going to use this data, one needs to acknowledge its serious shortcomings, and it's really not responsible to base policy recommendations on this sort of data.
This second problem also appears to exist for the MoneyRates data, and you do not address it in your comment. Are the data reported for all accounts or only for a subset of accounts, and if a subset, how is it chosen? I don't think you know, and if you don't know what the data is actually reporting, how can you reach conclusions about what it means? (And if it is a subset, there are then representativeness issues.) This is a fundamental analytical problem that you haven't addressed.
Your points about the CFPB's data collection are really beside the point, but I'll make two observations on them. The complaint database is not meant to be a representative sampling. It's an intelligence-gathering tool, not a diagnostic tool, and it's not a stand-alone tool. It's one piece of a larger information-gathering process for the agency. Therefore, representativeness isn't really that important. But if you're a fan of getting better and more robust data, perhaps you'll start lobbying for a CFPB exemption from the Paperwork Reduction Act. If we want more empirical regulation, we have to allow real data collection. Otherwise, we're just boxing the agency into a no-regulation corner. (Or is that the plan?)
Again, the MoneyRates data just doesn't show what you claim. The semi-annual data shows a jump to a peak at EOY 2010, then a decline through EOY 2011, after which we have continual rises. If Durbin is the causal factor, none of this makes sense. First, the giant jump in the second half of 2010 is really hard to pin on Durbin specifically. On Dodd-Frank generally, perhaps, but no one knew what Durbin actually meant in 2010. Then, there's no explanation of the declines in 2011, and there's no explanation of the steady increases from EOY 2011 to present. Durbin hasn't been changing, so there are clearly other factors at work here.
On the "larger point," the idea I support is to make fees more transparent. I've got no problem with transparency. But that isn't the argument you're making. You're arguing that fees _rose_ because of Durbin, not that they became transparent. And transparency is not the same as transfer. The reason that transparency matters is that it enables market discipline. (You like market discipline, don't you?) And once there is market discipline, there is not likely to be a dollar-for-dollar transfer. Instead, there will be a reduction in the rents collected by the banks. That's exactly what we saw with the CARD Act, and there's no reason to think it wouldn't be the same with Durbin: subject fees to market discipline through transparent pricing, and the fees should come down.
Finally, I'm not sure where to start in responding to this line: "I don’t think the big banks would’ve been so foolish as to wait until the actual Federal Reserve rules became final before they increased bank fees." Why not? Big banks never do stupid things? [Hint: remember 2003-2008?]
There's a real paradox in your credo: if the banks could have increased their fees without losing so much business that the lost business offset the fee increases, wouldn't they have done so already? If you're going to argue that the banks have to offset their Durbin losses with higher fees elsewhere, that implies that there's a set amount of revenue the banks must have to operate. (I take issue with that assumption, but that's a longer argument.) It also implies that there cannot be anticipatory fee raising--or, if there is, it suggests that the banks are in fact foolish and have been leaving lots of money on the table all along, because they could have raised deposit account fees irrespective of interchange income. So which is it? Have the big banks foolishly been leaving money on the table all this time, or did they only respond to Durbin once they knew what the rule was?
Posted by: Adam Levitin | July 02, 2014 at 11:39 AM
Adam:
On the Bankrate data, that's exactly the point--to capture the Durbin effects through time using a consistent data set. Sullivan's study specifically compares Durbin-covered banks to non-Durbinized banks, and we present other data in the paper showing that the availability of free checking has not declined at non-Durbinized banks. So if the survey oversamples large banks, that actually reduces any problem of non-representativeness for the purpose of capturing Durbin effects.
Elsewhere in the paper we discuss Sullivan and others contrasting Durbin with non-Durbin banks.
The Moneyrates data shows more than a doubling in fees from the pre- to post-Durbin era ($5.85 to $12.23), then a slight decline (less than a dollar), then a jump again after the final rule was issued. I don't think that a decline of less than a dollar after an initial jump of more than $6--or, to put it in annual terms, a decline of about $10 after a jump of about $75 in one period--suggests that the initial jump wasn't significant (speaking, of course, of casual rather than statistical significance).
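For anyone checking the arithmetic, a quick sketch using the semiannual figures from my earlier comment:

    # Annualizing the monthly-fee moves from the MoneyRates figures.
    jump = 12.23 - 5.85      # Mid-2010 to EOY 2010 (the post-enactment jump)
    decline = 12.23 - 11.28  # EOY 2010 peak to EOY 2011 trough
    print(round(jump * 12, 2))     # 76.56 -- "a jump of about $75" per year
    print(round(decline * 12, 2))  # 11.4  -- "a decline of about $10" per year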
As for the CARD Act, it did not reduce rents; once combined with the effects of the Fed regulations that preceded it, it is clear that it led to increased costs and reduced access to credit for higher-risk borrowers. I will soon be posting a paper that reviews all the evidence on that point, but the underlying studies are all available now. The only papers that have concluded that there was no price effect are those that fail to control for the Fed regs that preceded the Credit CARD Act (some of them also don't consider output effects, such as the reduction in credit to higher-risk borrowers). Suffice it to say, the bulk of the evidence does not support the hypothesis that the Credit CARD Act merely reduced rents as opposed to having an efficiency effect.
As for the profitability point, again that is exactly the point--the big banks did lose business after they increased fees because of Durbin. Lower-income consumers became unbanked. Consumers in higher-income areas migrated to non-Durbin banks, according to Sullivan. Durbin changed the economics of the industry for large banks--they used to be able to bank relatively small-dollar consumers who did not use additional banking services, because debit card revenues were sufficient to cover those customers' marginal cost. But post-Durbin, the banks have instead dropped those now-unprofitable customers (and cut costs related to banking them, such as grocery store branches) and shifted their emphasis to customers with higher balances or to whom they can sell more products. In other words, now that those customers are no longer profitable, they can either pay the fees or walk, and the big banks don't really care if they walk. So it isn't just about the revenues generated from the fees; it is also about who stops paying them, namely, those who choose to drop their accounts (and either shift to a less-preferred bank or become unbanked).
So Durbin shifted up the supply curve for large banks, resulting in a lower equilibrium output and higher prices. How is that inconsistent with standard economics? If we impose a new environmental regulation on paper mills, the price of paper goes up, the output goes down, and we have a new equilibrium price and quantity. Does that demonstrate that the paper industry could've charged a higher price pre-regulation? Of course not. That's our point here--there was both a price and a quantity effect, with consumers either shifting to unregulated banks (as Sullivan shows, which isn't surprising) or becoming unbanked.
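For concreteness, here's a toy linear supply-and-demand model (every parameter is invented) showing the comparative statics: a per-unit regulatory cost shifts supply up, and the new equilibrium has a higher price and lower quantity.

    # Demand: P = 100 - Q.  Supply (pre-regulation): P = 10 + 0.5*Q.
    a, b = 100.0, 1.0
    c, d = 10.0, 0.5

    def equilibrium(cost_shift):
        # Solve a - b*q = (c + cost_shift) + d*q for q, then recover the price.
        q = (a - c - cost_shift) / (b + d)
        return q, a - b * q

    print(equilibrium(0.0))  # (60.0, 40.0): pre-regulation quantity and price
    print(equilibrium(9.0))  # (54.0, 46.0): output falls, price rises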
Posted by: Todd Zywicki | July 02, 2014 at 06:18 PM