Publications and Vanity Metrics
Are scientific publications a vanity metric?
Last week, while giving an informal talk about Disruptive Science as part of the McGill Dept. of Dentistry OHS series, an interesting thought emerged from our discussion- could it be that scientific publications, long considered the gold standard for evaluating research output, are a vanity metric?
First of all, what is a vanity metric? I believe the term was first used by Eric Ries in The Lean Startup, although it may have been borrowed from elsewhere. Briefly, a vanity metric is some sort of measurement, often in easily-digestible form, that can convincingly create the illusion of progress in a venture. (If you already know what I’m talking about, you can skip the next two paragraphs.) Vanity metrics are the opposite of actionable metrics: measurements that have hard meaning, and where improvement conclusively demonstrates real progress. Consider the classic example of an online merchant who tracks three things: distinct visitors, number of paying customers, and total revenue per customer per year. Imagine that in the early life of your website, you see a large increase in traffic: from 2,000 visitors in March to 15,000 visitors in June. Great success, right? Well, not necessarily- unless the whole point of your website is to attract page views (like an online news source), you’re probably trying to get your visitors to do something once they’re on your website; in this case, buy stuff. That growth from 2,000 to 15,000 visitors tells you nothing about whether your website is providing a valuable service to anybody; all it tells you is that more people are arriving and maybe spending some time there, not that they find any value in what you’re offering. On the other hand, going from 50 paying customers to 400 paying customers over a few months is a more actionable metric- it gives you a far better indication that yes, customers want what you’re selling and are willing to pay for it, and your growth reflects this demand. Finally, imagine that in the first year of your website’s life, customers spent an average of $200 to make 1-2 purchases per year, and in your third year each customer is spending an average of $800 to make 4-5 purchases/year.
That’s how you really know you’re making progress- if customers keep coming back to give you more of their money, you’re probably doing something right.
Now consider a situation where your website goes from 2,000 distinct visitors and 50 paying customers in March to 15,000 distinct visitors and 100 paying customers in May. Are you making progress? Well, it depends on what you look at. If you only look at your visitors, traffic is up 7.5×! Fantastic! If you only look at paying customers, though, you have a 100% increase- still good, but not as good. Now, for a more sobering analysis, look at the fraction of distinct visitors who actually make a purchase. You went from 50/2,000, or 2.5% of your visitors converting to paying customers in March, to 100/15,000, or about 0.7% of your visitors converting in May. That sucks! If your stated goal had been, ‘get more people who visit the website to decide to buy something’, then you’d be failing big time. Of course, if you only focus on the rosy numbers, you might think that everything’s going fine, when really you have a serious problem with your customer conversion rate. Hence, the dangers of vanity metrics.
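To make the arithmetic concrete, here’s a minimal sketch of the conversion-rate calculation, using the hypothetical numbers from the example above:

```python
# Conversion rate: the fraction of distinct visitors who become paying customers.
def conversion_rate(paying_customers, distinct_visitors):
    return paying_customers / distinct_visitors

march = conversion_rate(50, 2_000)     # 0.025   -> 2.5%
may = conversion_rate(100, 15_000)     # ~0.0067 -> ~0.7%

# Visitors went up 7.5x, but the actionable metric got worse:
print(f"March: {march:.1%}, May: {may:.1%}")  # prints "March: 2.5%, May: 0.7%"
```

The vanity metric (visitors) and the actionable metric (conversion) move in opposite directions here, which is exactly why looking at only the former is dangerous.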
Now back to science. I’ve argued in earlier posts that if the overall purpose of life science research is to improve the health of the population and drive progress in health care, then categorizing research into Sustaining Science vs. Disruptive Science can explain why we’re generating so much research output in some areas yet making so little progress in patient outcomes. One of the hallmark features of Sustaining Research is that it generates a lot of research output, as measured in publications, because all of the incentives point towards high publication rates. Disruptive Research, on the other hand, often leads to fewer publications (since more of your ideas will go belly up and fail), but with a select few that become hugely influential. As a great example, I heard that Peter Higgs, who just won the Nobel Prize in Physics for work on a fundamental particle that ultimately got named after him (the Higgs boson), only had seven research publications in his lifetime. Seven! Yet, were he judged by life science standards, we’d look at him and think, ‘Low h-index; he’s probably never accomplished much.’ Yeah, sure. On the other hand, you constantly see areas of research where the publication record has gotten so massive it takes an entire 6-year PhD just to read it all, yet we aren’t much closer to any real progress in patient care as a result. In fact, I wouldn’t be surprised if there were hardly any correlation between the sheer amount of research output, as measured by distinct peer-reviewed publications, and some objective measure of health care progress (I’m not actually sure what you would use- subject of a future post, perhaps?).
This is why I’ve been grappling with the following thought: what if life science publications are to health care what page views are to an online business? That’s kind of a scary thought. The entire academic research machine has been tuned to maximize publications, because that’s where all the incentives currently point. Could it be that we’re all unwittingly acting like a company optimizing its website to maximize traffic, while forgetting that the whole point of the site is to generate revenue? It sure could. While it may be in health care’s best interest for us to publish fewer, more useful papers (i.e. Disruptive Research!), even the most basic understanding of game theory will scream that the individual actors playing the game- academic researchers- will all act in their own best interest, which is to publish as much as possible. When viewed through that lens, publications squarely fit our definition of a vanity metric- they reward the individual actors involved (scientists) with the perception of progress, while in the grand scheme of things not actually mattering one bit for health care outcomes.
The problem with this assertion is that, on a local level, publications aren’t vanity metrics- they are actually the goal. At least, they’re the goal in a sustaining research paradigm. Unlike in business, where fine-tuning for vanity metrics will see you fail within a few years (and hopefully learn from your mistakes), there has been no shortage of very successful scientific careers built entirely on sustaining research publications. When junior researchers have to compete against those established labs for grant funding, they have no choice but to play the sustaining research game in order to build a foundation for their career, reinforcing the cycle.
Going forward, in order to fix this problem it seems as though we need a metric other than publications to judge research output. The problem is that unlike in business, where successful planning and execution can lead to return on investment and profit in a short period of time, it sometimes takes decades for research breakthroughs to translate into measurable impact on health care. How can you evaluate someone’s performance in real time if the consequences of success or failure will only be known in the future? We’ll need to adjust the metrics we use, and get creative with things we can easily measure right now. There is certainly no shortage of smart people in life science research who could easily adapt to a new structure of progress evaluation; it’s just a question of getting the incentives right.