
Curiosity is both a blessing and a curse. Wanting to know how you compare to other practices can be so enticing that we are sometimes willing to accept assumptions that are not always supported by the evidence. How many patients a day should my provider be seeing or treating? What should I expect with respect to collections on copay amounts? How many hours (or FTEs) should my providers be reporting? How many work RVUs constitute an FTE physician? What should we be charging for our office visits? And on and on it goes.

For many practices, these questions lead directly to the desire to see what the answers are for other practices, in the hope that this will somehow help us establish goals and objectives for our own practice. This is where benchmarking comes in.

One might define benchmarking as follows: a process of measuring the performance of a company’s products, services, or processes against those of another business considered to be the best in the industry, aka “best in class.” The point of benchmarking is to identify internal opportunities for improvement. In general, this makes sense to me. The problem is defining “best in the industry” or “best in class”, especially for a medical practice. If I were making widgets, I could easily compare the number of widgets I produce a day with that of a competitor, but to make the comparison valuable, I would have to know more about their operation. For example, how many hours a day do they produce widgets? Is it for one shift or three shifts? Are workers incentivized so that the more widgets they produce, the more they get paid? How about quality? What is the rejection or recall rate? And we might even want to know about the widget-producing equipment they use. Finally, where does the data originate? Is it some third-party estimate? Does the company self-report? Doing this for widgets is actually easy, but not so for the more complex medical practice.

Let’s ask these same questions when benchmarking against other practices. To begin, what other practices? Let’s say we are using the Medicare database, also known as the Physician/Supplier Procedure Summary (PSPS) database. This contains all fee-for-service (i.e., not including managed care) claims submitted to Medicare. In fact, there are millions of lines in the PSPS table that represent hundreds of millions of claims. And in total, some 75% of those lines have no utilization data. Why is this? Because CMS applies a filter: if a procedure is performed fewer than 11 times by a given provider, the data are not published. The Public Use File (PUF) is another popular source for benchmarking data, but it’s even worse there. That file represents some one million providers, and the same restrictions apply. But because we are at a more granular level, even more lines of data are omitted.
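To make the suppression effect concrete, here is a minimal sketch of how a low-volume filter like the one described above hollows out a dataset. The rows and counts below are invented for illustration; they are not actual PSPS or PUF data.

```python
# Toy illustration of a CMS-style low-volume suppression rule:
# provider/procedure lines with fewer than 11 occurrences are
# dropped before publication. All rows here are hypothetical.
claims = [
    {"provider": "A", "cpt": "99213", "count": 250},
    {"provider": "A", "cpt": "99215", "count": 8},   # below threshold -> suppressed
    {"provider": "B", "cpt": "99213", "count": 40},
    {"provider": "B", "cpt": "99490", "count": 5},   # below threshold -> suppressed
]

published = [row for row in claims if row["count"] >= 11]
suppressed_share = 1 - len(published) / len(claims)
print(f"{suppressed_share:.0%} of lines suppressed")  # 50% in this toy set
```

Note that the suppressed rows are not random: they are precisely the low-volume services, so whatever survives the filter is a biased slice of what the provider actually does.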

The point is this: if the purpose of benchmarking is to compare against some “gold standard”, we first need to define what that means. And then we have to figure out a way to determine whether the data we are looking at meet the minimum criteria for that designation. And the truth is, I don’t know any way this can be done with the Medicare database.

Again, while we may know which docs are included, since they participate in Medicare, we can’t aggregate them into a practice. We can’t combine the lines into associated claims. We don’t know what the culture is like at their practice, nor can we answer any of the other 25 or so questions needed to control for spurious variables.

In the end, my opinion is that we should consider internal benchmarking. That means establishing a goal that is reasonable and doable and moving in that direction. For example, maybe your minimum tolerance for collecting copay amounts is 75%. Then go for that. Who cares what another practice is collecting? Again, you have no way to know their collection methods. Or start from a financial goal and work backward: divide it by your average revenue per patient to figure out how many patients a day the doc needs to see. Then work toward that goal.
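The work-backward arithmetic above is simple enough to sketch in a few lines. All the numbers here are made-up examples for illustration, not recommendations for any actual practice:

```python
# Internal benchmark sketch: patients per day needed to hit a revenue goal.
# Every figure below is a hypothetical example.
annual_revenue_goal = 600_000      # dollars per year
avg_revenue_per_patient = 125      # dollars per visit
clinic_days_per_year = 220         # days the provider actually sees patients

patients_per_day = annual_revenue_goal / (avg_revenue_per_patient * clinic_days_per_year)
print(round(patients_per_day, 1))  # about 21.8 patients per day in this example
```

The point is that the target comes from your own financials and your own schedule, not from someone else’s database.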

Not only is external benchmarking unreliable, but it can also be harmful in the long run. So rather than worry about what others are doing, set your own goals and spend your resources working to achieve them. And that’s the world according to Frank.

Author: Frank Cohen


Frank Cohen is the Director of Analytics and Business Intelligence for DoctorsManagement, LLC. His areas of expertise include data mining, applied statistics, and predictive analytics. In addition, he provides compliance risk analysis and meaningful assistance to healthcare organizations in the areas of process improvement, compliance, quality and profitability. 

To learn more, visit the compliance service pages of our website. To speak with Frank or one of our compliance professionals directly, please call 800.635.4040 or submit a form on one of our service pages.


