New news on yesterday’s post – turns out that ‘costs’ are not based on billed charges, but on payments. Unfortunately, the report doesn’t make that clear: nowhere does it define ‘costs’ as payments. Instead, it states that costs are based on billing data, and the data sources section explicitly links cost to billing data.
Here’s where the confusion lies.
On page 2, the report reads “Utilization measures represent the services that were billed by health care providers, regardless of whether those services were ultimately paid by insurance carriers. Duplicate medical bills and bills that were denied due to extent of injury or compensability issues as well as other outlier medical bills were excluded from the analyses. Cost and utilization measures were examined separately by type of medical service…”
Note there is no differentiation between utilization and cost measures, and no specific definition of ‘cost’. So perhaps that’s a bit misleading.
But wait, there is another statement that certainly seems to describe how ‘cost’ figures were derived:
In the data sources section (also on page 2), the report reads “Medical cost, utilization of care, and administrative access to care measures were calculated using the Division of Workers Compensation’s medical billing data [emphasis added].” Seemed pretty straightforward to me.
Unfortunately, I wasn’t the only one confused: two large payer clients interpreted the statement the same way I did.
My post generated a good bit of excitement at the REG and among stakeholders. Bill Kidd reported on it at WorkCompCentral, where DC Campbell, director of the department’s Workers’ Compensation Research and Evaluation Group, was quoted as saying “Paduda expresses concern about the results since Coventry covers ‘much’ of the non-network claim population. It’s not clear from his statements [emphasis added] whether this refers to Coventry’s market share in terms of utilization review activities, bill review activities, contractual discounts outside of certified networks, etc.”
I’m not sure where the confusion lies, as I clearly referred to ‘networks’ in the post yesterday…
For example, several of the networks are based on the Coventry work comp network – Liberty, Travelers, and Texas Star (the Star network was designed by Texas Mutual, and is much smaller than the overall Coventry network). There was significant variation among these three Coventry-based networks, variation that may well be due to the relatively small sample size and relative “newness” of the claims analyzed – the claims haven’t developed sufficiently to draw ‘conclusive conclusions’.
The net is: the report uses payments for all cost calculations. Thanks to Amy Lee and DC Campbell for setting me straight. OK, now that that’s behind us, I’m still not sure what to make of the report’s findings. According to the report, claimant demographics were accounted for, I assume to enable fair comparisons among the various networks. Yet the report didn’t note that three of the networks are provided by one company – Coventry, which also administers a network that likely underpins much of the ‘non-network’ category.
Consider that Liberty’s average medical costs were lower than non-networks’ in 4 categories, and Coventry’s were lower in 3 – yet all three entities used the same underlying Coventry network. And that’s just one of the findings. Claims in the Coventry network had higher overall medical costs than non-network claims, as well as higher hospital inpatient and outpatient costs. Both Coventry and Travelers network claims had higher inpatient utilization than non-network claims, but Liberty’s was lower. Coventry outperformed non-networks in release to return to work, but Liberty and Travelers underperformed non-networks.
So, what does this mean for you?
It sure doesn’t look like one can draw any meaningful conclusions from the report’s findings.
Kudos to the Texas REG and their supporters for funding and conducting the research. With more time for the data to mature, more clarity on definitions, more disclosure about the similarities among the networks being studied, and more discussion about possible reasons for the disparate results from all-but-identical networks, their work will be much more useful.
Joe –
Since the people at TDI seem to follow your blog, you might suggest to them that the “Report Card” is just plain insignificant. The categories of comparison are so general as to be meaningless for any stakeholder or policymaker. For example, what is the value to the ‘student’ of including billed services in the analysis whether or not they were ultimately paid? That makes no sense.
I notice that the drafters of the study kept saying that the billed data file was not adequate! If TDI requires all payers to submit their data file in the ANSI X12 837 version 4010 format, there is a field for claim payment! That field has been in the claim data file adopted by TDI’s predecessor – TWCC – back in 1993! Strikes me that there hasn’t been much progress in data collection and analysis over the past 15 years.