I think we’ve figured out why, in the face of compelling evidence to the contrary, plaintiff attorneys in California continue to argue the work comp system mis-serves a large number of workers’ comp patients.
Before we delve into this, allow me to stipulate that many applicant attorneys are likely well-intentioned, seeking to do good, and may well believe that a lot of work comp patients are ill-served by the work comp system. Since they only talk with work comp patients who have complaints, that would not be surprising. And a relative few work comp patients are, indeed, ill-served by the system for a variety of reasons – a bad employer and/or boss, a crappy doctor, an under-trained and/or over-worked claims adjuster – even a lousy attorney.
That said, it appears their representative organization, the California Applicant Attorneys’ Association, is not conversant with research methodology, processes, or statistics – and that’s why they don’t understand that the work comp system is working pretty well.
I draw this conclusion after reading an article entitled “Calling all Applicants: The Injured Worker Survey” from a July 2016 CAAA publication. In the piece, author Richard Meechan argues:
“Nothing makes sense – up is down and they (the Committee on Health and Safety and Workers’ Compensation, or CHSWC) have graphs and charts to prove it.”
The “Injured Worker Survey” Mr Meechan refers to will apparently enable the CAAA to:
“see how the system is working for the most seriously injured workers. That would be workers that were out of work for more than a year, our clients, to be more exact.”
This is because the CAAA apparently doesn’t (want to) believe the myriad research studies published by research organizations about injured worker outcomes and related matters.
If you, dear reader, are puzzled by this, allow me to explain. Careful and valid data analysis by experts examining credible data sets can be, and often is, translated into “graphs and charts” to help the non-statistically-endowed understand what is really going on.
In the article, Meechan states he is skeptical of research finding “95 percent of medical requests were approved and that injured workers were satisfied with their medical treatment.” That skepticism resulted in the CAAA’s enlistment of three attorneys to help the CAAA committee on Health and Safety figure out how to “respond” to these “tales” (referring to the research presented at CHSWC meetings).
While I could find no evidence that any of the enlistees have an educational or experiential background in statistics, statistical analysis, business analysis, the physical sciences, or operations management (heavy in analytics), one of the three did study economics back in the nineteen-sixties. This isn't to denigrate the trio, but rather to contrast their relatively modest scientific research and statistical analysis credentials with those of folks who actually do research. Like CWCI. And WCRI. And RAND.
Using SurveyMonkey, the CAAA is conducting its "survey" and will likely publish "results" in an attempt to show that the information presented at CHSWC meetings – information based on reams of research published after hundreds of hours invested in very sophisticated analytical processes, employing highly-refined datasets and tested methodologies vetted by actual, real, live statisticians with decades of experience and darned impressive credentials in data analysis and everything that goes into it – is, well, wrong.
And CAAA will do this based on responses to an online, open-access survey with no data validation and no proof required that the respondent is actually an "Injured Worker."
Hey, you can try it yourself, here.
So here’s where the problem lies.
In the article, Mr Meechan notes that fewer than one percent (33) of the 3,700+ survey respondents asserted they had been out of work for a year or more. Apparently that is concerning. Mr Meechan asks others to help get the word out, as "one hundred responses is the gold standard for surveys and we are short."
That single statement demonstrates a complete lack of even a basic understanding of statistics. Mr Meechan is apparently confusing statistical validity with an arbitrary "gold standard". Further, there's an assumption that all that is needed is 100 SurveyMonkey responses from respondents who claim to have been injured and out of work for more than a year, and he and his associates will have what they need to refute all that science stuff CHSWC throws up there on the screen.
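For the sake of illustration, here's a minimal back-of-the-envelope sketch – in Python, using nothing but a textbook formula and made-up sample sizes, not anything from the CAAA survey – of the 95 percent margin of error for a survey proportion. Note that the calculation only means anything under simple random sampling, an assumption an open web survey doesn't come close to meeting, and there is no magic threshold at 100.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # 95% margin of error for a proportion under simple random sampling;
    # p = 0.5 is the worst case, z = 1.96 is the 95% confidence multiplier.
    return z * math.sqrt(p * (1 - p) / n)

for n in (33, 100, 1000):
    print(f"n = {n:>4}: +/- {margin_of_error(n):.1%}")

# n =   33: +/- 17.1%
# n =  100: +/- 9.8%
# n = 1000: +/- 3.1%
```

Even under ideal sampling conditions, 100 responses leaves roughly a ten-point margin of error either way – and the formula says nothing about whether the responses themselves mean anything.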
As anyone who has had one day of stats knows, without valid underlying data to start with, the whole exercise is pointless.
More directly, garbage in, garbage out.
And in this case, the underlying data is, indeed, meaningless. A gazillion monkeys could be typing away and deliver lots of “results”. Some whizkid could figure out how to program a bot to fill them out with no human intervention at all. More prosaically, a bunch of law clerks could earn some extra hours banging away on laptops or iPads completing SurveyMonkey surveys.
In this instance it is indeed possible that some or most of the respondents at some point had an encounter with the work comp system. Or not.
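For the statistically-minded, here's a quick simulation of what self-selection does to a survey estimate. The numbers – a 95 percent "true" approval rate and the response propensities – are invented purely for illustration; the point is the mechanism, not the magnitudes.

```python
import random

random.seed(1)

TRUE_APPROVAL = 0.95   # hypothetical "true" approval rate, for illustration only
N_POPULATION = 100_000

# Hypothetical response propensities: dissatisfied workers are assumed to be
# far more likely to seek out and complete an open online survey.
P_RESPOND_IF_APPROVED = 0.01
P_RESPOND_IF_DENIED = 0.20

population = [random.random() < TRUE_APPROVAL for _ in range(N_POPULATION)]

respondents = [
    approved
    for approved in population
    if random.random() < (P_RESPOND_IF_APPROVED if approved else P_RESPOND_IF_DENIED)
]

print(f"True approval rate: {TRUE_APPROVAL:.0%}")
print(f"Self-selected survey estimate: {sum(respondents) / len(respondents):.0%} "
      f"(n = {len(respondents)})")
```

With those made-up propensities, the self-selected survey "finds" roughly 49 percent approval against a true rate of 95 percent – and collecting ten times as many responses would only make the wrong answer more precise.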
I belabor this point not to embarrass the attorneys, for that is NOT my intent. Rather it is to point out an obvious conclusion:
Because reform opponents think a random SurveyMonkey survey will be more valid than real research studies conducted by experts, we now know why "nothing makes sense" to them.
They don’t have a clue.
They are totally, fundamentally, and blindingly ignorant of even the most rudimentary statistical terms and concepts.
Note – I don’t have a link to the original article. Sorry – ask CAAA for your copy.
“see how the system is working for the most seriously injured workers. That would be workers that were out of work for more than a year, our clients, to be more exact.” This is a good one! I have never really been able to correlate this; in my experience, severity really has nothing to do with being out of work for more than 12 months! There are a multitude of other factors that affect return to work, and they are rooted in the heart and culture of employers across the entire spectrum.
Where is the research that is not funded by employers or insurance carriers?
JG – thanks for the comment. I believe that would be RAND, however I’m not sure I follow your question. Do you have a concern that the basic research design, methodology, and vetting of the other research is somehow deficient? If so, what concerns do you have?
thanks
I’d like to suggest that rather than resorting to crude surveys and rancor, why not give the raw data that is the basis for the various studies, plus a description of the process that was followed, to a second or even a third competent group to see if they come up with the same conclusions as the initial group? That’s the scientific method, is it not? It would be more time consuming and “expensive” perhaps, but likely no more time consuming and expensive than the mistakes and unintended consequences that result from biased, single sources. If everyone has the same data, questions of bias and accusations of “research by conclusion” would be defused.
Another alternative is peer review by competent, independent third-party experts. Again, it might be time consuming, but similar to evidence based medicine, peer review and grading of the studies should yield better results.
Transparency, intellectual honesty and a lack of bias would be a good thing for everyone involved. Agreed?
Steve – thanks for the comment.
Actually, giving data to another party is not the “scientific method.” That is a much broader topic area, encompassing the entire scope of theoretical inquiry, data collection, observation, analysis, and review.
Moreover there is quite a difference between research for purely scientific purposes and statistical analysis of existing datasets. From your statement, it appears that you are concerned about “biased, single sources.” If the data itself is biased, then no analysis will produce an unbiased result.
As to your concerns about questions of bias on the part of the analysts, one has to have some credible reason to assert bias; merely claiming it may exist is insufficient.
Finally, as one of the many peer reviewers asked to assist two of the research organizations, I can assure you that this process is rigorous and diligent indeed.
I do agree that intellectual honesty and a lack of bias would be good. I also believe that before you criticize a credible research organization, you should have some solid ground on which to base that criticism. To date, I have heard nothing of the sort – at least none that passes the tests of intellectual honesty, lack of bias, and basic understanding of statistics and analytics.
If not the scientific method, what is your opinion regarding provision of the data to others? Why not do so?
Steve
Thanks for the response. I don’t follow the rationale. What data are you referring to, and why would it make sense to provide those data to some other party?
One of the biggest issues for the IW involves not receiving a call back from the MD’s office or the AA’s office, thus leaving the IW frustrated, anxious, and angry!
I know this from experience.
Thanks for your critique. Our criticism of the CHSWC is not with the science, it’s with the cherry-picking of the data from those studies. The public simply does not get complete copies, with tabulations, to be able to verify the information published.
I believe that CHSWC staff has a political mission for the administration. That mission is to show the success of their tinkering with the medical delivery system. My experience says it’s a disaster. We are looking at data to see if it is in fact working. The reason we seek out workers who have been out more than a year is that they are where the costs are in the system – the most injured. So CHSWC, WCI, publish your whole poll, not just the cherry-pickings. Otherwise, we have to do it on a shoestring, and SurveyMonkey is the best tool we have.
Mr Meechan – There is ample opportunity for the public to review many studies published by a variety of credible research organizations. I’m not sure what “tabulations” will do for anyone, and the raw data would be next to useless unless it is structured in such a way as to enable an educated analyst to best mine the data. I’m also not sure what “publishing the whole poll” means; are you asking for individual copies of all questionnaire responses?
Re the SurveyMonkey poll, no matter how many responses you get, the results are just not credible – or useful in any way.
Finally, I’d note that you are saying the system is a “disaster”, yet you only focus on less than 1 percent of the claimants in an effort to prove that claim. I will state the obvious: by not paying attention to the 99 percent, you are ignoring the vast majority of California’s work comp claimants. You say the system is a disaster, but are basing that judgment on your personal view into a handful of claimants – ones who seek an attorney. It stands to reason that individuals who retain an attorney are less than satisfied; that’s a self-selected group.
You are dealing in anecdote; CWCI, WCRI, and RAND are dealing with data.
There’s a big difference, and only by looking at system-wide data can you assess the “system”.
Changes in the system have always been driven by costs to the employer. Costs are driven by serious cases. Some 50% of workers’ compensation cases are first aid only, minor care, and no time loss. These people would have had the exact same outcome no matter what the rules of workers’ compensation said. Data on these cases tell us little or nothing about how the system works or how it serves the people it was meant to serve. The costs to the system are from the seriously injured. The costs come from people losing substantial time from work. When you look at measures imposed to save costs, shouldn’t you look at the people they impact the most? If you know of such studies, please let me know.