Analysis: First Coast Reports Error Rate Differences

On March 19, a Medicare Administrative Contractor (MAC) posted findings for four common billing errors in Florida that differ dramatically from those previously reported by CMS’s Medicare Recovery Audit Contractor (RAC).

First Coast Service Options (FCSO) conducted so-called widespread “probes” on four MS-DRG groups, in an apparent attempt to “validate” recent findings by Connolly Healthcare, which held the contract for RAC operations in Region C, a region that includes Florida.

The MS-DRG groups were:

  • MS-DRG 074 Cranial & peripheral nerve disorders w/o MCC
  • MS-DRG 092 Other disorders of nervous system w/CC
  • MS-DRG 419 Laparoscopic cholecystectomy w/o C.D.E. w/o CC/MCC
  • MS-DRG 491 Back & neck procedure except spinal fusion w/o CC/MCC

The error rates found by FCSO differed dramatically from those found by Connolly, according to the numbers posted by FCSO:

MS-DRG Reviewed   Connolly Error Rate   FCSO Error Rate
074               89.87%                7.77%
092               14.29%                6.49%
419               91.55%                2.74%
491               91.98%                23.0%

My experience with these kinds of denials is that they are “low-hanging fruit” to the RACs: the great majority of them will have very few additional diagnoses – and corresponding ICD-9 codes – which often means the documentation is rather sparse. That makes them easier targets for a medical necessity denial.

What Does This Report Mean?

There are those who will immediately exclaim, “SEE!  This proves the RACs are wrong more than 90% of the time!”  They would be wrong. To be frank, it proves nothing of the sort.

While the report seems to show that the RACs are often “wrong,” all it really shows is that FCSO gets different results when it reviews “similar” claims. To claim that the RACs are wrong, you must accept that the MAC is correct. So really, which one is “correct”?

Here’s a simple example that should illustrate how to approach a situation where you have two contradictory reports…

How To View Differing Results

Let’s suppose you have two people measure the height of a building:

  • One person measures the building and reports its height as 60 feet.
  • A second person measures the same building and reports its height as 50 feet.

If there was no change to the building between the two readings, then both reports cannot be true.

Most would conclude that one or the other report is incorrect. But there are other possible answers to this conflict:

  • both answers could be wrong, or
  • one or both of the “systems” doing the measurement and/or reporting is faulty, or
  • maybe we don’t even understand “height” and how to measure it correctly.

Also, consider how your conclusion would change if the difference between the reports were only 1 foot instead of 10 feet.
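
To make that concrete, here is a minimal sketch (the readings and the tolerance are hypothetical, not taken from either contractor’s report) of one way to decide whether two measurements genuinely conflict or merely disagree within the expected error of the measuring “system”:

```python
def measurements_conflict(a: float, b: float, tolerance: float) -> bool:
    """Return True when two readings differ by more than the measurement tolerance.

    The tolerance is an assumption about how much error the measuring
    "system" itself can introduce; the readings only truly conflict when
    their gap exceeds what that system could plausibly explain.
    """
    return abs(a - b) > tolerance


# Hypothetical readings of the same building, with an assumed +/- 1 foot tolerance.
print(measurements_conflict(60.0, 50.0, tolerance=1.0))  # True  -> at least one report must be wrong
print(measurements_conflict(60.0, 59.5, tolerance=1.0))  # False -> the gap fits inside ordinary error
```

A 10-foot gap against a 1-foot tolerance means at least one report (or the measuring process itself) is faulty; a half-foot gap tells you nothing about who is “right.”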

So… what to think of this report, that so plainly reports a huge conflict between what the RAC reported and what the MAC found instead?

My point is – both cannot be true, but both could be false.

And… there are motives hidden in every report you’ll ever see, anywhere, anytime. If you want to suggest that somebody else’s report proves something for you, you’d be wise to dig into the report and apply some critical thinking before offering it as evidence to support your cause.

 

Differing Motives Produce Differing Results

Here’s my thinking, in five points:

1 – First, I’d like to know sample sizes and the source of the RAC figures. Are we really comparing apples to apples? I doubt it. Therefore, all the results shown are dubious.

2 – The last example shown on the FCSO page gives me pause about their methodology:

  • “Twenty-one providers were included in this widespread probe…”
    – How were these providers chosen? Are they similar or different in size, number of discharges, and so on?
  • “…an average sample size of 3-5 claims.”
    – Wait… if you’re giving an “average,” why can’t you give a real number instead of a RANGE?
  • “Error rates for these providers ranged from 0 to 66 percent…”
    – Oh, they had to give that MAX number – it makes providers look the worst.
  • “…with an overall error rate of 23 percent.”
    – Is this a simple average of the 21 providers’ error rates, or a rate pooled across all the sampled claims? And what conclusion are we supposed to draw from that number? (See the sketch after this list.)
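
To illustrate why those numbers tell us so little, here is a minimal sketch with invented figures (FCSO published neither the per-provider claim counts nor the raw denial counts, so every number below is an assumption). It shows how wildly a provider’s “error rate” can swing when it rests on only 3-5 claims, and how the “overall error rate” changes depending on whether you average the provider rates or pool all the claims:

```python
from math import sqrt

def wilson_interval(errors: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Approximate 95% confidence interval for an error rate observed on n claims."""
    p = errors / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    spread = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return (max(0.0, center - spread), min(1.0, center + spread))

# Hypothetical providers: (claims sampled, claims denied). These counts are
# invented for illustration only -- FCSO did not publish the raw data.
providers = [(3, 2), (3, 2), (4, 1), (4, 0), (5, 0), (5, 0)]

for n, errors in providers:
    low, high = wilson_interval(errors, n)
    print(f"{errors}/{n} denied -> rate {errors/n:.0%}, 95% CI roughly {low:.0%} to {high:.0%}")

# Two different "overall error rates" from the same data:
mean_of_rates = sum(e / n for n, e in providers) / len(providers)          # average the provider rates
pooled_rate = sum(e for _, e in providers) / sum(n for n, _ in providers)  # pool all sampled claims
print(f"average of provider rates: {mean_of_rates:.1%}")
print(f"pooled rate across claims: {pooled_rate:.1%}")
```

With samples of 3-5 claims, a single questionable denial moves a provider’s rate by 20 to 33 percentage points, and the confidence intervals are so wide that a reported spread of “0 to 66 percent” is exactly what noise looks like.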

3 – I’d love to see the RAW data, so I can draw more appropriate conclusions. Of course, that would mean they’d have to answer some questions about how they pulled the data, and that might invalidate all the data, so maybe that’s why they won’t disclose the raw data.

4 – All this proves is that neither the RACs nor FCSO is capable of generating real, honest, useful reports. OK, maybe I’m not being fair to Connolly or the other RACs, since perhaps they had no part in assembling this report. (So what.)

5 – Perhaps all of this is evidence for how different motivations produce different results from reviewers; a toy sketch below illustrates the effect:

Given:
– MACs are not incentivized to find errors.
– RACs are highly incentivized to find errors.
– MANY “Errors” are identified via subjective criteria.

Therefore – MANY MORE “Errors” are identified when reviewers are highly incentivized and allowed to use subjective criteria, without recourse.

And… if you are CMS, substitute “Fraud” for the word “Error” above…

Remember, the program can easily be shown to be about Money, NOT about “proper billing,” and absolutely NOT about the care of beneficiaries.
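
To make point 5 concrete, here is a toy sketch (the documentation scores and review thresholds are entirely hypothetical, not drawn from either contractor’s criteria) of how the same stack of claims can yield very different “error rates” when one reviewer has no stake in the outcome and another is paid to find errors and may set a subjective bar wherever it likes:

```python
# Hypothetical "documentation strength" scores for ten claims, from 0 (thin) to 1 (airtight).
# The scores are invented purely for illustration.
claims = [0.15, 0.30, 0.35, 0.45, 0.50, 0.55, 0.60, 0.70, 0.80, 0.90]

def error_rate(scores: list[float], threshold: float) -> float:
    """Share of claims a reviewer denies: anything scoring below the reviewer's bar."""
    denied = sum(1 for score in scores if score < threshold)
    return denied / len(scores)

# A reviewer with no stake in the outcome sets a modest bar; a reviewer paid a
# contingency fee on each denial can justify a much higher, subjective bar.
print(f"neutral reviewer (bar = 0.30): {error_rate(claims, 0.30):.0%} error rate")
print(f"incentivized reviewer (bar = 0.75): {error_rate(claims, 0.75):.0%} error rate")
```

Nothing about the claims changes between the two runs; only the reviewer’s incentive and the subjective bar do, and the reported “error rate” jumps from 10 percent to 80 percent.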
