April 29, 2026
When Bad Data Looks Like Bad Intent: The Real Fight Over Causation in Healthcare Compliance
- by Sean Weiss, Partner & VP of Strategic Litigation Services
In healthcare enforcement, the most dangerous mistake is also one of the most common: treating alarming data as though it were the same thing as a complete explanation.
That is the real issue here.
On one side of the equation is a familiar and entirely legitimate regulatory instinct. When claims data shows impossible days, service overlaps, billing during periods of incarceration, services after a patient’s death, or notes that appear inconsistent with the claims that were submitted, those patterns are not trivial. They are serious. They are exactly the kinds of indicators that should trigger scrutiny under federal and state law. They are the kinds of indicators that can justify aggressive administrative action, including suspension, overpayment review, exclusion activity, and civil monetary penalty exposure within the federal or state program-integrity framework.
From that perspective, the case for the enforcement view is not difficult to understand.
Healthcare programs cannot wait for perfect information before acting. If data reflects patterns that suggest the program may be paying claims that should never have been paid, regulators are expected to intervene. They are expected to protect the integrity of the Medicare, Medicaid, and other federal payor programs. They are expected to stop the bleeding first and sort out the details through process. That is not overreach. That is program integrity.
And when the concerns are not limited to a single billing edit, but instead span multiple categories, the enforcement narrative becomes even stronger. A record that includes alleged daily hour impossibilities, setting-based overlaps, enrollment conflicts, note-integrity concerns, and overpayment issues will always be difficult for any provider organization to dismiss as random noise. That kind of pattern does not merely invite questions. It demands them.
But that is only half of the story, and stopping there would be a profound compliance mistake.
The countervailing view is not that troubling data should be ignored. It is that troubling data should be investigated correctly.
That distinction matters.
A payment suspension under 42 C.F.R. § 455.23 is an interim administrative safeguard. It is not a final root-cause determination. A claims spreadsheet is an analytic screen. It is not a complete forensic reconstruction of what happened inside a provider’s operations, documentation systems, clearinghouse workflow, software configuration, or claim-submission logic.
That is where cases become far more interesting and far more important than a simple fraud-versus-no-fraud narrative.
The central methodological question is whether claims analytics, standing alone, can reliably tell us why the pattern occurred.
In my view, the answer is no.
Claims data can show concentration. It can show spikes. It can show overlaps. It can show volumes that look facially impossible. What it cannot do, by itself, is distinguish among materially different causes. It cannot reliably tell us whether the pattern was driven by intentional misconduct, poor internal controls, weak supervision, documentation failure, bad training, claim duplication, rendering-provider attribution errors, place-of-service defects, clearinghouse behavior, or software mapping logic that contaminated the claims stream before the data was ever analyzed.
That is not a minor point. It is the point.
If services furnished by multiple individuals are aggregated under one identifier, utilization reports can become grotesquely inflated. If location fields are omitted or mapped inconsistently, overlap analytics can produce a distorted picture of where the service supposedly occurred. If corrected claims, batch posting, or replacement transactions are not properly reconciled, an “impossible day” may look self-evident on paper even though the underlying operational reality is more complicated.
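The attribution mechanism described above can be made concrete with a short sketch. Everything below is hypothetical: the identifiers, the claim lines, and the 16-hour screening threshold are invented for illustration, not drawn from any actual audit tool. The point is only that the same service lines produce an "impossible day" under one grouping and a perfectly ordinary day under another.

```python
from collections import defaultdict

# Hypothetical claim lines: (billing_id, rendering_id, date, minutes).
# Three different clinicians each furnish six hours of services, but the
# claims pipeline maps all of them to a single billing identifier.
claims = [
    ("GROUP-01", "CLIN-A", "2025-03-10", 360),
    ("GROUP-01", "CLIN-B", "2025-03-10", 360),
    ("GROUP-01", "CLIN-C", "2025-03-10", 360),
]

IMPOSSIBLE_MINUTES = 16 * 60  # illustrative screen: flag >16 billed hours/day

def daily_totals(claims, key_index):
    """Sum billed minutes per (identifier, date), grouping by the chosen field."""
    totals = defaultdict(int)
    for line in claims:
        totals[(line[key_index], line[2])] += line[3]
    return totals

# Screen on the billing identifier: all services collapse onto one NPI.
by_billing = daily_totals(claims, key_index=0)
flags_billing = {k: v for k, v in by_billing.items() if v > IMPOSSIBLE_MINUTES}

# Screen on the rendering clinician: the same services, correctly attributed.
by_rendering = daily_totals(claims, key_index=1)
flags_rendering = {k: v for k, v in by_rendering.items() if v > IMPOSSIBLE_MINUTES}

print(flags_billing)    # aggregated view shows an 18-hour "impossible day"
print(flags_rendering)  # attributed correctly, no day exceeds the threshold
```

Run on this toy data, the billing-identifier view flags an 18-hour day while the rendering-provider view flags nothing. The underlying services never changed; only the attribution did. That is the mechanism, not the misconduct.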
This is why experienced auditors do not stop at the spreadsheet.
They review medical records. They review scheduling records. They review treatment plans, service logs, encounter metadata, eligibility files, admission and discharge information, remittance history, clearinghouse activity, user logs, and change histories. In other words, they perform triangulation. They compare the claims universe, the clinical record, and the technology trail.
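The triangulation logic can be sketched in a few lines. This is a simplified model, not a real audit workflow: the record sources, identifiers, and decision labels are all invented for illustration. The idea it captures is that a screen hit that is fully corroborated by independent sources points toward a pipeline or mapping mechanism, while a hit with no corroboration points toward the claim itself.

```python
# Hypothetical flagged claim and two independent corroboration sources.
flagged_claim = {"patient": "PT-77", "date": "2025-03-10", "rendering_id": "CLIN-A"}

schedule = {("PT-77", "2025-03-10", "CLIN-A")}   # scheduling-system entries
ehr_notes = {("PT-77", "2025-03-10", "CLIN-A")}  # signed encounter notes

def triangulate(claim, schedule, ehr_notes):
    """Compare a flagged claim against independent record sources."""
    key = (claim["patient"], claim["date"], claim["rendering_id"])
    checks = {
        "scheduled": key in schedule,
        "documented": key in ehr_notes,
    }
    corroborated = sum(checks.values())
    if corroborated == len(checks):
        # Fully corroborated: the service likely occurred, so the anomaly
        # probably originates in the claims pipeline or mapping logic.
        verdict = "investigate claims pipeline and mapping"
    elif corroborated == 0:
        # Nothing supports the claim: the claim itself is the concern.
        verdict = "escalate: no supporting record found"
    else:
        verdict = "mixed: pull user logs and change histories"
    return checks, verdict

checks, verdict = triangulate(flagged_claim, schedule, ehr_notes)
print(checks, verdict)
```

In this toy case both sources corroborate the encounter, so the sketch routes the anomaly toward the claims pipeline rather than toward the provider. A real review would weigh many more sources, but the structure of the decision is the same.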
Without that triangulation, a reviewer may be looking at the symptom rather than the mechanism.
And that brings us to one of the most consequential features an enforcement record can contain: an auditor's affidavit acknowledging that medical-record review is necessary to fully understand what an audit actually shows. That concession is not procedural window dressing. It is a professional admission that billing data alone does not complete the analysis. It confirms what every seasoned compliance officer already knows: a serious billing concern may be real, but the cause of that concern still has to be proven.
That is where the merits of the defense-oriented position are strongest.
Not because bad data should be excused.
Not because poor documentation should be minimized.
Not because software should become a universal alibi.
But because methodology matters, and it matters most when the consequences are severe.
There is another feature of the record that should not be overlooked: in many reviews, some concerns are narrowed or rescinded while others remain in place. That is significant. It demonstrates that early pattern detection can change when additional information is reviewed. In compliance terms, that is exactly what one would expect in a complex case. Preliminary analytics cast a wide net. Deeper review refines the picture. Some concerns harden. Others weaken. That is not evidence of system failure. It is evidence that causation requires disciplined follow-through.
So where does that leave a case on the merits? It leaves it in a place that sophisticated healthcare lawyers and regulators should recognize immediately. The enforcement case is strongest to the extent it rests on the breadth and seriousness of the billing indicators. Multiple categories of irregularities, especially those that implicate note integrity and program eligibility, are not easily brushed aside. Regulators have every right to treat those patterns as dangerous.
The opposing case is strongest to the extent it insists that no one should confuse detection with explanation. A bad pattern can be real without the initial explanation being complete. An agency can be justified in acting without that action resolving whether software logic, provider-mapping defects, workflow design, or other operational failures contributed to the pattern, magnified it, or misattributed it.
That is why every case matters beyond its own facts. If the lesson drawn from it is that ugly data always equals proven intent, compliance methodology will deteriorate into analytics absolutism. If the lesson is that software allegations automatically neutralize dangerous billing patterns, program integrity will collapse into excuse-making. Both outcomes are wrong.
The right lesson is harder and more disciplined. In healthcare compliance, patterns matter. Records matter. System mechanics matter. Workflow matters. And when the stakes involve suspension, overpayment exposure, exclusion risk, and accusations that can permanently alter careers and organizations, a responsible conclusion must be built on all of them.
That is not leniency.
That is rigor.