Health and Medical News and Resources

General interest items edited by Janice Flahiff

[Reblog] The Smoking Gun: How U.S. Health Care Came to Cost Insanely More

From the 20 May 2015 post at The Health Care Blog


Health care that costs more than it needs to is not just an annoyance; it’s a big factor in income inequality in the United States. The financial, physical and emotional burdens of disease are major drivers of poverty. At the same time, the high cost of health care even after the Affordable Care Act means that many people don’t access it when they need it, and this in turn deprives large swathes of the population of their true economic potential as entrepreneurs, workers and consumers. People who are burdened by disease and mental illness don’t start businesses; don’t show up for work; and don’t spend as much money on cars, smartphones and cool apartments. Unnecessary sickness is a burden to the whole economy.

How did we get this way? What was the mechanism that differentiated U.S. health care from all other advanced countries? The usual suspects (such as “We have the most sophisticated research and teaching hospitals,” or “It’s the for-profit health insurers” or “Doctors make too much”) all fail when we compare the United States with other sophisticated national systems such as those in Germany and France. Other countries have all of these factors in varying amounts — private health insurers, world-class research, well-paid physicians — and cost a lot, but still spend a far smaller chunk of their economy on health care. Blame has been leveled in every direction, but in reality no single part of health care has been the driver. The whole system has become drastically more expensive over the last three decades.

What’s the Mechanism?

Since the difference between the United States and other countries is so large and obvious, there should be some way we can look at health care spending that would make that mechanism jump out at us. And there is a way.

The first big leap in U.S. health care spending comes between 1982 and 1983. What was different in 1983 that was not there in 1982? DRGs, diagnosis-related groups — the first attempt by the government to control health care costs by attaching a code to each item, each type of case, each test or procedure, and assigning a price it would pay in each of the hundreds of markets across the country. The rises continue across subsequent years as versions of this code-based reimbursement system expand from Medicare and Medicaid to private payers, from inpatient to ambulatory care, from hospitals to physician groups and clinics, to devices and supplies, eventually becoming the default system for paying for nearly all of U.S. health care: code-driven fee-for-service reimbursements.

Cost Control Drives Costs Up?

How can a cost control scheme drive costs up? In a number of ways: In an attempt to control the costs of the system, the DRG rubric controlled the costs of units, from individual items like an aspirin or an arm sling to the most comprehensive items such as an operation or procedure. The system did not pay for an entire clinical case across the continuum of care from diagnosis through rehab; or for an entire patient per year on a capitated basis, which would capture the economic advantages of prevention; or for an entire population. While it is more cost-effective (as well as better medicine) to provide a diabetes patient with medical management, in-home nursing visits and nutritional counseling rather than, say, waiting until the patient needs an amputation, the coding system actually punished that efficiency and effectiveness. Under this system, we got paid for our inefficiencies, and even for our mistakes: Do-overs would often drop far more to the bottom line than the original procedure did.
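To make that incentive concrete, here is a minimal sketch in Python, using purely hypothetical dollar figures and service counts (none of them from the post), of how a code-driven, per-unit payer and a capitated payer would reward the same diabetes patient's care:

```python
# Toy model: code-based fee-for-service vs. capitated payment for the same
# diabetes patient. All dollar figures are hypothetical, for illustration only.

# Fee-for-service: every coded unit generates revenue.
ffs_prevention = {
    "office visits (4/yr)": 4 * 120,
    "nutritional counseling sessions (3)": 3 * 90,
    "in-home nursing visits (6)": 6 * 150,
}
ffs_amputation = {
    "ER admission": 1_500,
    "amputation procedure": 30_000,
    "inpatient days (5)": 5 * 2_500,
    "rehabilitation": 8_000,
}

# Capitation: one fixed payment per patient per year; the provider keeps
# the difference between the payment and the cost of the care delivered.
capitated_payment = 9_000    # hypothetical per-member-per-year rate
cost_prevention = 2_000      # hypothetical cost of managing the patient well
cost_amputation = 45_000     # hypothetical cost once the case becomes acute

print("Fee-for-service revenue, prevention path: ",
      sum(ffs_prevention.values()))    # 1,650 -> prevention pays the least
print("Fee-for-service revenue, amputation path:",
      sum(ffs_amputation.values()))    # 52,000 -> the late, acute path pays the most
print("Capitated margin, prevention path: ", capitated_payment - cost_prevention)
print("Capitated margin, amputation path: ", capitated_payment - cost_amputation)
```

Under the per-unit coding scheme the acute path generates far more revenue than prevention; under capitation the provider comes out ahead by preventing the acute episode.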

The system punished, rather than rewarded, spending more time with patients, trying to help patients before their problems became acute, or maintaining a long-term, trusted relationship with patients. Under a code-driven fee-for-service system, getting serious about prevention and population health management would be a broad road to bankruptcy.

If extra items were deemed necessary (an extra test or scan, say), there were codes for that, and reimbursements awaiting. In so doing, the system rewarded doing more (“volume”) rather than whatever would be the best, most appropriate, most efficient treatment path (“value”). It provided a written, detailed catalog of reimbursements which rewarded diagnoses of greater complexity, rewarded new techniques and technologies with new and usually higher reimbursements, and especially rewarded systems that invested in a greater capability to navigate the coding system. At the same time, the reimbursements were constantly open to pressure from the industry. Each part of the industry, each region, each specialty, each part of the device industry, became fiercely focused on keeping those reimbursements up, and getting new codes for more costly procedures.

The business and strategic side of health care became a matter of making money by farming the coding system. Do more of what gets better reimbursement, less of what does not. Make sure every item gets a code and gets charged for. The codes became a manual for success, a handbook for empire.

The Smoking Gun

The smoking gun is right there in the chart, at the big split between the trajectories of the United States and other countries. And today, at this moment, the code-based fee-for-service payment system still underlies the large majority of revenue streams across health care.

The unifying factor between multiple new strategies unfolding in health care right now, including patient-centered medical homes, pay for performance, bundled prices, reference prices, accountable care organizations, direct pay primary care and others, is to find some way around the strict code-based fee-for-service system, either by avoiding it entirely or by adding epicycles and feedback loops to it to counter its most deleterious effects.

There is no perfect way to pay for health care. All payment methods have their drawbacks and unintended consequences. But the code-based fee-for-service system got us here, and any path out of the cost mess we are in has to get us off that escalator one way or another.

May 23, 2015 | Posted in: health care

[Reblog] Value-Based Care’s Data Problem

From the 22 May 2015 post at The Health Care Blog

I believe the concept of value-based care is good for healthcare. VBC encourages providers to make changes that put the patient at the center of care, so that different services can be provided across providers in a collaborative way. If all went according to the VBC vision, there would be fewer redundant tests, more emphasis on preventative care, and an effort to keep high-risk patients out of the emergency room. It’s also better for costs, something we desperately need in the US, where healthcare spending per capita is more than twice the OECD average.

But Lisa’s story, at the leading edge of the value-based experiment, is not good at all. ACOs and most other value-based models are new, constantly changing, and unproven. ACOs report on 33 metrics that are supposed to represent the quality of care provided by their networks of providers. Thirty-three metrics are still extremely limited in scope, yet any more would have made Lisa’s job impossible. So far, few ACOs have reported any savings. Worse, the metrics themselves are unproven. What if they overemphasize standardized process over patient outcomes? And what if efficiency measures result in neglectful and impersonal care? A lot is riding on Lisa’s testing ground.



The administrative challenge

By engaging with and learning from people like Lisa, I have begun to understand the problems frontier administrators face — the same problems countless others will face if we don’t address the administrative burden early on. Here are a few of the top headaches being rolled out in the name of value:

Selecting metrics

For ACOs, 33 metrics are tracked today. Inevitably, these will expand and change as accountable care evolves. There are also countless other systems of metrics encouraged by other incentive programs: the Physician Quality Reporting System measures, Meaningful Use metrics, Agency for Healthcare Research and Quality Indicators, the Consumer Assessment of Healthcare Providers and Systems for patient experience metrics, indicators for each specialty (Stroke and Stroke Rehabilitation Physician Performance Measurement Set, Endoscopy and Polyp Surveillance Physician Performance Measurement Set, and the Heart Failure Performance Measurement Set, to name a few). The document outlining protocols for the Physician Quality Reporting System is 18 pages long, with a mouthful of a title to match: “The 2015 Physician Quality Reporting System (PQRS) Measure-Applicability Validation (MAV) Process for Claims-Based Reporting of Individual Measures.” Got that? A new piece of legislation that passed the House of Representatives last week — the “doc fix” bill — is about to revamp many of these requirements once again.

Collecting data

Lisa had to fumble through different electronic systems and paper charts to extract the relevant data for each patient in her panel at dozens of different clinics. In many cases, it was clear that care had been provided (e.g., an unstable patient had been upgraded from a cane to a walker), but the documentation wasn’t there: to fulfill the “Screening for Future Fall Risk” metric, documentation must state whether the patient had no falls, one fall without major injury, two or more falls, or any fall with major injury. Therefore, even though care was provided to prevent future falls, the documentation did not meet the CMS requirement and no credit was given.
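As a rough illustration (not Lisa's actual process), the sketch below captures why "care was provided" is not enough: the note must state one of the fall-status categories the measure specifies. The category list follows the post; the function and the example notes are hypothetical:

```python
# Simplified illustration of the documentation gap described above.
# The required categories come from the post; the matching logic and notes are invented.

REQUIRED_CATEGORIES = [
    "no falls",
    "one fall without major injury",
    "two or more falls",
    "any fall with major injury",
]

def meets_fall_risk_documentation(note: str) -> bool:
    """Return True only if the note states one of the required fall-status categories."""
    text = note.lower()
    return any(category in text for category in REQUIRED_CATEGORIES)

# Care was clearly provided, but the metric is not credited:
print(meets_fall_risk_documentation(
    "Unstable gait; upgraded patient from cane to walker."))  # False

# The same care, documented the way the measure requires:
print(meets_fall_risk_documentation(
    "One fall without major injury this year; upgraded from cane to walker."))  # True
```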

For the next reporting year, Lisa is designing her own reporting mechanisms for clinics and doctors. She says that her first reporting experience “was invaluable in learning ways to improve the reporting for year 2015 and beyond,” and she is putting processes in place to facilitate reporting next year. But each clinic is different: some need a page at the front of their paper chart with check boxes, and some have templates in their electronic health records. Her new processes may improve the situation, but additional tracking could also cut into time doctors spend with patients and add to the squeeze they already feel.

Integrating data

Lisa integrated all the data from each clinic manually, and this is a problem for small institutions that are trying to communicate and coordinate with each other. Right now it takes a long time and does not scale. Even at larger institutions with leading electronic health record systems, the data is locked away in proprietary databases, often in incompatible formats. Clinical data is rarely integrated with financial and patient-reported data in the way required to tie outcomes and claims to reimbursements in a value-based model.

Reporting

After all of her data collection, Lisa still had to submit her data to a third party to produce reports, and she will wait many months for the results. The CMS websites are comically complex; the instruction manual for using the CMS metric reporting interface is 127 pages long.


Putting patients at the center

If these problems aren’t addressed, we’re in for a long and painful healthcare reform. Administrative costs will continue to rise, and we will produce another generation of frustrated physicians and administrators. Moreover, value-based care could be deemed a failure not because it’s a bad idea but because of poor implementation. Instead of putting patients at the center of care, it could breed more bureaucracy and force doctors to spend more time reporting on metrics and less time with patients.

We can address these issues and we must — to give value-based care a chance at moving the US toward more patient-centered, less exorbitant healthcare.

 

May 23, 2015 | Posted in: health care

[News release] Credibility of Evidence: A Reconsideration of the Logic and Strength of Our Healthcare Decisions

From the 22 May 2015 post at The Health Care Blog

A few days ago, we wrote an editorial for U.S. News & World Report on the scant or dubious evidence used to support some healthcare policies (the editorial is reproduced in full below). In that case, we focused on studies and CMS statements about a select group of Accountable Care Organizations and their cost savings. Our larger point, however, is about the need to reconsider the evidence we use for all healthcare-related decisions and policies. We argue that an understanding of research design and the realities of measurement in complex settings should make us both skeptical and humbled. Let’s focus on two consistent distortions.


Evidence-based Medicine (EBM). Few are opposed to evidence-based medicine. What’s the alternative? Ignorance-based medicine? Hunches? However, the real-world applicability of evidence-based medicine is frequently overstated. Our ideal research model is the randomized controlled trial, where studies are conducted with carefully selected samples of patients to observe the effects of the medicine or treatment without additional interference from other conditions. Unfortunately, this model differs from actual medical practice because hospitals and doctors’ waiting rooms are full of elderly patients suffering from several co-morbidities and taking about 12 to 14 medications (some unknown to us). It is often a great leap to apply findings from a study under “ideal conditions” to the fragile patient. So wise physicians balance the “scientific findings” with the several vulnerabilities and other factors of real patients. Clinicians are obliged to constantly deal with these messy tradeoffs, and the utility of evidence-based findings is limited by the complex challenges of sick patients, multiple medications, and massive unknowns. This mix of research with the messy reality of medical and hospital practice means that evidence, even if available, is often not fully applicable.

Relative vs. Absolute Drug Efficacy:

Let’s talk a tiny bit about arithmetic. Say we have a medication (called “X”) that works satisfactorily for 16 out of a hundred cases, i.e., 16% of the time. Not great, but not atypical of many medications. Say then that another drug company has another medication (called “Newbe”) that works satisfactorily 19% of the time. Not a dramatic improvement, but a tad more helpful (ignoring how well it works, how much it costs, and whether there are worse side effects). But what does the advertisement for drug “Newbe” say? That “Newbe” is almost 20% better than drug “X.” Honest. And it’s not a total lie. The three-percentage-point difference between 16% and 19%, expressed relative to the 16% baseline, is an 18.75% improvement, close enough to 20% to make the claim legit. Now, if “Newbe” were advertised as 3 percentage points better (but a lot more expensive), sales would probably not skyrocket. But at close to 20% better, who could resist?
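The arithmetic behind the claim is easy to verify; here is a minimal sketch using the two response rates from the example:

```python
# Absolute vs. relative difference for the two hypothetical drugs in the example.
response_x = 0.16      # drug "X" works in 16 of 100 cases
response_newbe = 0.19  # drug "Newbe" works in 19 of 100 cases

absolute_gain = response_newbe - response_x   # 0.03 -> 3 percentage points
relative_gain = absolute_gain / response_x    # 0.1875 -> "almost 20% better"

print(f"Absolute improvement: {absolute_gain * 100:.1f} percentage points")
print(f"Relative improvement: {relative_gain:.2%}")
```

The advertisement quotes the 18.75% relative figure rather than the 3-point absolute one, which is what makes a modest gain sound dramatic.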

Policy: So what does this have to do with healthcare policy? We also want evidence of efficacy for healthcare policies, but it turns out that evaluating these interventions and policies well is often harder than studying drugs. Interventions and policies are introduced into messy, pluralistic systems, with imprecise measures of quality and costs, with sick and not-so-sick patients, with differing resources and populations, with a range of payment systems, and so on. Sometimes randomized controlled trials are impossible. Other times they are possible but difficult to carry out. Nevertheless, we argue they are usually worth the effort. Considering the billions or trillions of dollars involved in some policies (e.g., Medicare changes, insurance rules), the cost is comparatively trivial.

But there’s another question: What if a decent research design is used to measure the effects of a large policy in a select population, but all you get is a tiny “effect”? What do we know? What should policymakers do? Here’s what we wrote in our recent editorial in U.S. News & World Report….

 

May 23, 2015 | Posted in: health care

[News release] Exposure of US population to extreme heat could quadruple by mid-century

From the 18 May 2015 EurekAlert! news release

Interaction of warming climate with a growing, shifting population could subject more people to sweltering conditions

National Center for Atmospheric Research / University Corporation for Atmospheric Research

Image: This graphic illustrates the expected increase in average annual person-days of exposure to extreme heat for each US Census division when comparing the period 1971-2000 to the period 2041-2070. Person-days… (Credit: ©UCAR)

BOULDER – U.S. residents’ exposure to extreme heat could increase four- to six-fold by mid-century, due to both a warming climate and a population that’s growing especially fast in the hottest regions of the country, according to new research.

The study, by researchers at the National Center for Atmospheric Research (NCAR) and the City University of New York (CUNY), highlights the importance of considering societal changes when trying to determine future climate impacts.

“Both population change and climate change matter,” said NCAR scientist Brian O’Neill, one of the study’s co-authors. “If you want to know how heat waves will affect health in the future, you have to consider both.”

Extreme heat kills more people in the United States than any other weather-related event, and scientists generally expect the number of deadly heat waves to increase as the climate warms. The new study, published May 18 in the journal Nature Climate Change, finds that the overall exposure of Americans to these future heat waves would be vastly underestimated if the role of population changes were ignored.

The total number of people exposed to extreme heat is expected to increase the most in cities across the country’s southern reaches, including Atlanta, Charlotte, Dallas, Houston, Oklahoma City, Phoenix, Tampa, and San Antonio.

The average annual exposure to extreme heat in the United States during the 2041-2070 study period is expected to be between 10 and 14 billion person-days, compared with an annual average of 2.3 billion person-days between 1971 and 2000.

Of that increase, roughly a third is due solely to the warming climate (the increase in exposure to extreme heat that would be expected even if the population remained unchanged). Another third is due solely to population change (the increase in exposure that would be expected if the climate remained unchanged but the population continued to grow and people continued to move to warmer places). The final third is due to the interaction between the two (the increase in exposure expected because the population is growing fastest in places that are also getting hotter).
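As a reading aid rather than the study's own calculation, the sketch below shows how such a three-way split can be computed when exposure is modeled as population multiplied by days of extreme heat per person. The population and heat-day values are invented round numbers chosen only to land near the person-day totals quoted above, so the shares they produce will not match the study's roughly equal thirds:

```python
import math

# Toy decomposition of the growth in person-days of extreme-heat exposure into
# a climate-only term, a population-only term, and their interaction.
# Exposure is modeled as (population) x (average days of extreme heat per person).
# The values are illustrative, not figures from the study.

pop_base, days_base = 250e6, 9.2        # ~2.3 billion person-days, like the 1971-2000 baseline
pop_future, days_future = 400e6, 27.5   # ~11 billion person-days, inside the projected range

exposure_base = pop_base * days_base
exposure_future = pop_future * days_future
total_increase = exposure_future - exposure_base

climate_only = pop_base * (days_future - days_base)           # hotter climate, same people
population_only = (pop_future - pop_base) * days_base         # more people, same climate
interaction = (pop_future - pop_base) * (days_future - days_base)  # growth where it is also getting hotter

# The three terms sum exactly to the total increase.
assert math.isclose(total_increase, climate_only + population_only + interaction)

for name, value in [("climate only", climate_only),
                    ("population only", population_only),
                    ("interaction", interaction)]:
    print(f"{name:16s}: {value / 1e9:5.2f} billion person-days "
          f"({value / total_increase:.0%} of the increase)")
```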

“We asked, ‘Where are the people moving? Where are the climate hot spots? How do those two things interact?'” said NCAR scientist Linda Mearns, also a study co-author. “When we looked at the country as a whole, we found that each factor had relatively equal effect.”

At a regional scale, the picture is different. In some areas of the country, climate change packs a bigger punch than population growth and vice versa.

For example, in the U.S. Mountain region–defined by the Census Bureau as the area stretching from Montana and Idaho south to Arizona and New Mexico–the impact of a growing population significantly outstrips the impact of a warming climate. But the opposite is true in the South Atlantic region, which encompasses the area from West Virginia and Maryland south through Florida.

…..

Exposure vs. vulnerability

Regardless of the relative role that population or climate plays, some increase in total exposure to extreme heat is expected in every region of the continental United States. Even so, the study authors caution that exposure is not necessarily the same thing as vulnerability.

“Our study does not say how vulnerable or not people might be in the future,” O’Neill said. “We show that heat exposure will go up, but we don’t know how many of the people exposed will or won’t have air conditioners or easy access to public health centers, for example.”

May 23, 2015 | Posted in: Public Health

[News release] Plant chemical could prevent tooth decay

From a May 2015 University of Edinburgh news release

Oral care products containing a natural chemical that stops bacteria harming teeth could help fight decay, research shows.

The plant natural product acts against harmful mouth bacteria and could improve oral health by helping to prevent the build-up of plaque.

The compound – known as trans-chalcone – is related to chemicals found in liquorice root.

Oral bacteria

This exciting discovery highlights the potential of this class of natural products in food and healthcare technologies.

The University study shows that it blocks the action of a key enzyme that allows the bacteria to thrive in oral cavities.

The bacteria – Streptococcus mutans – metabolise sugars from food and drink, which produces a mild acid and leads to the formation of plaque.

Without good dental hygiene, the combination of plaque and mouth acid can lead to tooth decay.

Preventing biofilms

Researchers found that blocking the activity of the enzyme prevents bacteria forming a protective biological layer – known as a biofilm – around themselves.

Plaque is formed when bacteria attach themselves to teeth and construct biofilms.

Preventing the assembly of these protective layers would help stop bacteria forming plaque.

Oral care products that contain similar natural compounds could help people improve their dental hygiene.

Blocking enzyme function

The study is the first to show how trans-chalcone prevents bacteria forming biofilms.

The team worked out the 3D structure of the enzyme – called Sortase A – which allows the bacteria to make biofilms.

By doing this, researchers were able to identify how trans-chalcone prevents the enzyme from functioning.

The study, published in the journal Chemical Communications, was supported by Wm. Wrigley Jr. Company.

We were delighted to observe that trans-chalcone inhibited Sortase A in a test tube and stopped Streptococcus mutans biofilm formation. We are expanding our study to include similar natural products and investigate if they can be incorporated into consumer products.

Dr Dominic Campopiano

School of Chemistry

May 23, 2015 | Posted in: Medical and Health Research News

Seven projects to make progress on ethics and global food security in five years | EurekAlert! Science News

Seven projects to make progress on ethics and global food security in five years | EurekAlert! Science News. (21 May 2015)


Making yet another commitment to go meatless 2 days/week. Also again thinking about a year-round smallish greenhouse.
Like the idea of the farmers/agricultural sector being redefined as service industries – accent on social responsibility.

PBS had an eye-opener on senior hunger, including in Naples, Florida. Some of those affected were in gated communities – it seems unexpected events (such as major medical bills) made it impossible to afford the basics, including nutritious food.

Here’s the link to the segment –>    http://video.pbs.org/viralplayer/2365496095

May 23, 2015 | Posted in: Nutrition

   
