Health care that costs more than it needs to is not just an annoyance; it’s a big factor in income inequality in the United States. The financial, physical and emotional burdens of disease are major drivers of poverty. At the same time, the high cost of health care, even after the Affordable Care Act, means that many people don’t access it when they need it, and this in turn deprives large swathes of the population of their true economic potential as entrepreneurs, workers and consumers. People who are burdened by disease and mental illness don’t start businesses; don’t show up for work; and don’t spend as much money on cars, smartphones and cool apartments. Unnecessary sickness is a burden to the whole economy.
How did we get this way? What was the mechanism that differentiated U.S. health care from all other advanced countries? The usual suspects (such as “We have the most sophisticated research and teaching hospitals,” or “It’s the for-profit health insurers” or “Doctors make too much”) all fail when we compare the United States with other sophisticated national systems such as those in Germany and France. Other countries have all of these factors in varying amounts — private health insurers, world-class research, well-paid physicians — and cost a lot, but still spend a far smaller chunk of their economy on health care. Blame has been leveled in every direction, but in reality no single part of health care has been the driver. The whole system has become drastically more expensive over the last three decades.
What’s the Mechanism?
Since the difference between the United States and other countries is so large and obvious, there should be some way we can look at health care spending that would make that mechanism jump out at us. And there is a way.
That first big leap is between 1982 and 1983. What was different in 1983 that was not there in 1982? DRGs, diagnosis-related groups — the first attempt by the government to control health care costs by attaching a code to each item, each type of case, each test or procedure, and assigning a price it would pay in each of the hundreds of markets across the country. The rises continue across subsequent years as versions of this code-based reimbursement system expand it from Medicare and Medicaid to private payers, from inpatient to ambulatory care, from hospitals to physician groups and clinics, to devices and supplies, eventually becoming the default system for paying for nearly all of U.S. health care: code-driven fee-for-service reimbursements.
Cost Control Drives Costs Up?
How can a cost control scheme drive costs up? In a number of ways: In an attempt to control the costs of the system, the DRG rubric controlled the costs of units, from individual items like an aspirin or an arm sling to the most comprehensive items such as an operation or procedure. The system did not pay for an entire clinical case across the continuum of care from diagnosis through rehab; or for an entire patient per year on a capitated basis, which would capture the economic advantages of prevention; or for an entire population. While it is more cost-effective (as well as better medicine) to provide a diabetes patient with medical management, in-home nursing visits and nutritional counseling rather than, say, waiting until the patient needs an amputation, the coding system actually punished that efficiency and effectiveness. Under this system, we got paid for our inefficiencies, and even for our mistakes: Do-overs would often drop far more to the bottom line than the original procedure did.
The system punished, rather than rewarded, spending more time with patients, trying to help patients before their problems became acute, or maintaining a long-term, trusted relationship with patients. Under a code-driven fee-for-service system, getting serious about prevention and population health management would be a broad road to bankruptcy.
If extra items were deemed necessary (an extra test or scan, say), there were codes for that, and reimbursements awaiting. In so doing, the system rewarded doing more (“volume”) rather than whatever would be the best, most appropriate, most efficient treatment path (“value”). It provided a written, detailed catalog of reimbursements which rewarded diagnoses of greater complexity, rewarded new techniques and technologies with new and usually higher reimbursements, and especially rewarded systems that invested in a greater capability to navigate the coding system. At the same time, the reimbursements were constantly open to pressure from the industry. Each part of the industry, each region, each specialty, each part of the device industry, became fiercely focused on keeping those reimbursements up, and getting new codes for more costly procedures.
The business and strategic side of health care became a matter of making money by farming the coding system. Do more of what gets better reimbursement, less of what does not. Make sure every item gets a code and gets charged for. The codes became a manual for success, a handbook for empire.
The Smoking Gun
The smoking gun is right there in the chart, at the big split between the trajectories of the United States and other countries. And today, at this moment, the code-based fee-for-service payment system is still by far the basis of the majority of all revenue streams across health care.
The unifying factor between multiple new strategies unfolding in health care right now, including patient-centered medical homes, pay for performance, bundled prices, reference prices, accountable care organizations, direct pay primary care and others, is to find some way around the strict code-based fee-for-service system, either by avoiding it entirely or by adding epicycles and feedback loops to it to counter its most deleterious effects.
There is no perfect way to pay for health care. All payment methods have their drawbacks and unintended consequences. But the code-based fee-for-service system got us here, and any path out of the cost mess we are in has to get us off that escalator one way or another.
I believe the concept of value-based care is good for healthcare. VBC encourages providers to make changes that put the patient at the center of care, so that different services can be provided across providers in a collaborative way. If all went according to the VBC vision, there would be fewer redundant tests, more emphasis on preventative care, and an effort to keep high-risk patients out of the emergency room. It’s also better for costs, something we desperately need in the US, where healthcare spending per capita is more than twice the OECD average.
But Lisa’s story, at the leading edge of the value-based experiment, is not good at all. ACOs and most other value-based models are new, constantly changing, and unproven. ACOs report on 33 metrics that are supposed to represent the quality of care provided by their networks of providers. The set is still extremely limited in scope, yet any more metrics would have made Lisa’s job impossible. So far, few ACOs have reported any savings. Worse: the metrics are unproven. What if they overemphasize standardized process over patient outcomes? And what if efficiency measures result in neglectful and impersonal care? A lot is riding on Lisa’s testing ground.
The administrative challenge
By engaging with and learning from people like Lisa, I have begun to understand the problems frontier administrators face — the same problems countless others will face if we don’t address the administrative burden early on. Here are a few of the top headaches being rolled out in the name of value:
For ACOs, 33 metrics are tracked today. Inevitably, these will expand and change as accountable care evolves. There are also countless other systems of metrics encouraged by other incentive programs: the Physician Quality Reporting System measures, Meaningful Use metrics, Agency for Healthcare Research and Quality Indicators, the Consumer Assessment of Healthcare Providers and Systems for patient experience metrics, indicators for each specialty (Stroke and Stroke Rehabilitation Physician Performance Measurement Set, Endoscopy and Polyp Surveillance Physician Performance Measurement Set, and the Heart Failure Performance Measurement Set, to name a few). The document outlining protocols for the Physician Quality Reporting System is 18 pages long, with a mouthful of a title to match: “The 2015 Physician Quality Reporting System (PQRS) Measure-Applicability Validation (MAV) Process for Claims-Based Reporting of Individual Measures.” Got that? A new piece of legislation that passed the House of Representatives last week — the “doc fix” bill — is about to revamp many of these requirements once again.
Lisa had to fumble through different electronic systems and paper charts to extract the relevant data for each patient in her panel at dozens of different clinics. In many cases, it was clear that care had been provided (e.g., an unstable patient had been upgraded from a cane to a walker), but the documentation wasn’t there (to fulfill the “Screening for Future Fall Risk” metric, documentation must state whether the patient had no falls, one fall without major injury, two or more falls, or any fall with major injury). Therefore, even though care was provided to prevent future falls, the documentation did not meet the CMS requirement and no credit was given.
For the next reporting year, Lisa is designing her own reporting mechanisms for clinics and doctors. She says that her first reporting experience “was invaluable in learning ways to improve the reporting for year 2015 and beyond,” and she is putting processes in place to facilitate reporting next year. But each clinic is different: some need a page at the front of their paper chart with check boxes, and some have templates in their electronic health records. Her new processes may improve the situation, but additional tracking could also cut into time doctors spend with patients and add to the squeeze they already feel.
Lisa integrated all the data from each clinic manually, and this is a problem for small institutions that are trying to communicate and coordinate with each other. Right now it takes a long time and is not very scalable. Even at larger institutions with leading electronic health record systems, the data is locked away within proprietary databases, often in incompatible formats. Clinical data is rarely integrated with financial and patient-reported data in the way required to tie outcomes and claims to reimbursements in a value-based model.
After all of her data collection, Lisa still had to submit her data to a third party to produce reports, and she will wait many months for the results. The CMS websites are comically complex; the instruction manual for using the CMS metric reporting interface is 127 pages long.
Putting patients at the center
If these problems aren’t addressed, we’re in for a long and painful healthcare reform. Administrative costs will continue to rise, along with another generation of frustrated physicians and admins. Moreover, value-based care could be deemed a failure not because it’s a bad idea but because of poor implementation. Instead of putting patients at the center of care, it could breed more bureaucracy and force doctors to spend more time reporting on metrics and less time with patients.
We can address these issues and we must — to give value-based care a chance at moving the US toward more patient-centered, less exorbitant healthcare.
A few days ago, we wrote an editorial for U.S. News & World Report on the scant or dubious evidence used to support some healthcare policies (the editorial is reproduced in full below). In that case, we focused on studies and CMS statements about a select group of Accountable Care Organizations and their cost savings. Our larger point, however, is about the need to reconsider the evidence we use for all healthcare-related decisions and policies. We argue that an understanding of research design and the realities of measurement in complex settings should make us both skeptical and humble. Let’s focus on two consistent distortions.
Evidence-based Medicine (EBM). Few are opposed to evidence-based medicine. What’s the alternative? Ignorance-based medicine? Hunches? However, the real-world applicability of evidence-based medicine (EBM) is frequently overstated. Our ideal research model is the randomized controlled trial, where studies are conducted with carefully selected samples of patients to observe the effects of the medicine or treatment without additional interference from other conditions. Unfortunately, this model differs from actual medical practice, because hospitals and doctors’ waiting rooms are full of elderly patients suffering from several co-morbidities and taking about 12 to 14 medications (some unknown to us). It is often a great leap to apply findings from a study under “ideal conditions” to the fragile patient. So wise physicians balance the “scientific findings” against the several vulnerabilities and other factors of real patients. Clinicians are obliged to constantly deal with these messy tradeoffs, and the utility of evidence-based findings is mitigated by the complex challenges of sick patients, multiple medications, and massive unknowns. This mix of research with the messy reality of medical and hospital practice means that evidence, even if available, is often not fully applicable.
Relative vs. Absolute Drug Efficacy:
Let’s talk a tiny bit about arithmetic. Say we have a medication (called “X”) that works satisfactorily for 16 out of a hundred cases, i.e., 16% of the time. Not great, but not atypical of many medications. Say then that another drug company has another medication (called “Newbe”) that works satisfactorily 19% of the time. Not a dramatic improvement, but a tad more helpful (ignoring how well it works, how much it costs, and whether there are worse side effects). But what does the advertisement for drug “Newbe” say? That “Newbe” is almost 20% better than drug “X.” Honest. And it’s not a total lie. The three-percentage-point difference between 16% and 19% is an 18.75% relative improvement (3 divided by 16), close enough to 20% to make the claim legit. Now, if “Newbe” were advertised as 3% better (but a lot more expensive) sales would probably not skyrocket. But at close to 20% better, who could resist?
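The arithmetic behind the marketing claim can be made explicit. A minimal sketch, using only the hypothetical response rates from the example above:

```python
# Hypothetical response rates from the example: drug "X" works in 16%
# of cases, drug "Newbe" in 19%.
x_rate = 0.16
newbe_rate = 0.19

# Absolute improvement: 3 percentage points.
absolute_improvement = newbe_rate - x_rate

# Relative improvement: the figure an advertisement would quote.
relative_improvement = absolute_improvement / x_rate

print(f"Absolute improvement: {absolute_improvement:.0%}")   # 3%
print(f"Relative improvement: {relative_improvement:.2%}")   # 18.75%
```

The same two numbers support either framing; which one a reader sees depends entirely on whether the difference is divided by the baseline.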
Policy: So what does this have to do with healthcare policy? We also want evidence of efficacy for healthcare policies, but it turns out that evaluating these interventions and policies is often harder to do well than studying drugs. Interventions and policies are introduced into messy, pluralistic systems, with imprecise measures of quality and costs, with sick and not-so-sick patients, with differing resources and populations, with a range of payment systems, and so on. Sometimes randomized controlled trials are impossible. But sometimes they are possible, just difficult to carry out. Nevertheless, we argue they are usually worth the effort. Considering the billions or trillions of dollars involved in some policies (e.g., Medicare changes, insurance rules), the cost is comparatively trivial.
But there’s another question: What if a decent research design is used to measure the effects of a large policy in a select population, but all you get is a tiny “effect”? What do we know? What should policymakers do? Here’s what we wrote in our recent editorial in U.S. News & World Report…
Interaction of warming climate with a growing, shifting population could subject more people to sweltering conditions
NATIONAL CENTER FOR ATMOSPHERIC RESEARCH/UNIVERSITY CORPORATION FOR ATMOSPHERIC RESEARCH
BOULDER – U.S. residents’ exposure to extreme heat could increase four- to six-fold by mid-century, due to both a warming climate and a population that’s growing especially fast in the hottest regions of the country, according to new research.
The study, by researchers at the National Center for Atmospheric Research (NCAR) and the City University of New York (CUNY), highlights the importance of considering societal changes when trying to determine future climate impacts.
“Both population change and climate change matter,” said NCAR scientist Brian O’Neill, one of the study’s co-authors. “If you want to know how heat waves will affect health in the future, you have to consider both.”
Extreme heat kills more people in the United States than any other weather-related event, and scientists generally expect the number of deadly heat waves to increase as the climate warms. The new study, published May 18 in the journal Nature Climate Change, finds that the overall exposure of Americans to these future heat waves would be vastly underestimated if the role of population changes were ignored.
The total number of people exposed to extreme heat is expected to increase the most in cities across the country’s southern reaches, including Atlanta, Charlotte, Dallas, Houston, Oklahoma City, Phoenix, Tampa, and San Antonio.
The average annual exposure to extreme heat in the United States during the study period is expected to be between 10 and 14 billion person-days, compared to an annual average of 2.3 billion person-days between 1971 and 2000.
Of that increase, roughly a third is due solely to the warming climate (the increase in exposure to extreme heat that would be expected even if the population remained unchanged). Another third is due solely to population change (the increase in exposure that would be expected if climate remained unchanged but the population continued to grow and people continued to move to warmer places). The final third is due to the interaction between the two (the increase in exposure expected because the population is growing fastest in places that are also getting hotter).
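That three-way split falls out of simple arithmetic: exposure in person-days is population times extreme-heat days, so the change in exposure decomposes exactly into a climate-only term, a population-only term, and an interaction term. A minimal sketch with made-up illustrative numbers (not the study’s actual data):

```python
# Exposure in person-days = population * days of extreme heat per person.
# The numbers below are illustrative only, not the study's data.
pop_baseline, pop_future = 100.0, 150.0    # population (millions)
heat_baseline, heat_future = 23.0, 35.0    # extreme-heat days per year

delta_pop = pop_future - pop_baseline
delta_heat = heat_future - heat_baseline

# Exact decomposition of the change in exposure:
climate_only = pop_baseline * delta_heat     # warming, population held fixed
population_only = heat_baseline * delta_pop  # growth, climate held fixed
interaction = delta_pop * delta_heat         # growth concentrated in hotter places

total_change = pop_future * heat_future - pop_baseline * heat_baseline
assert abs(total_change - (climate_only + population_only + interaction)) < 1e-9

print(climate_only, population_only, interaction)  # 1200.0 1150.0 600.0
```

The identity holds for any inputs; the study’s finding is that, nationally, the three terms happen to come out roughly equal.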
“We asked, ‘Where are the people moving? Where are the climate hot spots? How do those two things interact?'” said NCAR scientist Linda Mearns, also a study co-author. “When we looked at the country as a whole, we found that each factor had relatively equal effect.”
At a regional scale, the picture is different. In some areas of the country, climate change packs a bigger punch than population growth and vice versa.
For example, in the U.S. Mountain region–defined by the Census Bureau as the area stretching from Montana and Idaho south to Arizona and New Mexico–the impact of a growing population significantly outstrips the impact of a warming climate. But the opposite is true in the South Atlantic region, which encompasses the area from West Virginia and Maryland south through Florida.
Exposure vs. vulnerability
Regardless of the relative role that population or climate plays, some increase in total exposure to extreme heat is expected in every region of the continental United States. Even so, the study authors caution that exposure is not necessarily the same thing as vulnerability.
“Our study does not say how vulnerable or not people might be in the future,” O’Neill said. “We show that heat exposure will go up, but we don’t know how many of the people exposed will or won’t have air conditioners or easy access to public health centers, for example.”
Oral care products containing a natural chemical that stops bacteria harming teeth could help fight decay, research shows.
The plant natural product acts against harmful mouth bacteria and could improve oral health by helping to prevent the build-up of plaque.
The compound – known as trans-chalcone – is related to chemicals found in liquorice root.
This exciting discovery highlights the potential of this class of natural products in food and healthcare technologies.
The University study shows that it blocks the action of a key enzyme that allows the bacteria to thrive in oral cavities.
The bacteria – Streptococcus mutans – metabolise sugars from food and drink, which produces a mild acid and leads to the formation of plaque.
Without good dental hygiene, the combination of plaque and mouth acid can lead to tooth decay.
Researchers found that blocking the activity of the enzyme prevents bacteria forming a protective biological layer – known as a biofilm – around themselves.
Plaque is formed when bacteria attach themselves to teeth and construct biofilms.
Preventing the assembly of these protective layers would help stop bacteria forming plaque.
Oral care products that contain similar natural compounds could help people improve their dental hygiene.
Blocking enzyme function
The study is the first to show how trans-chalcone prevents bacteria forming biofilms.
The team worked out the 3D structure of the enzyme – called Sortase A – which allows the bacteria to make biofilms.
By doing this, researchers were able to identify how trans-chalcone prevents the enzyme from functioning.
The study, published in the journal Chemical Communications, was supported by Wm. Wrigley Jr. Company.
We were delighted to observe that trans-chalcone inhibited Sortase A in a test tube and stopped Streptococcus mutans biofilm formation. We are expanding our study to include similar natural products and investigate if they can be incorporated into consumer products.
Making yet another commitment to go meatless 2 days/week. Also again thinking about a year round smallish green house.
Like the idea of farmers/agricultural sector as being redefined as service industries – accent on social responsibility.
But when children’s temper tantrums or mood swings are beyond the norm, or they are overwhelmed by homework organization, do parents speak up?
Today’s University of Michigan C.S. Mott Children’s Hospital National Poll on Children’s Health finds that many parents of children age 5-17 wouldn’t discuss behavioral or emotional issues that could be signs of potential health problems with their doctors. While more than 60 percent of parents definitely would talk to the doctor if their child was extremely sad for more than a month, only half would discuss temper tantrums that seemed worse than peers or if their child seemed more worried or anxious than normal. Just 37 percent would tell the doctor if their child had trouble organizing homework.
The most common reason for not sharing these details with their children’s doctors? Nearly half of parents believed that these simply were not medical problems. Another 40 percent of parents say they would rather handle it themselves and about 30 percent would rather speak to someone other than a doctor.
“Behavioral health and emotional health are closely tied to a child’s physical health, well-being and development, but our findings suggest that we are often missing the boat in catching issues early,” says Sarah J. Clark, M.P.H., associate director of the National Poll on Children’s Health and associate research scientist in the University of Michigan Department of Pediatrics.
Study finds doctors must make great effort to provide patients more useful information to help them make medical choices
Patients faced with a choice of surgical options want to engage their physicians and take a more active role in decision-making, according to a study (abstract 567) released at Digestive Disease Week® (DDW) 2015. Further, those physicians must provide better support tools to help patients participate in the decision-making process. The study found that patients consult multiple sources (Internet, family, friends, doctors, etc.) and say that while doctors provide the most believable information, it was also the least helpful.
New research finds that misdiagnoses lead to increased risk of incorrect antibiotic use, threatening patient outcomes and antimicrobial efficacy, while increasing healthcare costs. The study was published online today in Infection Control & Hospital Epidemiology, the journal of the Society for Healthcare Epidemiology of America.
“Antibiotic therapies are used for approximately 56 percent of inpatients in U.S. hospitals, but are found to be inappropriate in nearly half of these cases, and many of these failures are connected with inaccurate diagnoses,” said Greg Filice, MD, lead author of the study. “The findings suggest that antimicrobial stewardship programs could be more impactful if they were designed to help providers make accurate initial diagnoses and to know when antibiotics can be safely withheld.”
Additionally, researchers found that overall, only 58 percent of patients received a correct diagnosis, indicating that diagnostic errors were more prevalent in this study than in previous studies unrelated to antimicrobial use. The most common incorrect diagnoses identified by researchers were pneumonia, cystitis, urinary tract infections, kidney infections and urosepsis.
Contributing factors which the researchers said may lead to inaccurate diagnosis and inappropriate antibiotic use include:
Healthcare workers (HCWs) relying on intuitive processes, instead of analytical processes which are more reliable, safe and effective.
HCWs experiencing fatigue, sleep deprivation and/or cognitive overload more prevalent in inpatient settings.
HCWs receiving patients with a previous diagnosis from another provider.
Lack of clinical experience and minimal personal experience with adverse drug effects.
“People have tried to print graphene before,” Shah said. “But it’s been a mostly polymer composite with graphene making up less than 20 percent of the volume.”
With a volume so meager, those inks cannot maintain many of graphene’s celebrated properties. But adding higher volumes of graphene flakes to these ink systems typically results in printed structures too brittle and fragile to manipulate. Shah’s ink is the best of both worlds. At 60-70 percent graphene, it preserves the material’s unique properties, including its electrical conductivity, and it is flexible and robust enough to print sturdy macroscopic structures. The ink’s secret lies in its formulation: the graphene flakes are mixed with a biocompatible elastomer and quickly evaporating solvents.
“It’s a liquid ink,” Shah explained. “After the ink is extruded, one of the solvents in the system evaporates right away, causing the structure to solidify nearly instantly. The presence of the other solvents and the interaction with the specific polymer binder chosen also has a significant contribution to its resulting flexibility and properties. Because it holds its shape, we are able to build larger, well-defined objects.”
Supported by a Google Gift and a McCormick Research Catalyst Award, the research is described in the paper “Three-dimensional printing of high-content graphene scaffolds for electronic and biomedical applications,” published in the April 2015 issue of ACS Nano. Jakus is the paper’s first author. Mark Hersam, the Bette and Neison Harris Chair in Teaching Excellence, professor of materials science and engineering at McCormick, served as coauthor.
The 3-D printed graphene scaffold appeared on the cover of ACS Nano.
An expert in biomaterials, Shah said 3-D printed graphene scaffolds could play a role in tissue engineering and regenerative medicine as well as in electronic devices. Her team populated one of the scaffolds with stem cells to surprising results. Not only did the cells survive, they divided, proliferated, and morphed into neuron-like cells.
“That’s without any additional growth factors or signaling that people usually have to use to induce differentiation into neuron-like cells,” Shah said. “If we could just use a material without needing to incorporate other more expensive or complex agents, that would be ideal.”
The printed graphene structure is also flexible and strong enough to be easily sutured to existing tissues, so it could be used for biodegradable sensors and medical implants. Shah said the biocompatible elastomer and graphene’s electrical conductivity most likely contributed to the scaffold’s biological success.
“Cells conduct electricity inherently — especially neurons,” Shah said. “So if they’re on a substrate that can help conduct that signal, they’re able to communicate over wider distances.”
Law would give family members and law enforcement tool to temporarily remove guns from someone believed dangerous
Gun violence restraining orders (GVROs) are a promising strategy for reducing firearm homicide and suicide in the United States, and should be considered by states seeking to address gun violence, researchers from the Johns Hopkins Center for Gun Policy and Research at the Johns Hopkins Bloomberg School of Public Health and the University of California, Davis, argue in a new report.
The article is being published online in Behavioral Sciences and the Law on May 20.
GVROs allow family members and intimate partners who believe a relative’s dangerous behavior may lead to violence to request an order from a civil court authorizing law enforcement to remove any guns in the individual’s possession, and to prohibit new gun purchases for the duration of the order. Three states have laws that authorize law enforcement to remove guns from someone identified as dangerous: Indiana, Connecticut and Texas. In 2014, California became the first state in the nation to allow family members and intimate partners to directly petition a judge to temporarily remove firearms from a family member if they believe there is a substantial likelihood that the family member is a significant danger to himself or herself or others in the near future. The law, passed by the California legislature, takes effect Jan. 1, 2016.
The use of personalized music playlists with tempo-pace synchronization increases adherence to cardiac rehab by almost 70 per cent, according to a study published in Sports Medicine – Open.
“Cardiac rehab has been proven to improve long-term survival for someone who’s had a heart event by 20 per cent,” said Dr. David Alter, Senior Scientist, Toronto Rehab, University Health Network, and Institute for Clinical Evaluative Sciences. “Our challenge is there is a high drop-out rate for these programs and suboptimal adherence to the self-management of physical activity.”
In Dr. Alter’s study, each research subject’s personalized playlist was the music genre they enjoyed with tempos that matched their pre-determined walking or running pace.
“The music tempo-pace synchronization helps cue the person to take their next step or stride and helps regulate, maintain and reinforce their prescribed exercise pace,” explained Dr. Alter, who is also Research Chair in Cardiovascular Prevention and Metabolic Rehabilitation at Toronto Rehab, UHN.
New research shows that infections can impair your cognitive ability measured on an IQ scale. The study is the largest of its kind to date, and it shows a clear correlation between infection levels and impaired cognition.
“Infections can affect the brain directly, but also through peripheral inflammation, which affects the brain and our mental capacity. Infections have previously been associated with both depression and schizophrenia, and it has also been proven to affect the cognitive ability of patients suffering from dementia. This is the first major study to suggest that infections can also affect the brain and the cognitive ability in healthy individuals.”
“We can see that the brain is affected by all types of infections. Therefore, it is important that more research is conducted into the mechanisms which lie behind the connection between a person’s immune system and mental health,” says Michael Eriksen Benrós.
He hopes that learning more about this connection will help to prevent the impairment of people’s mental health and improve future treatment.
Along with other functions, mainly in the formation of mother-infant bonding, the rosy glow of the “love hormone” seems to know no bounds – and its potential application for helping to cement and maintain loving relationships is clear. Its effects on facilitating social interaction have made it an appealing possible therapeutic tool in patients who struggle with social situations and communication, including in autism, schizophrenia and mood or anxiety disorders.
Even better, it is very easy to use. All the human studies on it use intranasal sprays to boost oxytocin levels. These sprays are readily available, including through the internet, and appear safe to use, at least in the short term – no one yet knows whether there is any long-term harm.
In the past few years, however, concerns expressed by some researchers have begun to rein in the enthusiasm about the potential applications of oxytocin as a therapeutic tool.
Recent studies are showing that the positive effects can be much weaker – or even detrimental – in those who need it the most. In contrast to its effects on socially competent or secure individuals, oxytocin exposure can reduce cooperativeness and trust in people prone to social anxiety. It also increases the inclination for violence towards intimate partners. Although this is seen only in people who tend to be more aggressive in general, these would be the same people who might have most to gain from such a treatment, were it available.
These apparently paradoxical effects are hard to explain, particularly since the brain mechanisms responsible are still poorly understood. But a new study may help to provide the answer. A team from the University of Birmingham decided to tackle the issue by comparing studies on the effects of oxytocin with those of alcohol and were struck by the incredible similarities between the two compounds.
Alcohol and Oxytocin
Like oxytocin, alcohol can have helpful effects in social situations. It increases generosity, fosters bonding within groups and suppresses the action of neural inhibitions on social behavior, including fear, anxiety and stress.
But, of course, acute alcohol consumption also comes with significant downsides. Aside from the health implications of chronic use, it interferes with recognition of emotional facial expressions, influences moral judgements and increases risk-taking and aggression. And as with oxytocin, the increase in aggression is limited to those who have an existing disposition to it.
The researchers argue that the striking similarities in behavioral outcome tell us something about the biological mechanisms involved. Although oxytocin and alcohol target different brain receptors, activation of these receptors appears to produce analogous physiological effects. Indeed, they also note similarities with how other compounds work, including benzodiazepines, which are commonly used to treat anxiety. Our understanding of how one chemical elicits its effects might thus help us to understand the action of the others.
But, if this new interpretation is correct, it may presage further bad press for the love hormone. It may be that the darkening clouds that threaten to tarnish its reputation are only just beginning to gather. At the very least, it should give us cause for careful evaluation before we rush into using it as a remedy.
…Safer Choice is our label for safer chemical-based products, like all-purpose cleaners, laundry detergents, degreasers, and many others. Each day, consumers, custodians, cleaning staffs, and others use these products, and families, building occupants, and visitors are exposed to them. The Safer Choice program ensures that labeled products—and every ingredient in them—meet the program’s stringent health and environmental criteria—and perform well, too.
So how can you help people make safer choices?
First, look for products with the Safer Choice label in stores this summer. By choosing products with the Safer Choice label, you’re driving the development of greener chemicals and supporting over 500 manufacturers and retailers that participate in our program.
This database links over 14,000 consumer brands to health effects from Material Safety Data Sheets (MSDS) provided by manufacturers and allows scientists and consumers to research products based on chemical ingredients. The database is designed to help answer the following typical questions:
What are the chemical ingredients and their percentage in specific brands?
Which products contain specific chemical ingredients?
Who manufactures a specific brand? How do I contact this manufacturer?
What are the acute and chronic effects of chemical ingredients in a specific brand?
What other information is available about chemicals in the toxicology-related databases of the National Library of Medicine?
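The questions above are, at bottom, lookups against an index of product records. As a toy illustration of the kind of structure that supports them (the brands, manufacturers, ingredients and percentages below are invented, not drawn from the actual database):

```python
# Toy product records keyed by brand name. All names and numbers
# here are invented for illustration only.
products = {
    "SparkleClean All-Purpose": {
        "manufacturer": "Acme Corp",
        "ingredients": {"ethanol": 5.0, "citric acid": 1.0},  # percent
    },
    "FreshWash Detergent": {
        "manufacturer": "Bright Co",
        "ingredients": {"sodium laureth sulfate": 10.0},
    },
}

def brands_containing(ingredient):
    """Which products contain a specific chemical ingredient?"""
    return sorted(name for name, rec in products.items()
                  if ingredient in rec["ingredients"])

def manufacturer_of(brand):
    """Who manufactures a specific brand?"""
    return products[brand]["manufacturer"]
```

A consumer-facing search front end is essentially these two lookups, plus joins out to toxicology records for each ingredient.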
The Fifth Annual Benchmark Study on Privacy and Security of Healthcare Data by the Ponemon Institute, sponsored by ID Experts, reveals a shift in the root cause of data breaches from accidental to intentional. Criminal attacks are up 125% compared to five years ago, replacing lost laptops as the leading threat. The study also found most organizations are unprepared to address new threats and lack adequate resources to protect patient data. Download the study to learn more.
BLOOMINGTON, Ind. — Over nearly 15 years spent studying ticks, Indiana University’s Keith Clay has found southern Indiana to be an oasis free from Lyme disease, the condition most associated with these arachnids that are the second most common parasitic disease vector on Earth.
He has also seen signs that this low-risk environment is changing, both in Indiana and in other regions of the U.S.
A Distinguished Professor in the IU Bloomington College of Arts and Sciences’ Department of Biology, Clay has received more than $2.7 million in grant support for his research on ticks from the National Science Foundation-National Institutes of Health’s Ecology and Evolution of Infectious Diseases Program and others.
Clay’s lab has found relatively few pathogens in southern Indiana ticks that cause common tick-borne diseases compared to the Northeast and states like Wisconsin and Minnesota.
But Lyme disease has been detected just a few hours north of the region around Tippecanoe River State Park and Lake Michigan’s Indiana Dunes, and Clay said the signs are there that new tick species, and possibly the pathogens they carry, are entering the area.
“Just in the past 10 years, we’re seeing things shift considerably,” Clay said. “You used to never see lone star ticks in Indiana; now they’re very common. In 10 years, we’re likely to see the Gulf Coast tick here, too. There are several theories for why this is happening, but the big one is climate change.”
A vector for disease
The conclusions are drawn from years of work spent mapping tick boundaries and disease risks, but the exact cause of the shifting borders is not clear. In addition to changing temperatures, Clay references changes in animal populations, including deer, which provide large, mobile hosts for the parasites.
Exploring a new frontier of cyber-physical systems: The human body
[Series of videos follows, unfortunately unable to insert videos]
Simulation of electrical impulse propagation through the heart during ventricular fibrillation. Colour represents the transmembrane potential – i.e. the voltage across the cell membrane – on the surface of the heart. Yellow indicates largest potential values while dark red represents resting level. This simulation was performed by Pras Pathmanathan and Richard A. Gray at the US Food and Drug Administration (FDA), using the software package Chaste, together with a high-resolution anatomically-detailed computational mesh of the rabbit ventricles, and was run on FDA’s high performance computing resources in the Center for Devices and Radiological Health (CDRH).
Credit: Pras Pathmanathan and Richard A. Gray, US Food and Drug Administration (FDA)
Extreme heat kills more people in the U.S. than any other weather-related event, and scientists generally expect the number of deadly heat waves to increase as the climate warms. According to new research, exposure could increase four- to six-fold by mid-century, due to both a warming climate and a population that’s growing especially fast in the hottest regions of the country. Using a newly developed demographic model, the scientists also studied how the U.S. population is expected to grow and shift regionally during the same time period, assuming current migration trends within the country continue. The study highlights the importance of considering societal changes when trying to determine future climate impacts. The total number of people exposed to extreme heat is expected to increase the most in cities across the country’s southern reaches, including Atlanta, Charlotte, Dallas, Houston, Oklahoma City, Phoenix, Tampa and San Antonio.
Ok, the emphasis is on “may”. “…[L]earning to swim early in life may give kids a head start in developing balance, body awareness and maybe even language and math skills.”
Am blessed to be able to swim at work during lunch. The campus has a gym, with swim privileges at the hotel pool on campus. Maybe the swim is keeping some math skills intact!
From the 20 May 2015 Science News article
Loosely based on something our mother told us, it’s that learning to swim early in life may give kids a head start in developing balance, body awareness and maybe even language and math skills.
Mom may have been right. A multi-year study released in 2012 suggests that kids who take swim lessons early in life appear to hit certain developmental milestones well before their nonswimming peers. In the study, Australian researchers surveyed about 7,000 parents about their children’s development and gave 177 kids aged 3 to 5 years standard motor, language, memory and attention tests. Compared with kids who didn’t spend much time in the water, kids who had taken swim lessons seemed to be more advanced at tasks like running and climbing stairs and standing on their tiptoes or on one leg, along with drawing, handling scissors and building towers out of blocks.
Hitting milestones related to motor skills isn’t so surprising, the authors note, since swimming is a very physical activity. A bit more unexpected, they say, are the swimming kids’ advanced skills in language and math — tasks like counting, naming objects and recognizing words and letters. Kids who swam also seemed to be better at following directions. And, in some areas, kids had proportionally better scores on the development tests relative to how long they had been taking lessons.
The authors admit that they can’t conclusively claim that swimming alone is responsible for the developmental advances because the analysis was based on survey data and limited testing with young children. “Simply, we can say that children who participate in swimming achieve a wide range of milestones … and skill, knowledge and dispositions … earlier than the normal population,” the researchers write.
PLAID (People Living with and Inspired by Diabetes) is an open access, peer-reviewed interdisciplinary research journal.
Via an email from someone who subscribes to this blog. (Thank you for sharing news about this!)
PLAID: People Living with and Inspired by Diabetes. It is kind of unique in that it is trying to bridge the gap between physicians and patients. It is trying to get conversations started as well as provide access to new research in the diabetes community. Here is the link to its website: http://theplaidjournal.com/index.php/CoM
Findings confirm health risks to people living near oil and gas wells
FRISCO — Careful air sampling near active natural gas wells in Carroll County, Ohio showed the widespread presence of toxic air pollution at higher levels than the Environmental Protection Agency considers safe for lifetime exposure, according to scientists from Oregon State University and the University of Cincinnati.
The study reinforces the need for more extensive air quality monitoring in fracking zones around the country, where exposure to the poisonous emissions is likely to lead to increased risk of cancer and respiratory ailments.
“Air pollution from fracking operations may pose an under-recognized health hazard to people living near them,” said the study’s coauthor Kim Anderson, an environmental chemist with OSU’s College of Agricultural Sciences.
Anderson and her colleagues collected air samples during a three-week period last February in a highly fracked area, with more than one active well site per square mile. The study was spurred by local residents who wanted to know more about possible health risks.
The air samplers were placed on the properties of 23 volunteers living or working at sites ranging from right next to a gas well to a little more than three miles away. The samples were sent to Anderson’s lab at OSU, where the analysis showed high levels of PAHs across the study area. Levels were highest closest to the wells and decreased by about 30 percent with distance.
Even the lowest levels — detected on sites more than a mile away from a well — were higher than previous researchers had found in downtown Chicago and near a Belgian oil refinery. They were about 10 times higher than in a rural Michigan area with no natural gas wells.
The scientists said they were able to differentiate between pollution coming directly from the earth and from other sources like wood smoke or auto exhausts, supporting the conclusion that the gas wells were contributing to the higher PAH levels.
The researchers then used a standard calculation to determine the additional cancer risk posed by airborne contaminants over a range of scenarios. For the worst-case scenario (exposure 24 hours a day over 25 years), they found that a person anywhere in the study area would be exposed at a risk level exceeding the threshold of what the EPA deems acceptable.
The highest-risk areas were those nearest the wells, Anderson said. Areas more than a mile away posed about 30 percent less risk.
Anderson cautioned that these numbers are worst-case estimates and can’t predict the risk to any particular individual.
“Actual risk would depend heavily on exposure time, exposure frequency and proximity to a natural gas well,” she said.
“We made these calculations to put our findings in context with other, similar risk assessments and to compare the levels we found with the EPA’s acceptable risk level.”
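The article doesn’t spell out the arithmetic, but an EPA-style inhalation risk screening of the kind described typically averages the exposure concentration over a 70-year lifetime and multiplies by an inhalation unit risk. A minimal sketch, using invented concentration and unit-risk values rather than the study’s data:

```python
def excess_cancer_risk(conc_ug_m3, hours_per_day, days_per_year, years,
                       iur_per_ug_m3):
    """Screening-level excess lifetime cancer risk from inhalation.

    Follows the standard EPA convention of time-averaging the exposure
    concentration (EC) over a 70-year lifetime, then multiplying by an
    inhalation unit risk (IUR). All inputs here are illustrative.
    """
    AT_HOURS = 70 * 365 * 24  # averaging time: 70-year lifetime in hours
    ec = conc_ug_m3 * hours_per_day * days_per_year * years / AT_HOURS
    return ec * iur_per_ug_m3

# Worst-case scenario from the study: exposure 24 h/day over 25 years.
# Concentration (0.01 ug/m3) and unit risk (6.0e-4 per ug/m3) are
# assumed values for illustration, not the study's measurements.
risk = excess_cancer_risk(0.01, 24, 365, 25, 6.0e-4)
```

With these illustrative inputs the result lands above the 1-in-a-million (1e-6) excess risk level often used as the EPA’s screening threshold, which is the kind of comparison the researchers describe making.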
The study has other caveats, Anderson said, the main one being the small number of non-random samples used. In addition, findings aren’t necessarily applicable to other gas-producing areas, because PAH emissions are influenced by extraction techniques and by underlying geology.
The study, which appears in the journal Environmental Science & Technology’s online edition, is part of a larger project co-led by the University of Cincinnati’s Erin Haynes, OSU’s Anderson, her graduate student Blair Paulik and Laurel Kincl, director of OSU’s Environmental Health Science Center.
Patient dumping, or when a hospital discharges a homeless patient to Skid Row or onto the street, has become rare, but still does occur. With so many homeless people who require medical care, hospitals in Los Angeles and across the nation are trying to find ways to help the homeless recuperate after being discharged. There are programs in Los Angeles, but they still are few and far between. Jonathan Lopez, who is a former homeless navigator, has helped many. Many more like him are needed. From my story (Daily News, October, 2013):
The Environmental Working Group (EWG) released its 2015 sunscreen guide on Tuesday, which reviewed more than 1,700 SPF products like sunscreens, lip balms and moisturizers. The researchers discovered that 80% of the products offer “inferior sun protection or contain worrisome ingredients like oxybenzone and vitamin A,” they say. Oxybenzone is a chemical that can disrupt the hormone system, and some evidence suggests—though not definitively—that adding vitamin A to the skin could heighten sun sensitivity.
The report points to Neutrogena as the brand most at fault for promising sun protection without delivering. The EWG says that Neutrogena claims its baby sunscreens provide “special protection from the sun and irritating chemicals” and labels them “hypoallergenic,” yet they contain a preservative called methylisothiazolinone that has been deemed unsafe for use in leave-on products by the European Commission’s Scientific Committee on Consumer Safety. The company also boasts of high SPF levels like SPF 70 or SPF 100+, even though the U.S. Food and Drug Administration (FDA) says there’s only notable protection up to SPF 50, the report adds. Neutrogena did not respond to requests for comment by publication time.
In the new report, EWG also provides a Hall of Shame of products that don’t deliver on their sun protection promises, as well as a database for users to search how protective their particular sun products are—and find one that works.
To stay protected this summer, the researchers suggest, use sunscreens with broad spectrum SPF of 15 or higher, limit time in the sun, wear clothing to cover exposed skin and re-slather your sunscreen every couple hours.
I have followed this narrative for quite some time, albeit inside an industry-contained debate over whether so-called ‘non-profit’ [501(c)3] hospitals or their parent systems (really more aptly characterized as “tax exempt”) actually earn this financial advantage via material ‘returns’ to the communities they serve.
As can be expected, you have the party line of the American Hospital Association (AHA), a trade group of predominantly non-profit members, vs. that of its for-profit brethren, the Federation of American Hospitals (FAH). You can guess which side of the argument each of them favors.
Now, thanks to a recently published landmark study, ‘Integrated Delivery Networks: In Search of Benefits and Market Effects,’ by health care futurist Jeff Goldsmith, PhD, et al., on the 501(c)3 cast of characters in the related but often distinctly different ‘IDN culture,’ we extend that line of inquiry into what has been a somewhat conversational ‘safe…
The US government has finally admitted it has overdosed Americans on fluoride and, for the first time since 1962, is lowering its recommended level of fluoride in drinking water.
The CDC reports that around 40% of Americans have dental fluorosis, a condition referring to changes in the appearance of tooth enamel — from chalky-looking lines and splotches to dark staining and pitting — caused by long-term ingestion of fluoride during the time teeth are forming.
The optimal fluoride level in drinking water to prevent tooth decay should be 0.7 milligrams of fluoride per liter of water (mg/L), the U.S. Department of Health and Human Services (HHS) announced Monday, down from the previously accepted range of 0.7 to 1.2 mg/L.
The HHS has stated that the newly recommended change is because “Americans now have access to more sources of fluoride, including toothpaste and mouth rinses, than they did…
How to Choose A Better Health App (http://www.kevinmd.com/blog/2011/08/choose-health-app.html), by ALEXANDER V. PROKHOROV, MD, PHD, at KevinMD.com on August 8, 2011, contains advice in the following areas:
Set realistic expectations
Avoid apps that promise too much
Research the developers
Choose apps that use techniques you’ve heard of
See what other users say
Test apps before committing
Choosing the right mHealth app can be confusing. Today, we see an array of health and mHealth mobile apps designed for consumers. But are you using them correctly, or are you wasting your precious time and money?
Whether it be for monitoring of exercise, fitness, or weight loss, or for more serious conditions like diabetes, sleep disorders, or shunt malfunction in hydrocephalus, consumers and developers would be wise to better understand how health and mHealth apps can benefit one’s health. The biggest problem I see is how health and mHealth apps are categorized, which then determines how they will be used. So I have written up a few suggestions to better help consumers and developers in selecting their mHealth apps. I have grouped health and mHealth apps into three (3) categories.
mHealth Technology, are we there yet?
First, a little info about me. I am an early designer and pioneer of a 1997 neuromonitoring app, the DiaCeph Test, intended to run as a dedicated PDA…
One possibility is that when I write about chronic illness, I am largely focusing on those conditions that are silent in nature (e.g., hypertension, diabetes, high cholesterol, obesity). We made a decision some years ago to build the case for connected health around the management of these illnesses because:
They are costly. By some estimates these chronic diseases account for 70% of U.S. health care costs.
They have a significant lifestyle component. This backdrop seems an ideal canvas for connected health interventions because they involve motivational psychology, self-tracking and engagement with health messages. These chronic illnesses pose a unique challenge in that the lifestyle choices that accelerate them are for the most part pleasurable (another piece of cheesecake? spending Sunday afternoon on the couch watching football? smoking more cigarettes and drinking more beer?). In contrast, the reward for healthy behavior is abstract and distant (a few more minutes of…
Evidence-based medicine has been called “cookbook medicine” by some of its more vocal critics. This implies that evil faceless organisations like Cochrane aim to turn all healthcare workers into mindless automatons who blindly follow dictums derived solely from scientific evidence. I hope it doesn’t surprise many that this has never been the aim of Cochrane, or EBM in general, nor will it ever be. EBM, or EBP if you prefer the term ‘practice’ to the more vague ‘medicine’, is a belief system that rests on three pillars (cf. five in Islam). The EBM pillars are: 1) best available scientific evidence (i.e. the purview of Cochrane and yours truly), 2) clinical experience and 3) patient preferences and values. So, the main gist is that evidence doesn’t matter – no matter how scientific – if we don’t have a clinician at hand to interpret it for the benefit of a particular patient equipped with a particular set of values. For example, in a situation where two very similar patients have the same condition, one might wish to achieve a speedy return to work whereas the other might rather avoid pain at all costs. The clinician would then use his or her judgment to identify the best course of treatment for both, based on experience and what we science types have to offer. However, let us now leave the two pillars of clinical experience and patient preferences to be explored in future posts, so that we can chew on the first a bit more.
Now, the evidence bit in EBM is often understood to mean the results of systematic reviews (a fancy type of research). Inasmuch as they offer an abstracted truth devoid of context (see my earlier post on mathematical ghosts), they still need to be interpreted for use in particular circumstances. This doesn’t always have to be done for every single patient by every single clinician separately. Think of the usefulness of reinventing the wheel for every drive. Often the thinking behind the interpretation and application of evidence can be written down and made use of by many. On a population level this means drafting guidelines. However, it is important to note that when scientific evidence is freely available, one does not need to wait for formal committees to grow their beards long enough to formulate official guidelines – especially when even supposedly professional guideline developers can do a really poor job (see the previous post by Margot Joosen). In fact, all informed people and communities should participate in making sense of, and advocating for, the use of research to back up health decisions. In the end it affects the quality of care they receive.
Information about the quality and performance of health care facilities can be confusing to consumers. Dozens of government organizations, trade groups and websites rate doctors, hospitals and long-term care facilities on all kinds of scales, from patient satisfaction to medical outcomes.
In 2008, the Centers for Medicare and Medicaid Services (CMS) attempted to simplify some of this data by creating a five-star rating system for nursing homes. The idea was that public reporting would drive improvement in care, helping nursing home residents and their families choose higher quality facilities, in turn encouraging nursing homes to improve quality to retain residents.
This data can be of limited use, however, for people whose decisions are constrained by insurance networks, cost and geography. People who are enrolled in both Medicare and Medicaid, often called “dual eligibles,” are particularly limited in their choices for long-term care. Compared with other Medicare beneficiaries, they are much more likely to have lower incomes, disabilities or cognitive impairment, and to receive low-quality health care in poor neighborhoods.
A new study in the May issue of Health Affairs by public health researchers from the University of Chicago, Harvard, and Penn confirms that despite best intentions, the new rating system exacerbated health disparities between this dual eligible group and non-dual eligible nursing home residents, i.e. those with better financial support. By 2010, two years after the system began, both groups lived in higher quality nursing homes overall, but non-dual eligible residents were more likely to actively choose a higher-rated nursing home. The gap between the two groups also increased: dual eligibles were still more likely to live in a one-star home, and less likely than non-dual eligibles to live in a top-rated home.
New research from the University of Chicago highlights a third component to that cycle: the millions of microbes that live in the intestines. These organisms respond to the same environmental cues as their host organism; their activity and metabolism is intertwined with the sleep/wake cycles and feeding schedules of the animal.
Members of a panel at Health Journalism 2015 on medical device coverage provided a variety of advice for reporters covering any of the implants, instruments and diagnostic tools common to the modern medical machine.
Moderator of the session was Chad Terhune, a Los Angeles Times reporter who recently found himself chasing an outbreak of carbapenem-resistant enterobacteriaceae (CRE) linked to dirty duodenoscopes. Contributing to the discussion were panelists USA Today investigative reporter Peter Eisler and Scott Lucas, associate director of accident and forensic investigation at the ECRI Institute.
A recent CRE outbreak at Ronald Reagan UCLA Medical Center illustrates the broader issues of medical device approval and oversight. The Olympus scopes used at the Los Angeles hospital, and at other facilities around the nation where the superbug infected patients, did not require any formal study or approval from the FDA before hitting the market because they were considered “substantially equivalent” to previous models. Equivalency, Eisler explained, allows thousands of devices to move from labs to patients with little more than a short 510(k) statement that the manufacturer files with the FDA.
Only 10 percent of devices – those that “sustain or support life, are implanted, or present potential unreasonable risk of illness or injury” – fall into the FDA’s “premarket approval” category, which requires a greater level of regulatory scrutiny, including safety and effectiveness studies, before sale.
Things are not much better once devices hit the market.
It is important to understand that medical devices seldom stand alone, Lucas said. They are usually part of much broader systems used to deliver care safely to patients. When a patient dies after a ventilator fails, for example, it may be that alarms, communications networks or staffing protocols designed to quickly detect and report the failure did not work.
Thus, if reporters want to understand what went wrong in a specific incident, they should ask about more than the device itself. “The system approach to an investigation is key to finding the answer,” Lucas said.
Also be aware that hospitals are supposed to have detailed plans that tell employees what to do when there is a problem with a device. Sequestering machines that malfunction, and downloading data from them before it is purged, are examples of best-practice steps that reporters can ask about.
Handling the pitch
So what should a reporter do when he or she receives a glowing pitch from a local hospital about the latest device?
Terhune suggests starting with Medicare’s open payments database to see if the doctors involved have a financial interest in the device that’s being pitched. While a financial interest is not necessarily a deal breaker for coverage, it is something reporters should know about going in and make sure they can adequately address in their coverage.
When writing about transparency in health care prices and quality, journalists should expose the myths that health care providers promote. That’s the advice Francois de Brantes gave during a session on price and quality transparency at Health Journalism 2015 last month.
Providers promote the false ideas that gathering accurate price and quality data is difficult, if not impossible, and that variations in price result from the severity of illness in populations, de Brantes explained. By debunking these myths, journalists would inform policymakers and the public that there are ways to calculate the prices of medical episodes of care accurately, and that price variation can be controlled. “Price varies because of the way physicians practice,” he said.
Among those myths:
Price is a trade secret
Disclosing prices would impede the ability of health plans, hospitals and physicians to compete effectively
Revealing prices enables collusion and thus violates antitrust law
Publishing prices leads to higher health care costs.
Traits passed between generations are not decided only by DNA, but can be brought about by other materials in cells.
Edinburgh scientists studied proteins found in cells, known as histones, which are not part of the genetic code, but act as spools around which DNA is wound.
Histones are known to control whether or not genes are switched on.
Researchers found that naturally occurring changes to these proteins, which affect how they control genes, can be sustained from one generation to the next and so influence which characteristics are passed on.
The finding demonstrates for the first time that DNA is not solely responsible for how characteristics are inherited.
It paves the way for research into how and when this method of inheritance occurs in nature, and if it is linked to particular traits or health conditions.
It may also inform research into whether changes to the histone proteins that are caused by environmental conditions – such as stress or diet – can influence the function of genes passed on to offspring.
The research confirms a long-held expectation among scientists that genes could be controlled across generations by such changes.
However, it remains to be seen how common the process is, researchers say.
Scientists tested the theory by carrying out experiments in a yeast with similar gene control mechanisms to human cells.
They introduced changes to a histone protein, mimicking those that occur naturally, causing it to switch off nearby genes.
The effect was inherited by subsequent generations of yeast cells.
The study, published in Science, was supported by the Wellcome Trust and the EC EpiGeneSys Network.
We’ve shown without doubt that changes in the histone spools that make up chromosomes can be copied and passed through generations. Our finding settles the idea that inherited traits can be epigenetic, meaning that they are not solely down to changes in a gene’s DNA.
Inderscience news: Unhealthy information remedy.
From the 2 April 2015 post
A little health knowledge can be a very dangerous thing, especially if the information comes from the Internet. Now, research published in the International Journal of Intelligent Information and Database Systems, describes a new quality indicator to remedy that situation.
Rey-Long Liu of the Department of Medical Informatics, at Tzu Chi University, in Hualien, Taiwan, explains how the internet has in many cases replaced one’s physician as the primary source of health information, particularly when someone is faced with new symptoms. Unfortunately, there is a lot of misinformation and disinformation readily available on the internet via myriad websites and networking groups that might, at first sight, offer a cure, but may lead to a putative patient following a hazardous route to health.
Liu has developed a simple metric that can be used to analyse a document or website and ascertain just how reliable the medical information in it might be. The metric counts the number of different health or medical terms in the longest passage of a given document. In experiments on thousands of real web pages, evaluated both manually and with this “health information concentration” metric, Liu has been able to identify with precision the pages that contain genuine medical information and to reveal quackery and ill-advised health pages. The approach is much more accurate than conventional web ranking by search engines and precludes the need for natural-language comprehension by the system.
“High-quality health information should be focused on specific health topics and hence composed of those text areas that are large enough and dedicated to health topics,” explains Liu. “The empirical evaluation reported in the paper justifies the hypothesis. The result also shows that a web page that happens to have many health terms does not necessarily contain quality health information, especially when the health terms are scattered in separate areas with a lot of non-health-related information appearing among them,” he adds. “Quality health information should be written by healthcare professionals who tend to provide both detailed and focused passages to present the information.”
The metric could readily be incorporated into search engine ranking algorithms to help healthcare consumers find high-quality information working alongside more conventional, general quality ranking parameters devised by the search engine companies for detecting relevance, importance, source and author of each webpage.
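Based only on the description above, the core of such a metric is simple to sketch. This is an illustrative reconstruction, not Liu’s actual algorithm: the passage-splitting rule and the tiny term list are assumptions, and a real system would draw on a proper medical lexicon.

```python
# Hypothetical sketch of a "health information concentration" style metric:
# count the number of DISTINCT health or medical terms appearing in the
# longest passage of a document. Term list and passage rule are illustrative.

HEALTH_TERMS = {  # toy vocabulary; a real system would use a medical lexicon
    "diabetes", "insulin", "glucose", "symptom", "diagnosis",
    "treatment", "dose", "infection", "blood", "therapy",
}

def health_information_concentration(document: str) -> int:
    """Count distinct health terms in the longest passage of the document.

    Passages are approximated as blank-line-separated paragraphs,
    and "longest" is measured in words.
    """
    passages = [p for p in document.split("\n\n") if p.strip()]
    if not passages:
        return 0
    longest = max(passages, key=lambda p: len(p.split()))
    words = {w.strip(".,;:!?()\"'").lower() for w in longest.split()}
    return len(words & HEALTH_TERMS)

focused = ("Type 2 diabetes is diagnosed when blood glucose stays high. "
           "Treatment often starts with diet changes before insulin therapy, "
           "and the dose is adjusted to the symptom profile of each patient.")
scattered = ("Buy our miracle cure today! Great prices. Diabetes gone fast."
             "\n\nCall now for a discount.")

print(health_information_concentration(focused))    # focused page scores higher
print(health_information_concentration(scattered))
```

A focused, professionally written passage concentrates many distinct medical terms in one place, while a page that merely scatters a few health words among unrelated content scores low, which matches Liu’s observation quoted above.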
“Frontiers in Psychology is an open access journal that aims at publishing the best research across the entire field of psychology. Today, psychological science is becoming increasingly important at all levels of society, from the treatment of clinical disorders to our basic understanding of how the mind works. It is highly interdisciplinary, borrowing questions from philosophy, methods from neuroscience and insights from clinical practice – all in the goal of furthering our grasp of human nature and society, as well as our ability to develop new intervention methods. The journal thus welcomes outstanding contributions in any domain of psychological science, from clinical research to cognitive science, from perception to consciousness, from imaging studies to human factors, from animal cognition to social psychology.”
Breast milk seems like a simple nutritious cocktail for feeding babies, but it is so much more than that. It also contains nutrients that feed the beneficial bacteria in a baby’s gut, and it contains substances that can change a baby’s behaviour. So, when a mother breastfeeds her child, she isn’t just feeding it. She is also building a world inside it and simultaneously manipulating it.
To Katie Hinde, an evolutionary biologist at Harvard University who specialises in milk, these acts are all connected. She suspects that substances in milk, by shaping the community of microbes in a baby’s gut, can affect its behaviour in ways that ultimately benefit the mother.
It’s a thought-provoking and thus far untested hypothesis, but it’s not far-fetched. Together with graduate student Cary Allen-Blevins and David Sela, a food scientist at the University of Massachusetts, Hinde has laid out her ideas in a paper that fuses neuroscience, evolutionary biology, and microbiology.
It begins by talking about the many ingredients in breast milk, including complex sugars called oligosaccharides. All mammals make them but humans have an exceptional variety. More than 200 HMOs (human milk oligosaccharides) have been identified, and they are the third most common part of human milk after lactose and fat.
Babies can’t digest them. Instead, the HMOs are food for bacteria, particularly the Bifidobacteria and Bacteroides groups. One strain in particular—Bifidobacterium longum infantis—can outcompete the others because it wields a unique genetic cutlery set that allows it to digest HMOs with incredible efficiency.
Why would mothers bother producing these sugars? Making milk is a costly process—mums quite literally liquefy their own bodies to churn out this fluid. Obviously, it feeds a growing infant, but why not spend all of one’s energy on filling milk with baby-friendly nutrients? Why feed the microbes too? “To me, it seems incredibly evident that when mums are feeding the microbes, they are investing on a greater return on their energetic investment,” says Hinde. By that, she means that setting up the right communities of microbes provides benefits for the baby above and beyond simple nutrition.
TOXMAP® is a Geographic Information System (GIS) that uses maps of the United States and Canada to help users visually explore data primarily from the US Environmental Protection Agency (EPA)’s Toxics Release Inventory (TRI) and Superfund Program.
It’s time to ask uncomfortable questions about the brain mechanisms that allow ‘ordinary’ people to turn violent, says Itzhak Fried.
What happens in the brains of people who go from being peaceable neighbours to slaughtering each other on a mass scale? Back in 1997, neurosurgeon Itzhak Fried at the University of California, Los Angeles, conscious of the recent massacres in Bosnia and Rwanda, described this switch in behaviour in terms of a medical syndrome, which he called ‘Syndrome E’. Nearly 20 years later, Fried brought sociologists, historians, psychologists and neuroscientists together at the Institute of Advanced Studies in Paris to discuss the question anew. At the conference, called ‘The brains that pull the triggers’, he talked to Nature about the need to consider this type of mass murder in scientific as well as sociological terms, and about the challenge of establishing interdisciplinary dialogue in this sensitive area.
What are the main features of the syndrome?
There was a myth that the primitive brain is held in check by our more-recently evolved prefrontal cortex, which is involved in complex analysis, and that the primitive, subcortical part takes over when we carry out brutal crimes such as repetitive murder. But I saw it the other way around. The signs and symptoms that I gathered in my research indicated that the prefrontal cortex, not the primitive brain, was responsible, because it was no longer heeding the normal controls from subcortical areas. I called it ‘cognitive fracture’ — the normal gut aversions to harming others, the emotional abhorrence of such acts, were disconnected from a hyper-aroused prefrontal cortex. I also proposed a neural circuitry in the brain that could perhaps account for this. In brief, specific parts of the prefrontal cortex become hyperactive and dampen the activity of the amygdala, which regulates emotion.
If mass murder happens because of activity in the brain, what does this say about personal responsibility?
Perpetrators of repeated killings have the capacity to reason and to solve problems — such as how best practically to kill lots of people rapidly. Proposing the existence of a syndrome does not absolve them of responsibility.
From the report (the 2015 World Happiness Report and supplemental files are available for download free of charge)
The World Happiness Report is a landmark survey of the state of global happiness. The first report was published in 2012, the second in 2013, and the third on April 23, 2015. Leading experts across fields – economics, psychology, survey analysis, national statistics, health, public policy and more – describe how measurements of well-being can be used effectively to assess the progress of nations. The reports review the state of happiness in the world today and show how the new science of happiness explains personal and national variations in happiness. They reflect a new worldwide demand for more attention to happiness as a criterion for government policy.
The world has come a long way since the first World Happiness Report launched in 2012. Increasingly happiness is considered a proper measure of social progress and goal of public policy. A rapidly increasing number of national and local governments are using happiness data and research in their search for policies that could enable people to live better lives. Governments are measuring subjective well-being, and using well-being research as a guide to the design of public spaces and the delivery of public services.
Harnessing Happiness Data and Research to Improve Sustainable Development
The year 2015 is a watershed for humanity, with the pending adoption by UN member states of Sustainable Development Goals (SDGs) in September to help guide the world community towards a more inclusive and sustainable pattern of global development. The concepts of happiness and well-being are very likely to help guide progress towards sustainable development.
Sustainable development is a normative concept, calling for all societies to balance economic, social, and environmental objectives. When countries pursue GDP in a lopsided manner, overriding social and environmental objectives, the results often negatively impact human well-being. The SDGs are designed to help countries to achieve economic, social, and environmental objectives in harmony, thereby leading to higher levels of well-being for the present and future generations.
The SDGs will include goals, targets and quantitative indicators. The Sustainable Development Solutions Network, in its recommendations on the selection of SDG indicators, has strongly recommended the inclusion of indicators of Subjective Well-being and Positive Mood Affect to help guide and measure progress towards the SDGs. We find considerable support from many governments and experts for the inclusion of such happiness indicators in the SDGs. The World Happiness Report 2015 once again underscores the fruitfulness of using happiness measurements for guiding policy making and for helping to assess the overall well-being in each society.
We could stop almost all psychotropic drug use without deleterious effect, says Peter C Gøtzsche, questioning trial designs that underplay harms and overplay benefits. Allan H Young and John Crace disagree, arguing that evidence supports long term use
Yes—Peter C Gøtzsche
Psychiatric drugs are responsible for the deaths of more than half a million people aged 65 and older each year in the Western world, as I show below.[1] Their benefits would need to be colossal to justify this, but they are minimal.[1-6]
Overstated benefits and understated deaths
The randomised trials that have been conducted do not properly evaluate the drugs’ effects. Almost all of them are biased because they included patients already taking another psychiatric drug.[1][7-10] Patients who are randomised to placebo after only a short wash-out period go “cold turkey” and often experience withdrawal symptoms. This design exaggerates the benefits of treatment and increases the harms in the placebo group, and it has driven patients taking placebo to suicide in trials in schizophrenia.[8]
Under-reporting of deaths in industry-funded trials is another major flaw. Based on some of the randomised trials that were included in a meta-analysis of 100,000 patients by the US Food and Drug Administration, I have estimated that there are likely to have been 15 times more suicides among people taking antidepressants than reported by the FDA. For example, there were 14 suicides in 9,956 patients in trials with fluoxetine and paroxetine, whereas the FDA recorded only five suicides in 52,960 patients, partly because the FDA only included events up to 24 hours after patients stopped taking the drug.[1]
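The “15 times” figure can be checked directly from the numbers quoted, as a back-of-the-envelope comparison of per-patient rates (not a re-analysis of the underlying trial data):

```python
# Per-patient suicide rates from the figures quoted above:
# 14 suicides in 9,956 patients (fluoxetine/paroxetine trials)
# versus 5 suicides in 52,960 patients (FDA data).
trial_rate = 14 / 9956    # about 1.4 suicides per 1,000 patients
fda_rate = 5 / 52960      # about 0.09 suicides per 1,000 patients
ratio = trial_rate / fda_rate
print(round(ratio, 1))    # → 14.9, i.e. roughly 15-fold
```

The ratio of the two rates comes out at about 14.9, consistent with the roughly fifteen-fold difference claimed in the text.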
No—Allan H Young, John Crace
Psychiatric conditions are common, complex, costly, and often long-term illnesses. More than a fifth of all health-related disability is caused by mental ill health, studies suggest, and people with poor mental health often have poor physical health and poorer long-term outcomes in both aspects of health.[26]
Raised standardised mortality rates and reduced life expectancy have been reported in people with psychiatric disorders such as psychosis, mood disorders, and personality disorders.[27] These increased death rates are only partly due to suicide and are mostly attributable to coexisting physical health disorders. There is thus a clear need to treat psychiatric disorders in order to reduce the long-term harm associated with them. The key question is whether psychiatric drugs do more harm than good.
All therapeutic interventions may potentially do both good and harm, and thorough evaluation of the relative benefits and harms of a treatment should be done for psychiatric drugs no less than for any others.[28] These evaluations are based on group data, which have to be applied to judgments about individual patients and can therefore be advisory only; the individual’s subjective experience is crucially important to consider.
What about harms?
Worldwide, regulatory agencies are responsible for ensuring that drugs work and are acceptably safe. Postmarketing surveillance continues after drugs are licensed. This can further refine, confirm, or deny the safety of a drug in the general population, which, unlike study populations, includes people with varied medical conditions. Several approaches are used to monitor the safety of licensed drugs, including spontaneous reporting databases, prescription event monitoring, electronic health records, patient registries, and record linkage between health databases.[30] These safeguards work to ensure that the drugs available do more good than harm.[30]
Nevertheless, many concerns have been expressed about psychiatric drugs, and for some critics the onus often seems to be on the drug to prove its innocence, rather than on a balanced approach to evaluating the available evidence.
Whether concerns are genuine or an expression of prejudice is not clear, but over time many concerns have been found to be overinflated. A few examples may be illustrative.
Image: Violence and Gender is the only peer-reviewed journal focusing on the understanding, prediction, and prevention of acts of violence, through research papers, roundtable discussions, case studies, and other original content.
Author Michael Stone, MD, of the Columbia College of Physicians and Surgeons and Mid-Hudson Forensic Psychiatric Hospital, New York, NY, provides an in-depth look at the scope of mass murders committed in the U.S. during recent decades, describing the crime as “an almost exclusively male phenomenon.” Most mass murderers have a mental illness characterized by a paranoid personality disorder that includes a deep sense of unfairness and a skewed version of reality. Unfortunately, this profile has often led to the unwarranted stigmatization of mentally ill people as a group as inherently dangerous, which is not the case.
Dr. Stone points in particular to the growing availability of semiautomatic weapons as a key factor contributing to the increasing rate of random mass shootings in the U.S. during the past 65 years. The number of events nearly doubled in the 1990s compared with the 1980s, for example.
Around a quarter of people experience depression at some point in their lives, two-thirds of whom are women. Each year more than 11m working days are lost in the UK to stress, depression or anxiety and there are more than 6,000 suicides. The impact of depression on individuals, families, society and the economy is enormous.
Front-line therapies usually include medication. All the commonly prescribed antidepressants are based on “the monoamine hypothesis”. This holds that depression is caused by a shortage of serotonin and noradrenaline in the brain. Existing antidepressants are designed to increase the levels of these chemicals.
The first generation of antidepressants were developed in the 1950s and a second generation came in the 1980s. Products such as Prozac and Seroxat were hailed as “wonder drugs” when they first came onto the market.
In the roughly 30 years since, these kinds of drugs have come to look tired and jaded. Patents have expired and there are doubts about their efficacy. Some scientists even argue the drugs do more harm than good.
There has been no third generation of antidepressants, despite moon-landing levels of investment in research. The antidepressant discovery process that gave rise to the earlier drugs is clearly broken. It is also apparent that this process never worked that well, since the only real improvement over the previous 60 years was a reduction in side effects.
By the mid-2000s the major pharmaceutical companies started disinvesting in this area. Government funding for basic research into depression and charitable funding followed a similar pattern. In 2010 GSK, AstraZeneca, Pfizer, Merck and Sanofi all announced that they had stopped looking for new antidepressants altogether. Professor David Nutt, the former government drug advisor, declared this to be the “annus horribilis” for psychiatric drug research. The likelihood now is that there will be no new antidepressants for decades.
However, there continues to be an urgent and pressing need for more effective treatments. The question the drug companies now need to ask themselves is, did they fail because the task was impossible, or did they fail simply because they got things wrong? Our view is that there was a systems failure.
Certain meditation techniques can promote behavior to vary adaptively from moment to moment depending on current goals, rather than remaining rigid and inflexible. This is the outcome of a study by Lorenza Colzato and Iliana Samara from the Leiden Institute of Brain and Cognition at Leiden University, published in Consciousness and Cognition.
Different meditation types, different effects
Colzato and her fellow researchers were the first to investigate if meditation has an immediate effect on behavior, even in people who have never meditated before. “There are two fundamental types of meditation that affect us differently,” Colzato says, “open monitor meditation (which involves being receptive to every thought and sensation) and focused attention meditation (which entails focusing on a particular thought or object).”
Thirty-six people who had never meditated before took part in the experiment. Half practiced open monitor meditation and the other half focused attention meditation, each for 20 minutes. After meditating, Samara asked participants to perform a task in which they had to continuously adjust their behaviour and adaptively discriminate irrelevant information from relevant information as quickly as possible.
Meditation optimizes adaptive behavior
Compared with participants who performed open monitor meditation (OMM), people who performed focused attention meditation (FAM) were significantly better at adapting and adjusting their behavior from moment to moment. Colzato: “Even if preliminary, these results provide the first evidence that meditation instantly affects behavior and that this impact does not require practice. As such, our findings shed an interesting new light on the potential of meditation for optimizing adaptive behavior.”
If a time machine were available, would it be right to kill Adolf Hitler when he was still a young Austrian artist to prevent World War II and save millions of lives? Should a police officer torture an alleged bomber to find hidden explosives that could kill many people at a local cafe? When faced with such dilemmas, men are typically more willing to accept harmful actions for the sake of the greater good than women. For example, women would be less likely to support the killing of a young Hitler or torturing a bombing suspect, even if doing so would ultimately save more lives.
According to new research published by the Society for Personality and Social Psychology, this gender difference in moral decisions is caused by stronger emotional aversion to harmful action among women; the study found no evidence for gender differences in the rational evaluation of the outcomes of harmful actions.
“Women are more likely to have a gut-level negative reaction to causing harm to an individual, while men experience less emotional responses to doing harm,” says lead research author Rebecca Friesdorf. The finding runs contrary to the common stereotype that women, being more emotional, are therefore less rational, Friesdorf says. The journal article was published online in the Personality and Social Psychology Bulletin on April 3, 2015.
“…today we must assume that if our generation is suffering hardship, violence or the like, not only will we struggle to forget these difficult periods ourselves but our genes too will remember them, carrying traces to be passed on to the next generation or even several generations.”
From the 8 May 2015 post at Scilogs
The science behind a rapid paradigm shift
When the first human genome was decoded, popular thinking went: “If we know the genes, we know the person.” Today, barely 15 years later, science is in the middle of an exciting new area of research, which is entertaining interested members of the public with exciting, if not always serious, headlines. The field alleges that traumatic experiences can be passed down through the generations and even significantly affect the lives of grandchildren. As it turns out, the reality is that genes not only control, but are also controlled. And that is what epigenetics is all about – how are genes controlled and what factors can influence them?
Epigenetics refers to the meta-level of genetic regulation. Under the influence of external factors, epigenetic mechanisms regulate which genes are turned on and off, making our fixed genetic material more flexible. At the biochemical micro level, epigenetic regulators determine how densely packed individual genomic regions are, and therefore how accessible they are. This works through small chemical groups that are attached to or detached from the DNA and its packaging. The resulting marks on the genome are read by specialised enzymes, which then switch the genes on or off.
As reasonable as this appears, one consequence is that we will have to say goodbye to a long-established dogma: the idea that genes are immutable in the creation of a living being. And, looking back through the history of science: was Lamarck right after all? The 19th-century French biologist claimed that organisms acquired traits and passed them on to future generations. It is precisely this mechanism that epigeneticists are on the trail of today. Laboratory experiments with mice have demonstrated that a particular, targeted marking of individual genes results in the changes being passed on to the offspring. Epigenetic changes, however, are so-called soft changes, as they can be undone. And that is medicine’s great hope: to be able to intervene in the control mechanism from the outside, for example to work against senile dementia.
At this point, the level of possible tension around this new field of research becomes clear. On the one hand, the idea that our human condition can be so strongly “manipulated” by environmental influences can be very disturbing. And rightly so. Previously, we may have had the upbeat expectation that although we are experiencing suffering, the next generation will have it better. However, today we must assume that if our generation is suffering hardship, violence or the like, not only will we struggle to forget these difficult periods ourselves but our genes too will remember them, carrying traces to be passed on to the next generation or even several generations.
A study often mentioned in this context is based on the analysis of data collected in the Netherlands over the years of hunger in 1944-45, during which the population there suffered particularly difficult conditions. The children born at this time were not only smaller, but, as adults, had an increased risk of obesity, cardiovascular problems and neuropsychiatric disorders. In turn, their offspring were again smaller than average – despite food being in ready supply and living conditions having greatly improved.
This blog presents a sampling of health and medical news and resources for all. Selected articles and resources will hopefully be of general interest but will also encourage further reading through posted references and other links. Currently I am focusing on public health, basic and applied research and very broadly on disease and healthy lifestyle topics.
Several times a month I will post items on international and global health issues. My Peace Corps Liberia experience (1980-81) has formed me as a global citizen in many ways and has challenged me to think of health and other topics in a more holistic manner. (For those wishing to see pictures of a 2009 Friends of Liberia service trip to this West African country, please visit www.fol.org. My photo album is included).
Do you have an informational question in the health/medical area?
Email me at email@example.com. I will reply within 48 hours.
My professional work experience and education include over 10 years as a medical librarian and a Master’s in Library Science. In my most recent position I enjoyed contributing to our library’s blog, performing in-depth literature searches, and collaborating with faculty, staff, students, and the general public.
While I will never be able to keep up with the universe of current health/medical news, I subscribe to the following to glean entries for this blog:
Krafty (Medical) Librarian, “a collection of writings from Michelle Kraft on items of interest to medical librarians. She tends to write on technology and medical libraries, but she also writes more generally about librarianship, medicine and health”
Research Buzz, “news about search engines, digital archives, online museums, databases, and other Internet information collections since 1998”
Free Government Information, a “place for initiating dialogue and building consensus among the various players (libraries, government agencies, non-profit organizations, researchers, journalists, etc.) who have a stake in the preservation of and perpetual free access to government information”
Scout Report, a “weekly publication offering a selection of new and newly discovered Internet resources of interest to researchers and educators”