Inspired

Below you will find the note I sent to Wendy Davis’ organization, Deeds Not Words.  It is my first move toward engaging with activist groups to assist with Citizen Science.

 

To Wendy Davis at Deeds Not Words

I attended the Circle of Health event at the Texas Theater last night and was struck by the fact that the women on the panel were voicing the need for “evidence-based” care. As a nurse and researcher, searching for evidence is at the forefront of what I do. However, I realize that researchers’ own implicit bias and ignorance of the needs of women, especially women of color, affect the research they do to produce much of the “evidence” surrounding maternal mortality.

One case in point is the focus of the Texas Task Force on Maternal Morbidity and Mortality on the measurement of maternal mortality. The political act of measuring maternal mortality has taken center stage and even found its way into mainstream obstetric medicine. The new “enhanced” method, which measures mortality only in the first 42 days following pregnancy and counts only deaths “related to pregnancy,” significantly narrows the view of the problem, making maternal mortality a problem that can be solved by improving hospital care rather than by addressing the systemic bias and social injustices which so heavily influence the risk of a mother’s death. For example, the Task Force has issued no recommendations for addressing maternal mortality caused by assault even though homicide is one of the leading causes of death among women who are pregnant or have been pregnant within one year of their death.

As Marsha Jones pointed out, the researchers and clinicians serving on the task force are talking among themselves rather than with the people affected. Professionals learn primarily from their peers and see the problem through professional lenses. Doctors and nurses view the problem as a technical one which can be solved by adhering to treatment “bundles”. Economists and policy makers view the problem as one of scarce or maldistributed resources. Certainly, those views of the problem can lead to needed action, but until the voices of women and their families are heard, critical knowledge of the full scope of challenges women of color face every day will remain hidden from the view of those who could ease those challenges and reduce the risk that they will lead to death.

My colleagues and I at TWU have been studying infant mortality and maternal mortality for more than 20 years. Our research team is made up of nurses who have expertise in public health, midwifery, ob/gyn nursing and intimate partner violence. After 20 years we have learned a great deal about the scope of infant mortality and maternal mortality and the factors related to them. What we have not learned, however, is how to remove the systemic biases that create and sustain factors contributing to these preventable events.

I believe that President Obama’s idea of Citizen Science is the key to finding the missing evidence needed to solve problems of infant mortality and maternal mortality. Paraphrasing his words, “Science should be about all people, for all people, and by all people.” In that spirit I would like to offer whatever research skills our team can provide to groups such as those we heard on last night’s panel. Women of color are noticeably absent from the Task Force on Maternal Morbidity and Mortality and from the academic settings in which much research is carried out. Likewise, researchers are noticeably absent from activist organizations which have the will and energy to put evidence into practice. I am reaching out to Deeds Not Words to address at least one half of this deficit.

Please contact me if you see a need our team could meet.

Patti Hamilton, RN, PhD
Emeritus Professor
College of Nursing
Texas Woman’s University
940 759 2055
phamilton@twu.edu

Never too old to learn something new…..

old_dog

RAND Appropriateness Method (RAM)

I love finding something that is new to me about a topic I thought I was well informed about.  That happened recently while working on a project to create guidelines for publications about missed, rationed, or unfinished nursing care. Our work group is made up of two nurses from Switzerland, one from Italy, and me.

The graduate student from Switzerland has been responsible for setting up a Delphi-like process to gain information about which elements of a research article are particularly important to include when reporting research results about missed, rationed, or unfinished nursing care. She settled on the RAND/UCLA Appropriateness Method (RAM).

Even though I used the Delphi method (which was also developed by RAND) for my own dissertation in 1988, I had never heard of the RAM. So, I searched for background and explanation of the method. I also checked to see what publications in nursing had reported using the RAM. I found very few examples of RAM being used to evaluate nursing interventions or policies. See the bottom of this post for an example that might be of interest to nurses, however.

The overview below came directly from the RAND/UCLA Appropriateness Users’ Manual.

With so few randomized controlled trials (termed the “gold standard” of research) on which to base decisions about appropriateness of nursing actions, I believe the RAM has a promising role to play in evidence-based decision making in nursing.

I would love to hear from you about your thoughts on the method and its role in nursing or in your own field.

An Overview of the Method

The basic steps in applying the RAM are shown in Figure 1. First, a detailed literature review is performed to synthesise the latest available scientific evidence on the procedure to be rated. At the same time, a list of specific clinical scenarios or “indications” is produced in the form of a matrix which categorises patients who might present for the procedure in question in terms of their symptoms, past medical history and the results of relevant diagnostic tests. These indications are grouped into “chapters” based on the primary presenting symptom leading to a patient’s being referred for treatment or considered for a particular procedure.

 Figure 1: The RAND/UCLA Appropriateness Method

 An example of a specific indication for coronary revascularization in the chapter on “Chronic Stable Angina” is: A patient with severe angina (class III/IV) in spite of optimal medical therapy, who has 2-vessel disease without involvement of the proximal left anterior descending artery, an ejection fraction of between 30 and 50%, a very positive stress test, and who is at low to moderate surgical risk.

 A panel of experts is identified, often based on recommendations from the relevant medical societies. The literature review and the list of indications, together with a list of definitions for all terms used in the indications list, are sent to the members of this panel.

For each indication, the panel members rate the benefit-to-harm ratio of the procedure on a scale of 1 to 9, where 1 means that the expected harms greatly outweigh the expected benefits, and 9 means that the expected benefits greatly outweigh the expected harms. A middle rating of 5 can mean either that the harms and benefits are about equal or that the rater cannot make the judgement for the patient described in the indication. The panellists rate each of the indications twice, in a two-round “modified Delphi” process. In the first round, the ratings are made individually at home, with no interaction among panellists.

In the second round, the panel members meet for 1-2 days under the leadership of a moderator experienced in using the method. Each panellist receives an individualised document showing the distribution of all the experts’ first round ratings, together with his/her own specific ratings. During the meeting, panellists discuss the ratings, focusing on areas of disagreement, and are given the opportunity to modify the original list of indications and/or definitions, if desired.

 After discussing each chapter of the list of indications, they re-rate each indication individually. No attempt is made to force the panel to consensus. Instead, the two-round process is designed to sort out whether discrepant ratings are due to real clinical disagreement over the use of the procedure (“real” disagreement) or to fatigue or misunderstanding (“artifactual” disagreement).

Finally, each indication is classified as “appropriate,” “uncertain” or “inappropriate” for the procedure under review in accordance with the panellists’ median score and the level of disagreement among the panellists. Indications with median scores in the 1-3 range are classified as inappropriate, those in the 4-6 range as uncertain, and those in the 7-9 range as appropriate. However, all indications rated “with disagreement,” whatever the median, are classified as uncertain. “Disagreement” here basically means a lack of consensus, either because there is polarisation of the group or because judgements are spread over the entire 1 to 9 rating scale.
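The classification rule just described is mechanical enough to sketch in code. The snippet below is an illustration only, not RAND’s own software; it uses one common definition of “disagreement” (at least a third of the panellists in each extreme of the scale, which for the usual nine-member panel means at least three in the 1-3 range and three in the 7-9 range), and as the manual notes, several alternative definitions exist.

```python
from statistics import median

def has_disagreement(ratings):
    # One common definition (others exist): at least a third of the
    # panellists rate in the 1-3 range AND at least a third rate in
    # the 7-9 range -- i.e., the panel is polarised.
    low = sum(r <= 3 for r in ratings)
    high = sum(r >= 7 for r in ratings)
    return low >= len(ratings) / 3 and high >= len(ratings) / 3

def classify_indication(ratings):
    # Any indication rated "with disagreement" is uncertain,
    # whatever its median.
    if has_disagreement(ratings):
        return "uncertain"
    m = median(ratings)
    if m <= 3:
        return "inappropriate"
    if m <= 6:
        return "uncertain"
    return "appropriate"

print(classify_indication([7, 8, 8, 9, 7, 8, 7, 9, 8]))  # appropriate
print(classify_indication([1, 2, 2, 9, 8, 9, 1, 8, 2]))  # uncertain (disagreement)
```

Note how the second panel’s median alone (8) would have made the indication “appropriate”; it is the disagreement rule that pushes it into the uncertain category.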

As discussed in Chapter 8, various alternative definitions for disagreement have been used throughout the history of the RAM. Appropriateness studies sometimes categorise levels of agreement further to identify indications rated “with agreement” and those rated with “indeterminate” agreement (neither agreement nor disagreement). Depending on how the appropriateness criteria are to be used, it may sometimes be desirable to identify those indications rated with greater or lesser levels of agreement. If necessity criteria are also to be developed, a third round of ratings takes place, usually by mail, in which panellists are asked to rate the necessity of those indications that have been classified as appropriate by the panel. The RAM definition of necessity (Kahan et al., 1994a) is that:

  • The procedure is appropriate, i.e., the health benefits exceed the risks by a sufficient margin to make it worth doing.
  • It would be improper care not to offer the procedure to a patient.
  • There is a reasonable chance that the procedure will benefit the patient.
  • The magnitude of the expected benefit is not small.

All four of the preceding criteria must be met for a procedure to be considered as necessary for a particular indication. To determine necessity, indications rated appropriate by the panel are presented for a further rating of necessity. This rating is also done on a scale of 1 to 9, where 1 means the procedure is clearly not necessary and 9 means it clearly is necessary. If panellists disagree in their necessity ratings or if the median is less than 7, then the indication is judged as “appropriate but not necessary.” Only appropriate indications with a necessity rating of 7 or more without disagreement are judged “necessary.”

Comparison with Other Group Judgement Methods

The RAM is only one of several methods that have been developed to identify the collective opinion of experts (Fink et al., 1984). Although it is often called a “consensus method,” it does not really belong in that category, because its objective is to detect when the experts agree, rather than to obtain a consensus among them. It is based on the so-called “Delphi method,” developed at RAND in the 1950s as a tool to predict the future, which was applied to political-military, technological and economic topics (Linstone et al., 1975).

The Delphi process has since also come to be used in a variety of health and medical settings. The method generally involves multiple rounds, in which a questionnaire is sent to a group of experts who answer the questions anonymously. The results of the survey are then tabulated and reported back to the group, and each person is asked to answer the questionnaire again. This iterative process continues until there is a convergence of opinion on the subject or no further substantial changes in the replies are elicited.
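As a rough illustration (my own sketch, not a standard implementation), the Delphi stopping rule can be expressed as a convergence check between consecutive rounds. Here “no further substantial changes” is operationalised, purely for the sake of example, as every question’s median rating shifting by less than some tolerance:

```python
from statistics import median

def delphi_converged(prev_round, this_round, tol=1.0):
    # Each argument is a list of questions; each question is the list of
    # the experts' anonymous ratings for that round. Convergence is
    # declared when no question's median shifts by tol or more.
    return all(
        abs(median(prev) - median(curr)) < tol
        for prev, curr in zip(prev_round, this_round)
    )

round_1 = [[4, 5, 7, 8], [2, 3, 3, 9]]   # ratings per question, first round
round_2 = [[5, 6, 6, 7], [3, 3, 4, 8]]   # after feedback and re-rating
print(delphi_converged(round_1, round_2))  # True -> stop iterating
```

Real Delphi studies use a variety of stopping criteria (stability of individual responses, pre-set number of rounds, measures of spread); the median-shift check above is just one simple possibility.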

The RAM is sometimes miscast as an example of the Nominal Group Technique (NGT). NGT is a highly structured process in which participants are brought together and asked to write down all their ideas on a particular subject. The moderator asks each person to briefly describe the most important idea on his or her list, and continues around the table until everyone’s ideas have been listed. After discussion of each topic, participants are asked to individually rank order or rate their judgement of the item’s importance on a numerical scale. Different mathematical techniques are used to aggregate the results. The RAM, unlike the NGT, begins with a highly structured list of clinical indications, and the discussion is tightly linked to the basic measurement of appropriateness.

A third group judgement method is the Consensus Development Conference. The U.S. National Institutes of Health (NIH) have a mandate to evaluate and disseminate information about health care technologies and biomedical research (Kanouse, 1989). To this end, they have developed what are known as NIH Consensus Conferences, which bring together a wide variety of participants, including physicians, researchers and consumers, who are charged with developing a mutually acceptable consensus statement to answer specific, pre-defined questions about the topic. This process includes conducting a literature review, summarising the current state of knowledge, presentations by experts and advocates, and audience discussion. These conferences frequently last 2 or more days, and do not end until the participants have agreed on a written statement.

Many European countries have developed their own versions of Consensus Conferences. At its centre, the RAM is a modified Delphi method that, unlike the original Delphi, provides panellists with the opportunity to discuss their judgements between the rating rounds. Contrary to the fears of the original developers of Delphi, experience with the RAM and the contemporaneous literature on group processes both indicate that the potential for bias in a face-to-face group can be largely controlled by effective group leadership (e.g., Kahan et al., 1994b). Thus, while panellists receive feedback on the group’s responses, as is done in the classic Delphi method, they have a chance to discuss their answers in a face-to-face meeting, similar to the NGT and NIH Consensus Conferences.

The following article used a modified RAM and illustrates the application of the method to quality of life.

Bokhour, B. G., Pugh, M. J., Rao, J. K., Avetisyan, R., Berlowitz, D. R., & Kazis, L. E. Improving Methods for Measuring Quality of Care: A Patient-Centered Approach in Chronic Disease. Medical Care Research and Review, 66(2), 147–166.

 

Planning in Complex Systems: Is that possible?

Unlike other living creatures, humans can adapt to uncertainty. They can form hypotheses about situations marked by uncertainty and can anticipate their actions by planning. They can expect the unexpected and take precautions against it. (Dietrich Dorner, 1990)

Complex environments are those in which numerous actors/variables interact in a systemic fashion. Each variable’s state or action is constantly responding to changes in the actions or states of all the other variables. Therefore, the behavior of the entire system changes as well.

In complex situations it is impossible to predict the future state of a single variable and, consequently, the future state of the system as a whole is uncertain.

Dorner, whose words introduce this essay, is the author of The Logic of Failure: Recognizing and Avoiding Error in Complex Situations, published in English in 1997. He conducted research using computerized simulations of planning scenarios in order to learn more about how people solve problems when faced with complex situations. Results of his research can be found in his early writing.

He found that people who were able to achieve desirable results within experimental simulations employed the following strategies:

  • They first gathered information in order to observe changes in the situation and develop an overall picture of all aspects of the systems involved;
  • They generated hypotheses to explain the effect of change on the system;
  • They generated plans based upon the accuracy of their hypotheses;
  • They took action based upon their plans;
  • They continued to gather information frequently in order to evaluate their progress and identify unintended consequences of their actions;
  • They did not change their actions too quickly;
  • They used self-reflexive examination and critique of their own way of acting;
  • They adapted their way of acting to the specific situation;
  • They were flexible – able to forego lengthy planning and hypothesizing when 1) the situation was time critical, 2) the risk of error was low or 3) the needed information was impossible to obtain; and
  • They did not focus all their attention on current problems, but also considered long- term developments and side effects of the actions taken.

Dorner pointed out that those who achieved desirable outcomes from their planning shared certain characteristics.  They were agile.  They “…adapted their thinking to the situation.”  They “…used a lot of small ‘local’ rules, each of which is applicable in a limited area.” In other words, successful planners in complex situations did not have a single generic approach.  They observed, planned, took action, reflected on their actions and plans, changed as the situation changed and “…adapted to the given circumstances in the most sophisticated ways.”

Dorner’s research is enlightening but it appears a bit vague and even distant from the actual cognitive and psycho-motor tasks involved in planning for change in “real world” situations. Add to that the challenge of needing to work with others to plan change, and planning in complex situations becomes wickedly complex.

I found a toolkit that I believe can bring Dorner’s work into organizations where individuals and groups constantly deal with complex situations.  The toolkit was developed by Tom Wujec.

I first heard Wujec speak in a TED Talk. However, the approach he uses is not a new one. I was a member of a community planning group working in the Philippines in the early 1990s. We used a very similar approach to planning, and others have since modified that simple but effective framework. The early framework’s steps included:

  • Sharing the group’s concept of the problem or circumstance needing change;
  • Articulating and visually representing a shared vision of exactly what you want to achieve;
  • Taking a detailed inventory of the barriers to reaching the vision;
  • Developing specific strategies and tactics for removing the barriers;
  • Taking frequent measurements of successes and/or setbacks in removing the barriers; and
  • Repeating the steps until the vision is realized or revised…

 

Wujec’s method of shared conceptualization, reflection, creativity, and listening lends itself beautifully to planning in today’s complex organizations. Wujec’s toolkit is comprehensive, detailed, and comes with lots of examples of the method in action.

I think that planning for change in today’s complex environments can be overwhelming if it comes with an expectation that “failure” is not acceptable.  To avoid a sense of defeat when plans do not immediately result in desirable outcomes I believe we need to do the following:

  • Involve as many individuals and departments as possible when planning. The diversity, valuable insights, and energy resulting when representatives from throughout the system come together cannot be overemphasized.
  • Prioritize, taking into account the costs that come with planning for change.  Planning together in groups can be costly in time, energy, and resources.  Demands for change are accelerating and organizations cannot possibly address every needed change with equal intensity and allocation of resources.
  • Accept the possibility that success will be incremental and iterative. That is to say, when one approach does not achieve the goal, the process can continue until a satisfactory level of success is reached.
  • When necessary, build into the change process a “fail safe” plan. In other words, where safety and quality are of vital importance any plan for change should include an early warning system where unintended consequences can be identified and addressed rapidly.
  • Recognize that no change is ever permanent. Either the original need for change will shift or disappear, or the change that was effective at one point in time no longer achieves good results.
  • Make peace with the idea that change is not a sign of failure or malfunction. Rather, change is the new “steady state”.

Grey’s Anatomy Tackles Maternal Mortality: Sort of…..

Kudos to Grey’s Anatomy for raising the issue of maternal mortality!!

It isn’t often that a TV show tackles a research interest of mine, but Grey’s Anatomy used maternal mortality as an episode’s theme recently. Of course, their take on the issue was hospital-centric. As a result, viewers may have been left with the impression that all risk of death occurs during the immediate pre- and post-partum periods. The Texas Task Force’s Biennial Report in 2016 stated that the majority of maternal deaths took place more than 42 days after the end of pregnancy.

Definitely, we need to improve hospital care for mothers and babies.  We need to remember, however, that mothers are at risk throughout pregnancy and for at least one year after pregnancy ends.

The Having Kids website does a good job of pointing out where recent cuts in health care benefits and spending play a big part in rising rates of maternal mortality.  The problem is definitely complex and solutions will require the political will to address this issue.

What are your thoughts about Havingkids.org and their fair start model?

 

Texas’ New Method of Measuring Maternal Mortality: Is it Valid?

I had intended my next post to be about Thinking Outside the Box but there has been an argument brewing in the news and in academic journals that I think is worth mentioning first.


Abstract:
The argument started in 2016 when Marian MacDorman and her colleagues published findings showing an exceptionally high maternal mortality rate in Texas.   MacDorman’s paper was picked up by public media including National Public Radio and the Dallas Morning News and the startling news spread across the state and the U.S.  In 2013 the Texas Legislature had established a Task Force to monitor maternal mortality and MacDorman’s report focused increased attention on their efforts.  In 2016 the Task Force issued its biennial report showing that most maternal mortality in 2011-2012 had occurred more than 42 days following the end of pregnancy and that drug overdose, homicide and suicide were also recorded as underlying causes of maternal mortality.  The Texas Legislature, in 2017, expanded the scope of the Task Force to include monitoring maternal morbidity as well as mortality.  As often happens, the expanded scope did not come with commensurate funding.

In April, 2018, a group of authors from the Task Force published a paper in the journal Obstetrics and Gynecology in which they challenged MacDorman’s method of analysis and claimed to have developed an “enhanced” method for studying maternal mortality. Below I will give you my view of the issues at stake and will explain why I do not believe the Task Force’s method should be considered an “enhanced” method of measuring the extent of risk and incidence of maternal death. Instead, it should be considered a method which focuses Task Force efforts on understanding what occurred to result in mortality in a specific subset of cases.

Their new method limits the definition of maternal mortality to deaths within the first 42 days following the end of pregnancy and does not include deaths in such women from overdose, homicide, or suicide. The Centers for Disease Control and Prevention (CDC) states that codes referring to maternal death include not only the first 42 days following the end of pregnancy, but also the later period lasting from 43 to 365 days following pregnancy. The Task Force method is hospital-centric in that it appears to focus on deaths from physiological complications which can be managed in hospital settings during the early postpartum period, ignoring the more intractable causes of death that women face after the postpartum period ends. I am not calling the method methodologically unsound. Rather, I find that it lacks validity as a means of comprehensively identifying and addressing the roots of maternal mortality in Texas.

The implications of the suggested “enhanced” method with its suggested omissions are great.  The method:

  • Focuses only on typical co-morbid conditions treatable during and immediately following pregnancy
  • Omits the later part of the postpartum period, when services are often less accessible than during pregnancy and the first 42 days following its end
  • Ignores important evidence-based causes of maternal mortality such as drug use, homicide, and suicide
  • Uses a single year’s data to justify its claim of improvement over other methods of measuring maternal mortality
  • Provides little, if any, direction for developing interventions that are innovative and reach beyond hospital walls.

 

What is Research Validity?

I want to be transparent in my reasoning so I will share what I have long believed to be true about judging the validity of research.

 

brinberg_mcgrath

Years ago I used a book titled Validity and the Research Process by Brinberg and McGrath (1986) to teach the final synthesis course in TWU’s Nursing PhD program. That one book addressed issues of construct validity, measurement reliability and validity, external and internal validity, and real-world issues involved in conducting research and applying the resulting findings. The approach the authors took was not one of simply describing how to understand key research principles but, rather, how to link the positive and negative results involved in combining these principles so that the results were valid. That book is a treasure that I still find applicable all these years later.

My discussion in this post involves, primarily, Brinberg and McGrath’s first stage of research planning and evaluation which they call Validity as Value. By value they mean the worth, usefulness, or importance of the research.  Rigorously carried out research is only truly valid if it yields value.

Validity as Value in the Preparatory Stage
In stage 1 the researcher should focus on finding:

  • the most valuable events and phenomena for study
  • the most valuable methods for the collection and analysis of data
  • the most valuable concepts and explanations that interpret the observations

Keeping Validity as Value in mind, I want to acquaint you with an ongoing clinically, academically, and philosophically contested study of the rate of maternal mortality in Texas and the U.S. This isn’t only a research issue; the debate over just how bad the Texas maternal death rate is will likely be translated into state and even national funding decisions, voting behaviors, and, more importantly, into the lives of Texas women and their families from all walks of life.

The newspaper article below appeared immediately following a scientific article in the May issue of Obstetrics and Gynecology written by members of the Texas Maternal Mortality Task Force along with officials and staff of the Texas Department of State Health Services (DSHS). The public in Texas will read the newspaper article below and interpret its message in various ways in light of their own opinions, experiences, and beliefs about research.

texas_tribune_error

My experiences and beliefs about the May 2018 article by the Texas Task Force and the popular press articles reporting it derive from my experience, first as a Public Health Nurse in Fort Worth, then from working with similar data for over 20 years, and from my belief that the validity of research findings should first be measured against the principles in Brinberg and McGrath’s Stage 1 guidelines.

Let’s start with the first consideration of Brinberg and McGrath’s values in Stage 1 research: the most valuable events and phenomena for study. The Task Force obtained 147 death records from CDC’s National Center for Health Statistics for analysis. These records were selected because they represented deaths among Texas residents whose death records indicated the underlying cause of death was related to obstetric causes within one year of the end of pregnancy in 2012. There is no question that the Task Force members realized the significance of even one death to a woman during or within one year of pregnancy and the value of knowing the extent of this problem in Texas.

The DSHS website states: “Maternal Mortality and Morbidity Task Force was created by Senate Bill 495, 83rd Legislature, Regular Session, 2013. The multidisciplinary task force within the Department of State Health Services (DSHS) will study maternal mortality and morbidity. The task force will study and review cases of pregnancy-related deaths and trends in severe maternal morbidity, determine the feasibility of the task force studying cases of severe maternal morbidity, and make recommendations to help reduce the incidence of pregnancy-related deaths and severe maternal morbidity in Texas.”

The World Health Organization (WHO) has stated “Maternal mortality is a health indicator which shows very wide gaps between rich and poor, urban and rural areas, both between countries and within them…”  The Texas Legislature, which approved and funded the Task Force, the Task Force Members, popular media such as the Texas Tribune and National Public Radio, and the WHO would seem all to be in agreement that studying maternal mortality passes the test for VALUE in studying these events.

Next, let’s consider the most valuable phenomenon to consider when studying maternal mortality.  The May Task Force paper states that the phenomenon for study was operationalized in such a way as to reduce the original 147 cases  to a final 56.  “For all 147 obstetric-coded deaths, a woman was considered to have a confirmed maternal death while pregnant or within 42 days postpartum 1) if her death record could be matched with a live birth or fetal death occurring within 42 days of the date of death; 2) if her medical records, autopsy or other death investigation records, or information received from contacting the death certifier indicated either pregnancy at the time of death or pregnancy within 42 days of the date of death; or 3) to err on the side of caution, if the death certificate narrative indicated pregnancy at time of death or within 42 days of the date of death when sufficient medical records were not received.”(p.764)
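The confirmation rule quoted above amounts to a three-way disjunction applied to each death record. The sketch below is purely illustrative; the field names are invented for the example, not taken from DSHS data:

```python
def confirmed_within_42_days(record):
    # A death is confirmed as occurring while pregnant or within 42 days
    # postpartum if ANY of the three quoted criteria holds: a matched
    # birth/fetal death record, supporting medical or investigation
    # records, or the death certificate narrative itself.
    return (
        record.get("matched_birth_or_fetal_death_within_42d", False)       # criterion 1
        or record.get("medical_records_indicate_pregnancy", False)         # criterion 2
        or record.get("certificate_narrative_indicates_pregnancy", False)  # criterion 3
    )

records = [
    {"matched_birth_or_fetal_death_within_42d": True},
    {"medical_records_indicate_pregnancy": False},
]
print(sum(bool(confirmed_within_42_days(r)) for r in records))  # 1 of 2 confirmed
```

Note that criterion 3 is itself a fallback: it applies only when sufficient medical records were not received, which the flat boolean above does not capture.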

Death records are coded using an international classification of disease (ICD) system.  That system was updated to version 10 in 2003.  The CDC uses the following definitions when defining maternal deaths.

icd10_definitions

The Task Force chose to focus on deaths within a period of 42 days following the end of pregnancy. They may have chosen that period because medical records and other death investigation results would be more likely to include correct information on the time between pregnancy and death than if the death occurred more than 42 days after the termination of pregnancy (from birth, miscarriage, or abortion from any cause). However, here is one of the validity issues Brinberg and McGrath wrote about: the trade-off between precision and realism.

In their 2016 biennial report the Task Force wrote that the majority of maternal deaths in 2011-2012 occurred later than 42 days after the end of pregnancy.  Below is the figure reporting these results. One of their findings stated, “A majority of maternal deaths occur later than 42 days after delivery. Maternal deaths were confirmed by linking each mother’s information to a birth or fetal death event occurring up to 365 days prior. Time between the two events was calculated in days and a survival plot was generated to help visualize the relationship between maternal mortality and time. Figure 3 shows the percent of women in the 2011-2012 maternal death cohort who remained alive at particular points in time over the 365 days following their delivery.”(p.7)

[Figure 3: survival plot showing the percent of the 2011-2012 maternal death cohort remaining alive over the 365 days following delivery]

Note: The data described in the figure above refer to 2011 and 2012, while the May paper describes only 2012 data.
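The survival plot in Figure 3 is, in essence, the percent of the death cohort whose death had not yet occurred at each day postpartum. A sketch with made-up days-to-death values, chosen only to echo the report’s roughly 60 percent figure, not taken from the Task Force’s data:

```python
# Invented days between end of pregnancy and death for a hypothetical cohort.
days_to_death = [5, 12, 30, 41, 60, 90, 150, 200, 280, 360]

def percent_remaining(days_to_death, day):
    """Percent of the death cohort still alive at a given day postpartum."""
    alive = sum(1 for d in days_to_death if d > day)
    return 100.0 * alive / len(days_to_death)

# Evaluating at day 42 shows what the 42-day window leaves out: in this toy
# cohort, 60% of the deaths fall outside the Task Force's time frame.
print(percent_remaining(days_to_death, 42))  # 60.0
```

Plotting this function from day 0 to day 365 would reproduce the shape of a survival curve like Figure 3.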

They go on to explain:

“The WHO defines all maternal deaths within 42 days of the end of pregnancy as pregnancy-related deaths, irrespective of the cause of death (WHO, 2016). However, close to 60 percent of maternal deaths in 2011-2012 occurred after 42 days post-delivery (Figure 3). Case review of these deaths will determine whether they were pregnancy-related, pregnancy-associated, or neither. Nevertheless, it is clear that women remain at risk for the first year after their pregnancy has ended. It is possible that lack of continuity of care plays a role in these later maternal death outcomes.” (p. 7)

Next, I call your attention to the Stage 1 Validity as Value principle: the most valuable method for collecting and analyzing data.

The Task Force reported in 2016 that the death records were matched with birth or fetal death records. I have found no evidence that the Task Force’s “confirmation by linking” protocol described in that report was ever judged to be incorrect.  Could the decision to omit late maternal deaths have been based on some difference between the 2011 and 2012 data? In the absence of a clear explanation, I am forced to conclude that the decision to focus only on deaths within 42 days was based entirely on the desire to be precise in identifying “true” maternal deaths.  They mention that in their two-step case-finding method they searched birth records and found six additional maternal deaths not noted in death records, but they omitted those six because death had occurred 43 or more days postpartum. They state in their May paper, “Maternal deaths that occurred 43 or more days postpartum were also excluded from analyses because these deaths occurred after the time frame of interest.” (p.764)

Finding no explicit explanation beyond a goal of precision, I must conclude that precision took precedence over realism, and that the decision concerning which phenomenon to study was based more strongly on seeking an irrefutable method than on need within the population.  That being the case, I believe it is vitally important to speak up and point out that statements such as the one appearing in the Texas Tribune article, “…the number of women who died dropped from 147 to 56,” are grossly misleading. Those 147 women did die.  They were not counted by the Task Force simply because their death records did not satisfy the criteria for a maternal death under the Task Force’s definition and time frame. Thus, the value of the methodology and analysis chosen by the Task Force should be measured in terms of precision rather than in terms of learning about the full spectrum of causes and preventive strategies where maternal mortality is concerned.  We might call this a trade-off of specificity over sensitivity.
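The specificity-over-sensitivity trade-off can be made concrete with a small calculation. The counts below are loosely suggested by the paper’s figures (56 confirmed deaths; 11 plus 6 genuine maternal deaths falling outside the 42-day window) but are purely illustrative, not the Task Force’s own analysis:

```python
def sensitivity(true_pos, false_neg):
    """Of the genuine maternal deaths, what fraction did the method count?"""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Of the non-maternal deaths, what fraction did the method exclude?"""
    return true_neg / (true_neg + false_pos)

# Hypothetical counts: a strict confirmation rule rejects unverified cases
# (high specificity) but also rejects genuine maternal deaths whose records
# fall outside the window (reduced sensitivity).
tp, fn = 56, 17   # counted deaths vs. genuine deaths excluded (e.g., after day 42)
tn, fp = 74, 0    # correctly excluded non-pregnancy deaths vs. false confirmations

print(round(sensitivity(tp, fn), 2))   # 0.77
print(round(specificity(tn, fp), 2))   # 1.0
```

A method tuned this way is nearly impossible to refute case by case, but it achieves that by declining to count deaths it cannot verify, which is exactly the realism cost at issue.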

The Task Force’s focus on precision and specificity is understandable in light of its charge by the legislature to “…determine the feasibility of the task force studying cases of severe maternal morbidity.”  It has taken the Task Force tremendous effort, commitment, and more than 60 months working with limited resources to complete its analysis of maternal mortality in Texas in 2012. How feasible is it to continue such costly and difficult analysis in the future? Certainly it is much more feasible to analyze 56 deaths than 147. But doesn’t Texas need a system that does not lag more than 5 years behind in its monitoring of this critical public health indicator?

Uneven quality of information in death records was reported in the May paper. For example, the authors wrote: “One hundred of the 147 obstetric-coded deaths were unable to be confirmed as maternal deaths. Eleven of the 147 obstetric-coded deaths (7.5%) occurred 43 or more days postpartum (outside of the 42-day time frame of interest), 74 (50.3%) showed no evidence of pregnancy after data matching and record review, and 15 (10.2%) had insufficient information to make a determination…”(p.765)

Certainly, the quality of available data can be at fault in any over-estimation of the number of maternal deaths, but as the May paper reports, maternal deaths found outside of 42 days postpartum were simply not included. The Task Force also did not mention the equally troubling problems with the accuracy of the birth records used to confirm maternal deaths.  Over my 20 years of working with birth certificate data, I have found them to have serious deficits in quality and completeness.  (For more information on birth record quality, see these two dissertations conducted at Texas Woman’s University: Polancich, S. (2002). Birth certificate data quality: Examining the maternal medical risk factor text field…, and Restrepo, E. (2002). Birth certificate data quality: a study of medical risk…)

Isn’t it likely that errors in birth records caused failures to confirm matches between birth and death records, and thus an under-estimation of maternal mortality?

Finally, I want to comment on Brinberg and McGrath’s third principle of Validity as Value, namely: the most valuable concepts and explanations that interpret the observations.  The Task Force’s May paper uses the term “enhanced method” to describe its analysis of maternal mortality. Webster’s dictionary defines enhance as “heighten, increase; especially: to increase or improve in value, quality, desirability, or attractiveness.”  I agree that their method removed some erroneous cases of maternal mortality.  I cannot agree that their method of analyzing maternal mortality for the purpose of preventing it has been improved as a result.  Omitting mortality after 42 days postpartum, using only obstetric codes for the underlying causes of death, and excluding violence, suicide, and drug overdose mask other significant sources of mortality.  The Task Force’s 2016 report included the following figure and explanation:

[Figure: causes of maternal mortality]

“For this examination of the statewide trends, maternal deaths were identified by linking the mothers’ information from their birth/fetal death record with information from their own death record occurring within one year of pregnancy termination, regardless of the ICD-10 code assigned. Some alternative methodologies do begin by identifying all obstetric deaths, labeled with an ‘O-code’ (ICD-10 code O00-O99). However, this approach likely fails to identify maternal deaths with non-natural causes, such as overdose, suicide, and homicide.” Clearly, drug overdose, homicide, and suicide are important causes of maternal mortality not included in the “enhanced” method presented in the May paper.
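The O-code screen the 2016 report warns about can be sketched as a simple filter. The cases below are invented examples (the O00-O99 obstetric chapter and the external-cause codes for overdose and assault are real ICD-10 ranges, but the case assignments are illustrative only):

```python
# Invented cases with ICD-10 underlying-cause codes.
deaths = [
    ("case1", "O72"),   # obstetric hemorrhage: an O-code, so it is kept
    ("case2", "O15"),   # eclampsia: an O-code, so it is kept
    ("case3", "X42"),   # accidental overdose by narcotics: not an O-code
    ("case4", "X95"),   # assault by firearm (homicide): not an O-code
]

def is_obstetric_coded(icd10):
    """True only for codes in the O00-O99 obstetric chapter."""
    return icd10.startswith("O")

kept = [case for case, code in deaths if is_obstetric_coded(code)]
print(kept)  # ['case1', 'case2']: the overdose and homicide fall out of view
```

Starting instead from linked birth/death records, as the 2016 report did, keeps all four cases in view regardless of how the underlying cause was coded.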

I do not believe the May paper presents a convincing argument for calling its method “enhanced”.  The authors concluded, “…a method enhanced with data matching and review of relevant medical and death investigation yields more accurate ratios.” (p. 769)  True, but omitting previously identified significant causes of death from overdose, homicide, and suicide, and limiting the time frame for death to 42 days or less, substantially diminishes the information needed to reduce overall maternal mortality. I would have less trouble if their claim were not that the method is more accurate, but rather that it focuses on a verified subset of deaths for closer scrutiny.

The Task Force’s interpretation of their results rests on analysis of a single year of data (2012).  Those same data, along with data from 2011, were analyzed and the results included in their biennial report, where the Task Force emphasized the importance of deaths after 42 days and the prevalence of homicide, suicide, and drug overdose.

In the discussion of their May paper, the authors state that their method should be interpreted as finding a large number of miscoded deaths. (p. 768) Certainly, there were miscoded death records, but there was also a purposeful omission of legitimate death records that did not conform to their inclusion criteria. Such subtle points are often overlooked, and the results overstated, as in the Texas Tribune’s April 9th article stating that the number of maternal deaths was much lower.

Maternal mortality has become a contentious issue.  One reason is that it has been suggested (without research evidence to support the claim) that the sharp increase in Texas maternal mortality coincided with Texas’ cutbacks on Medicaid services and reductions in funding for Planned Parenthood.

Marian MacDorman and her colleagues first reported exceptionally high rates of maternal mortality in Texas in 2016.  See MacDorman, M., Declercq, E., Cabral, H., & Morton, C. (2016). Recent Increases in the U.S. Maternal Mortality Rate: Disentangling Trends From Measurement Issues. Obstetrics and Gynecology, 128(3), 447-55.  In their discussion they state, “There were some changes in the provision of women’s health services in Texas from 2011-2015, including the closing of several women’s health clinics.” (p. 454)  They do not mention Planned Parenthood specifically.  Nevertheless, researchers supported by an organization with a strong anti-abortion agenda picked out that one statement and have challenged the findings of MacDorman and her team in the same May 2018 issue of Obstetrics and Gynecology in which the Task Force’s latest paper appears.  The letter to the editor challenging the MacDorman team’s research is from James Studnicki and John Fisher of the Charlotte Lozier Institute.  They write: “Two important research and public policy issues underscore our request for a formal correction or clarification of these findings. First, the alleged 2010–2012 doubling, juxtaposed with the closing of women’s health clinics around that time, enabled a conventional wisdom to emerge and to be widely disseminated in the popular media that the closing of Planned Parenthood clinics in Texas was causing maternal deaths. In scientific terms, no such cause and effect has ever been demonstrated, nor does the MacDorman article provide the temporal sequence to suggest that relationship.” (p. 934)

Value in research is difficult to obtain because demands for rigor and precision must be balanced with the messiness and ambiguity of the focal problem as it exists in the real world.  The Texas Task Force on Maternal Mortality and Morbidity is faced with an almost impossible charge to accurately assess the extent and causes of maternal mortality along with their new obligation to track severe maternal morbidity.  They have experimented with a method that is feasible and reproducible.  The cost of that feasibility is the omission of additional causes of maternal death later in the year following pregnancy.

The implications of the suggested “enhanced” method, with its omissions, are great.  The method:

  • focuses only on typical co-morbid conditions treatable during and immediately following pregnancy;
  • omits the later part of the postpartum period, when services are often less accessible than during pregnancy and the first 42 days following the end of pregnancy;
  • ignores evidence-based causes of maternal mortality such as drug use, homicide, and suicide;
  • uses a single year’s data to claim its improvement over other methods of measuring maternal mortality;
  • provides little, if any, direction for developing interventions that are innovative and reach beyond hospital walls;
  • is extremely costly and time consuming, with lag times of 4-5 years after deaths occurred.

These limitations result in missed opportunities to learn what to do, and what is working, to reduce maternal mortality.

In the end, it is no less tragic if 56 women died following pregnancy than if 147 did.  As the WHO states, “Most maternal deaths are preventable, as the health-care solutions to prevent or manage complications are well known. All women need access to antenatal care in pregnancy, skilled care during childbirth, and care and support in the weeks after childbirth.”  No woman should die in Texas or elsewhere from a preventable cause.

Thinking Outside the Box

In a few weeks I will be giving a presentation and leading discussions on “thinking outside the box”.  I thought I would try out some of my ideas here on my blog.  The presentation will be for a local hospital system as part of their “Focus on Excellence”.  They are seeking new ideas for reaching ambitious performance outcomes.  I was invited to bring ideas about complexity leadership and nonlinear relationships.

Today I sat in on a presentation by a nurse leader who has a Doctor of Nursing Practice degree and observed the group work and discussions she led.  I was very impressed with the level of knowledge in the group regarding Six Sigma, Deming’s PDSA cycle, and Principles of Change.

I think I will employ the “box” metaphor by introducing four sides of the box that confine our thinking in health care and oversimplify the exquisitely complex environment within hospitals.  Those environments can become even more challenging for hospitals which operate as for-profit organizations, constrained by policies developed at a distance by persons who have little if any first-hand knowledge of the effects of those policies on actual staff and patients.

As I said, my box will have four sides.  Each side represents a limiting factor that can keep us from seeing realistic, effective, measures for improving care.  The sides include 1) the language we use to name and discuss problems and solutions; 2) our basic ideas about how the world does or should work; 3) prevailing scientific tools for solving problems; and 4) fear and mistrust of those operating within the environment.

Over the next few weeks I will cover one of those sides and would really appreciate your observations, critique, and discussion of the material.  Stay tuned.

DNA or ZNA, which affects health more?

I just read an interesting report on the effect of geographic location on life expectancy.  As a former Public Health Nurse, I can’t say I was totally surprised.  What did surprise me was the notion that our social environment can outweigh our genetic makeup when it comes to health.

The Robert Wood Johnson Foundation has a website you might find interesting. The website explains the effect of our physical surroundings on health and life expectancy.  I found a link to this map of one Texas city, El Paso.

[Map: life expectancy by location in El Paso]

Visit this site and check to see how your county’s health ranks among Texas counties.

Can you guess which county in Texas has the longest life expectancy?  What about best health outcomes?

These ideas are influencing our research on Maternal Mortality at Texas Woman’s University.  We are mapping rates of maternal risk and maternal death as well as the locations of maternal health resources.  We are in the process of obtaining birth records at the zip code level.  However, death records are only available at the city level.  As you can see from the example of El Paso, even within a single city there can be wide variation in health and longevity related to location.
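At whatever geographic level the records allow, the core mapping calculation is a simple rate per area. A sketch with invented counts (the zip codes are El Paso’s, but the numbers are made up and are not our research data):

```python
# Hypothetical birth and maternal death counts by zip code.
births_by_zip = {"79901": 1200, "79912": 2500, "79936": 4100}
maternal_deaths_by_zip = {"79901": 2, "79912": 1, "79936": 1}

def deaths_per_100k_births(zip_code):
    """Maternal mortality ratio for one area: deaths per 100,000 live births."""
    births = births_by_zip[zip_code]
    deaths = maternal_deaths_by_zip.get(zip_code, 0)
    return 100_000 * deaths / births

for z in sorted(births_by_zip):
    print(z, round(deaths_per_100k_births(z), 1))
```

Computing this ratio per zip code, then joining the results to map boundaries, is what reveals the within-city variation the El Paso life expectancy map illustrates.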