A closer look at the UMC Call to Action Part 1

UPDATE (2012-01-04): this entire series is available as a PDF here.

Abstract

The United Methodist Church’s Call to Action Steering Team Report (the “Report”) is an attempt to increase congregational vitality throughout the denomination. The Report claims to be “predicated upon sound and accurate understandings.” In this series of blog posts, I argue that it displays basic problems in statistical analysis and interpretation. These problems culminate in the Report’s “vitality index,” which calculates that a predominantly white congregation is eight times more likely to be vital than a predominantly Hispanic congregation.

I conclude with some more general observations.

Introduction

Early in the process there was strong agreement by the Steering Team that not only should eventual recommendations be rooted in the findings of the independent, outside organizational audit/assessment called for in the team’s charge from the [Council of Bishops], but that all considerations for reordering the life of the church should be predicated upon sound and accurate understandings about how to direct resources in order to foster vitality in congregations.

Call to Action Steering Team Report, page 13

Within the United States, the United Methodist Church (UMC) “ranks as the largest Mainline denomination, the second largest Protestant church after the Southern Baptist Convention, and the third largest Christian denomination” (from Wikipedia).  The UMC Council of Bishops initiated a “Call to Action” to confront declining attendance trends within the United States. The Call to Action Steering Team Report (the “Report”) is one result of this process. It aims to “increase the number of vital congregations.”

My primary interest in the Report is whether it’s “predicated upon sound and accurate understandings” (as the passage quoted above claims). What helped prompt the following analysis are these words from the Report (page 41):

Why was so much money spent to find out results that are so patently obvious?

The CTA mandate called for the use of an independent, qualified, outside expert to conduct the operational assessment of UMC structures and processes (districts, annual conferences, and the general church). The Steering Team decided early on that, given the primary role of congregations in the work of making disciples of Jesus Christ for the transformation of the world, a no less rigorous, independent, and objective approach should be used in determining best practices for building and sustaining congregational vitality and effectiveness. The existence of many competing views on the subject that are expressed in books, theories taught by leaders, and in various programs reinforced the value of investing in unprecedented data-mining research—as contrasted with opinion gathering—that objectively and systematically uses massive amounts of data to determine cause and effect relationships. That the results are similar to some conventional wisdom and theories is the good news, as are the results that challenge some of our perceptions. This gives us opportunities to build on the practical learning of many across the Church. The added benefit is that we now have a presentation of complex data with informative findings that have been verified by a thoroughly independent and objective group of experts using state-of-the-art research tools.

There are a number of things in this passage that I find confusing or troubling, but I want to focus on the claim that the Report “objectively and systematically uses massive amounts of data to determine cause and effect relationships.” There’s a problem with this claim: it’s simply false. No single statistical analysis can determine cause and effect, and it’s impossible to determine cause and effect using only the statistical methods explicitly mentioned in the Report.
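
To make the point concrete, here is a minimal sketch using simulated data; nothing in it comes from the Report, and the variable names are arbitrary. A lurking variable drives two measurements, so they end up strongly correlated even though neither causes the other, and correlation or regression output alone cannot distinguish this situation from a genuine causal relationship.

```python
# Simulated data only: a lurking variable z drives both x and y, so x and y
# are strongly correlated even though neither one causes the other.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

z = rng.normal(size=n)             # unobserved common cause
x = 2.0 * z + rng.normal(size=n)   # x depends on z, not on y
y = 3.0 * z + rng.normal(size=n)   # y depends on z, not on x

r = np.corrcoef(x, y)[0, 1]
print(f"correlation(x, y) = {r:.2f}")  # strong, with no causal link between x and y
```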

Now it’s true that I haven’t taken an undergraduate statistics course for a few years. It’s also true that the statistics courses I took were in medical biostatistics rather than part of a business program. So I didn’t learn statistics amid the life-and-death struggle to get a report onto an executive’s desk in time to make a few extra pennies per share. Instead, I learned statistics in a delicate academic environment where we had time to learn the basic concepts. Unlike business statisticians and their crises, medical biostatisticians might only have to deal with the embarrassment of an ineffective and dangerous treatment needlessly killing patients.

I will be the first to admit that statistical interpretation isn’t easy. If the Report’s problems were limited to the absurd claim about cause and effect, I wouldn’t even bother to post. But having analyzed the Report further, I’m of the opinion that it is a deeply flawed statistical analysis and interpretation. Since I’m currently between professional opportunities, I might as well take the time to make my concerns public.

In writing this series of blog posts, I have two audiences in mind:

  1. All who are interested in seeing statistics used responsibly. This is a rare opportunity to see how statistical analysis occurs in business: these types of reports are almost always proprietary. In this case, the client publicly released the Report. (Due to the organization and structure of the UMC, this public accessibility should not be a surprise.) I hope this becomes, in the best sense of the phrase, a “peer review” devoted to illuminating a sound and accurate understanding of statistical methods.
  2. Those within the United Methodist Church who might be skeptical of the Report but also find it difficult to express this skepticism in the appropriate technical language.

A quick overview of this series of blog posts:

The rest of this post, part 1, will indicate where the Report can be obtained online (as PDF files), then provide an overview of the statistical analysis I am criticizing.

Part 2 takes a closer look at a basic truth: correlation by itself does not imply causation.

Part 3 looks at some uncomfortable consequences regarding the Report’s vitality index.

Part 4 takes a step back and looks at the big picture: does the Report satisfy its own objectives? I follow with a suggestion, then a summary of parts 1 through 4 (including another look at the “Examined Methodology” below).

The Afterword acknowledges my personal interest in this Report.

The Report’s Location

Two parts of the Report are currently available here: “The Steering Team Report” (file name CTA_STEERING TEAM_ RPT_1-44.PDF) and “Congregational Vitality | Towers Watson Report” (file name CTA_TOWERS WATSON_RPTS_45-126.PDF). These two files contain almost all of my citations from the Report.

Back in April 2011, the original web page hosting the Report also included the file CTA_APEXRPTS_127_248.PDF, which contained Appendixes 7, 8, and 9. The file currently labeled “Operational Assessment | Apex Report” (file name CTA_OpperationalAssessment_ RPT_1-44.pdf) contains only Appendix 8. I briefly cite Appendix 7 and Appendix 9.

The file “Vitality Research Explained” (file name Vitality-Research-explained.pdf) duplicates slides and commentary from the Towers Watson Report above.

Examined Methodology

I am concentrating on the Towers Watson contribution to the analysis of vital congregations, specifically the “Data Analysis” section. (I’m not qualified to comment on the survey work.) As stated on pages 13-14, the Report concentrates on congregations in the United States. Here is a summary of the Data Analysis (page numbers are citations from the Report):

  1. The Steering Team selected a group of measurements that would be used as “proxies” to indicate vitality in a congregation. These proxy measurements are also called “indicators of vitality” (page 63).
  2. Using the above measurements, Towers Watson (TW) created three groups of measurements, each group known as a factor (page 64). Each factor was used to assign a score to each congregation; since there are three factors, each congregation received three scores (pages 64-65). These three scores were used to create a vitality index. To get to the heart of the matter: for a congregation to receive the label “high vitality,” it had to rank in the top 25% on at least two of the three scores and in the top 75% on the remaining score (pages 66-67); a sketch of this rule, using made-up data, follows this list.
  3. TW calculated the vitality index for 32,228 UMC congregations.  This index classified 4,961 UMC congregations as “high vital” (page 68).  Page 69 suggests that “high vitality” is a synonym for “vital congregation.”
  4. TW used regression analysis to determine which of 127 measurements had the strongest positive relationship with the vitality index (pages 71, 106, 114, 121). Those measurements that had the strongest positive correlation with the vitality index were labeled “vital drivers” (page 73).
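
To make steps 2 through 4 concrete, here is a small sketch using randomly generated stand-in data. The factor scores, the number of congregations, and the 127 candidate measurements below are invented, not the Report’s data, and for simplicity the sketch correlates measurements with the binary high-vitality label rather than a continuous index. It illustrates the classification rule as I read pages 66-67, not TW’s actual computation.

```python
# Stand-in data: scores, counts, and measurements are invented, purely to
# illustrate the structure of the classification and "driver" steps.
import numpy as np

rng = np.random.default_rng(1)
n_congregations = 1_000

# Step 2: three hypothetical factor scores per congregation.
scores = rng.normal(size=(n_congregations, 3))

# Percentile rank of each congregation within each factor (0 = lowest, 1 = highest).
ranks = scores.argsort(axis=0).argsort(axis=0) / (n_congregations - 1)
in_top_25 = ranks >= 0.75   # top quartile on a factor
in_top_75 = ranks >= 0.25   # not in the bottom quartile

# Rule as I read pages 66-67: top 25% on at least two of the three factors,
# and top 75% on the remaining factor.
high_vitality = (in_top_25.sum(axis=1) >= 2) & in_top_75.all(axis=1)
print(f"{high_vitality.sum()} of {n_congregations} congregations labeled high vitality")

# Step 4: correlate each candidate measurement with the vitality label; the
# strongest positive correlations are what the Report calls "drivers."
measurements = rng.normal(size=(n_congregations, 127))   # 127 fake measurements
correlations = np.array(
    [np.corrcoef(measurements[:, j], high_vitality)[0, 1] for j in range(127)]
)
top_ten = correlations.argsort()[::-1][:10]
print("indices of the ten strongest candidate drivers:", top_ten)
```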

In part 2 I examine the issue of correlation and causation.