A closer look at the UMC Call to Action Part 4

(This is the fourth part in a series taking a closer look at the United Methodist Church’s (UMC) Call to Action Steering Team Report.

The first part is here.)

In this post, I take a step back and look at the Report’s bigger picture.

“Increase the number of vital congregations”

The Report repeatedly emphasizes a quantifiable goal: to “increase the number of vital congregations.”  We see this goal stated on pages 14, 23, 26, and 29.  The last sentence on page 23 indicates the urgency:

Anticipated resources and the urgency to increase the number of vital congregations require a near-term reduction in scope and scale of general Church work to regain momentum.

The current webpage (screenshot) containing parts of the Report has this introductory sentence (emphasis in original):

Extensive research analyzed the factors that made our churches vital and vibrant. The biggest concern? There are so few vital churches.

I think that, technically, this goal should be restated as “increasing the percentage of congregations that are vital.”  A congregation is either vital or not vital.  (We can’t speak about “average vitality” in this situation.)  Suppose we start out with 50 vital congregations out of 100 total congregations (call this the “Before” state).  With this starting point, imagine three different scenarios that each increase the number of vital congregations to 100:

Strictly speaking, by the single measure of “increasing the number of vital congregations,” all three scenarios have succeeded equally.  Each scenario has “increased the number of vital congregations” by the same amount: 50.  But that’s absurd: it’s not enough to increase the number of vital congregations.  We’d prefer to do so most efficiently (scenario three).  The “percentage of congregations that are vital” measure captures this intuition, as the sketch below shows.
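
The original chart isn’t reproduced here, so here is a minimal sketch of the arithmetic.  The “After” totals are my own assumed numbers; the only constraints from the argument are that each scenario reaches 100 vital congregations and that scenario three does so most efficiently:

    # Before: 50 vital congregations out of 100 total.
    before_vital, before_total = 50, 100

    # Three hypothetical "After" scenarios (totals assumed for illustration);
    # each one reaches 100 vital congregations.
    scenarios = {
        "scenario one":   (100, 200),
        "scenario two":   (100, 150),
        "scenario three": (100, 100),
    }

    print(f"Before: +0 vital, {before_vital / before_total:.0%} of congregations vital")
    for name, (vital, total) in scenarios.items():
        print(f"{name}: +{vital - before_vital} vital, {vital / total:.0%} of congregations vital")

By the count measure, all three scenarios are indistinguishable (+50 apiece); the percentage measure separates them immediately (50%, 67%, 100%).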

If we think of our goal as “increasing the percentage of congregations that are vital,” we need to take another look at the Report’s method of counting vital congregations: the vitality index.  Specifically, we need to look at how the vitality index is defined.

A closer look at the vitality index’s definition

Suppose someone came to us and said, “We have a problem: less than half of our children are above average!  We need ALL of our children to be above average!”  Two thoughts:

  • It’s impossible for every member of a group to be “above average.”
  • By itself, “below average” is not an insult.  It simply states how a measurement of one member of a group compares with the same measurement for everyone else in that group.  In and of itself, “above average” or “below average” tells us nothing.  It’s entirely possible for an above-average performance to be sub-par (in a group of unprofitable stores, an above-average store is one that loses money more slowly than the others).  In a different situation, it’s possible for a below-average performance to be outstanding (a Hall of Fame player with below-average statistics among Hall of Famers would still be welcome on many teams).

Now let’s imagine a situation much closer to the vitality index in the Report.

Imagine that we have a professional sports league that has been suffering declining attendance.  Its commissioner hires a management consultant to help turn the problem around.  The consultant decides that the goal is to increase the overall quality of play in the league, and to do so develops the following methodology:

  1. Identify player excellence by analyzing performance in three areas: Offense, Defense, and Leadership.
  2. Create a combined measure from the above factors in order to identify Really Good Players.  “Really Good Player” here has a technical definition: roughly, a Really Good Player is one who is above average in at least two areas.  (Note that a Really Good Player is, by definition, “above average” in the sense of being above the league average.  Further analysis shows that about 15% of the league’s players in any given year will turn out to be Really Good Players.)
  3. By using sophisticated statistical analysis, identify those traits that are associated with being a Really Good Player.
  4. Use those traits as the basis for further research on improving the overall quality of play in the league.

The league commissioner might ask, “What does all this have to do with improving league attendance?”  Instead, he takes one look at the report and exclaims, “Wow, this is great!  You’ve identified our problem: only 15% of our players are Really Good Players!”  Two thoughts on how the commissioner’s response completely misses the point:

  • The purpose of identifying Really Good Players is to specify role models in order to improve performance for the entire league.  It is not to grade good and bad players.  It is not to identify players who can do no wrong – a Really Good Player cannot use that label to justify being paid 100 billion dollars. Failure to be a Really Good Player does not by itself mean a player should be released from a team – this should go without saying.
  • In any given year, about 15% of players (and only about 15% of players) will always be called Really Good Players.

This hypothetical situation with the league commissioner is similar to the Report.

  • The Report states that the vitality index was not intended to be a grade of individual congregations (page 37, repeating what is on page 113).
  • The vitality index as used in the Report defines a congregation as “vital” if that congregation is above average in at least two areas out of three (pages 66-67).  Based on how the vitality index is defined, I think that the vitality index will always identify roughly 15% of congregations as vital, give or take a couple of percentage points due to sampling differences.  (We can’t be sure of this: the Report fails to divulge the details of the factor analysis model.)

Remember: our restated goal is “to increase the percentage of congregations that are vital.”  If the method of counting vitality always returns the same percentage, how can we talk about increasing this percentage?
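
A minimal simulation makes the difficulty concrete.  The assumptions here are mine: independent, normally distributed factor scores and my reading of the cutoff rule (top 25% on two of the three scores, top 75% on the third); the Report discloses neither the factor loadings nor the exact model.  Because the cutoffs are ranks within the cohort, the share labeled “vital” cannot move, no matter how much every congregation improves:

    import numpy as np

    rng = np.random.default_rng(0)

    def share_labeled_vital(scores):
        # "Vital" = top 25% on two of the three factor scores and top 75%
        # on the remaining score (my reading of pages 66-67).
        top25 = scores >= np.quantile(scores, 0.75, axis=0)
        top75 = scores >= np.quantile(scores, 0.25, axis=0)
        vital = np.zeros(len(scores), dtype=bool)
        for a, b, c in [(0, 1, 2), (0, 2, 1), (1, 2, 0)]:
            vital |= top25[:, a] & top25[:, b] & top75[:, c]
        return vital.mean()

    # Cohort A: today's congregations, three standardized factor scores each.
    # Cohort B: the same congregations after EVERY one improves by a full
    # standard deviation on EVERY factor.
    cohort_a = rng.normal(size=(32_228, 3))
    cohort_b = cohort_a + 1.0

    print(f"share labeled vital, before: {share_labeled_vital(cohort_a):.1%}")
    print(f"share labeled vital, after:  {share_labeled_vital(cohort_b):.1%}")

Both cohorts come out at roughly 11% “vital.”  (Positively correlated factor scores would push that fixed share nearer the Report’s roughly 15%, but it would still be fixed.)  A rank-based definition can describe a distribution; it cannot register denomination-wide improvement.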

To summarize, I have no idea how the authors of the Report were planning to measure the increase in the number of vital congregations.

Why is the United Methodist Church even doing this?

Ultimately, I’m not clear on what this emphasis on “vital congregations” is supposed to accomplish.  The closest the Report gets to stating “what the problem is” would likely be the following paragraph:

While the world-wide economic crisis was an important impetus in igniting the [Call to Action] effort, the sense of urgency that propelled the work was prompted by a much wider array of factors. These included the four-decade decline in membership; an aging and predominantly Anglo constituency; declines in worship attendance, professions of faith and baptisms; and other unfavorable trends related to clergy health and job satisfaction, decreases in giving, and concerns about the vitality of our engagement with and service to communities in the United States and Europe (pages 10-11).

The Report talks about the need for quantifiable measures (e.g., page 34).  The paragraph quoted above provides some simple ones: membership; age and diversity of constituency; worship attendance; professions of faith and baptisms; monetary giving.  So what does the Report do?  It creates its own “vitality index” and talks extensively about the apparent relationships between this vitality index and “drivers of vitality.”  The Report then fails to demonstrate any relationship between either the “vitality index” or the “drivers of vitality” and the denomination-wide measures mentioned above!

For example, the Report neglects to show how these drivers of vitality will change the age distribution of the constituency.  Page 194 mentions that the United Methodist Church has “approximately half the US age representation in the age 18 to 44 generations.”  Yet for all the Report’s talk of small groups, page 78 tells us that the “Number of programs for young adults and adults” “did NOT have a significant impact on vitality.”  How well are 18- to 44-year-olds represented in the so-called vital congregations?  We don’t know.  It could be that these congregations are finding other ways of connecting with this demographic.  It could also be that these congregations are still failing to include this demographic but in a more lively manner.  I repeat: we don’t know.

The Report fails to demonstrate how an emphasis on “vitality” will reverse the declines quoted above.  It’s true that “Growth” is one of the factors constituting the vitality index, but it’s only one factor out of three (page 64).  We don’t know any of the factor loadings, so we don’t know how much weight each indicator of vitality receives.  What we have, then, is an unclear and complicated relationship between “vitality” and “Growth.”  On top of this, we have to assume (a) that the membership and budget trends of the past five years and the trend in giving for the past three years will all continue indefinitely into the future; and (b) that the growth of these “vital” congregations will offset the total losses throughout the denomination.  These assumptions sound more like wishful thinking than an analysis that’s been “verified by a thoroughly independent and objective group of experts.”
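
To see why the undisclosed loadings matter, here is a hypothetical sketch; both congregations and both loading vectors are invented, and nothing below comes from the Report:

    import numpy as np

    # Standardized scores on the three factors for two invented congregations:
    #                    Attendance  Growth  Engagement
    cong_a = np.array([ 1.2, -0.5,  0.8])  # strong today, but shrinking
    cong_b = np.array([-0.3,  1.5,  0.1])  # modest today, but growing fast

    # Two invented loading vectors: one barely weights Growth, one leans on it.
    loadings = {
        "growth-light": np.array([0.6, 0.1, 0.3]),
        "growth-heavy": np.array([0.2, 0.7, 0.1]),
    }

    for name, w in loadings.items():
        print(f"{name}: A scores {cong_a @ w:+.2f}, B scores {cong_b @ w:+.2f}")

Under the first set of loadings the shrinking congregation looks more “vital” (+0.91 versus +0.00); under the second, the growing one does (+1.00 versus -0.03).  Without the actual loadings, we can’t say which picture the Report’s index paints.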

When business managers talk extensively about “quantifiable measures,” to me it’s a polite way of saying, “It’s about money.”  I simply can’t find any clear statement in the entire Report regarding what the fiscal problem is.  The closest statement we get is probably this one on page 132:

The economic model of the Local Church(s) has not been managed to harmonize the expense structure with the volume (membership/attendance) trends.

What might a “harmonized expense structure” sound like?  I can’t find any sample chords in this Report.  It’s almost as if all that’s left is to make noises in the name of “vitality.”

A simple suggestion

For all the talk of quantitative measures, I don’t see why the denomination can’t just talk about money.  Money is a problem that organizations have been dealing with for centuries.  Income has to at least match expenses: many people can understand this.

This fiscal concern is an objective measure, yet at the same time it manifests itself differently in specific congregations.  A growing suburban congregation will not have the same concerns as an urban congregation in a neighborhood whose population is in transition, and a rural congregation with an older population will have different concerns still.  Each of these congregations would face its own questions of prudent planning and sustainable stewardship.  It would be more helpful to explore commonalities among these distinct experiences than to take a high-altitude approach that pretends all congregations have the same problems.

There’s also a moral dimension to openly talking about money.  For example, more complicated measures of success can hide injustice.  Two “indicators of vitality” are “Annual giving per attendee” and “Change in annual giving per attendee over three years.”  These measures do not appear to be offset by expenses.  In addition, a congregation in a wealthy community will have an easier time excelling at these measures than a congregation in a poorer community.  This “objective” “vitality index” as defined appears to set rich against impoverished, a peculiar way of engaging in ministry with the poor.
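
A hypothetical illustration of the point; every figure below is invented:

    # Invented annual figures for two hypothetical congregations.
    congregations = {
        "wealthy suburb": {"giving": 250_000, "attendees": 100, "expenses": 240_000},
        "poorer urban":   {"giving":  60_000, "attendees": 100, "expenses":  40_000},
    }

    for name, c in congregations.items():
        per_attendee = c["giving"] / c["attendees"]  # the Report's indicator
        net = c["giving"] - c["expenses"]            # what the indicator ignores
        print(f"{name}: ${per_attendee:,.0f} per attendee, ${net:,} left after expenses")

The indicator ranks the wealthy congregation far ahead ($2,500 versus $600 per attendee) even though the poorer congregation ends the year with twice as much left for ministry ($20,000 versus $10,000).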

If an organization can’t talk about its money problems openly and honestly, I think this makes it more difficult for donors to trust it with money.  The United Methodist Church is not alone in dealing with fiscal problems.

Summary

It’s difficult to know Towers Watson’s role without seeing the relevant engagement letter (perhaps that engagement letter will be released to the entire denomination soon).  Here’s my current attempt to summarize the above:

The Steering Team states that it began with an “unflinching recognition of decades of decline in membership and attendance, less engagement and influence in communities than desired, aging constituencies and leaders, and financial strains” (page 6).  It also claims that it began with a “commitment to work from a foundation of facts rather than opinions by commissioning research based on extensive data-mining and objective methods for identifying relevant trends, behaviors, and issues” (page 7).

The Report seems to think that “objective” generally means:

  • Not subject to peer review;
  • No need to disclose how a factor analysis model was created or the factor loadings of the finished model;
  • Refusing to divulge the results of the model in particular cases (page 37).

The original Call to Action group affirmed this work should start “with no preconceived ideas of what will continue, be changed, or be ended” (page 11).  I’m not impressed that a group consisting of several managers (pages 12-13) imagined that “no preconceived ideas” meant hiring management consultants.

Anyway, let’s revisit the “Examined Methodology” section from part 1:

  1. The Steering Team selected a group of measurements that would be used as “proxies” to indicate vitality in a congregation.  These proxy measurements are also called “indicators of vitality”.   (As mentioned here in part 4, note that two of these indicators appear to measure income without deducting expenses.)
  2. Using the above measurements, Towers Watson (TW) created three groups of measurements, each of these groups known as a factor.  Each of these factors was used to assign a score to a congregation.  Since there are three factors, each congregation received three scores.  These three scores were used to create a vitality index.  To get to the heart of the matter: in order for a congregation to get the label “high vitality,” it had to rank in the top 25% for two of the three scores, and in the top 75% for the remaining score.  (As mentioned here in part 4, it appears that the vitality index’s primary task is identifying congregations that are “above average.”)
  3. TW calculated the vitality index for 32,228 UMC congregations.  This index classified 4,961 UMC congregations as “high vital.”  Page 69 suggests that “high vitality” is a synonym for “vital congregation.”  (As noted in part 3, the designation “vital congregation” is not shared at the same rate by congregations of different predominant ethnicities.  Failure to address this bias violates the Social Principles of the United Methodist Church.)
  4. TW used regression analysis to determine which of 127 measurements had the strongest positive relationship with the vitality index.  Those measurements that had the strongest positive correlation with the vitality index were labeled “vital drivers.”  (Part 2 points out that all we can reasonably conclude based on one study is that the “vital drivers” correlate with the Report’s definition of vitality.  Part 3 raises some additional questions regarding how useful these “vital drivers” actually are.)
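
As a sketch of step 4 under stated assumptions (random stand-in data and plain correlation; the Report does not disclose TW’s actual regression procedure), identifying “vital drivers” amounts to ranking the candidate measurements by their correlation with the vitality label:

    import numpy as np

    rng = np.random.default_rng(1)
    n, k = 32_228, 127                      # congregations, candidate measurements

    measurements = rng.normal(size=(n, k))  # random stand-ins for the 127 measures
    vital = rng.random(n) < 0.15            # stand-in labels, roughly 15% "vital"

    # Pearson correlation of each measurement with the 0/1 vitality label.
    x = measurements - measurements.mean(axis=0)
    y = vital - vital.mean()
    corrs = (x.T @ y) / (n * measurements.std(axis=0) * vital.std())

    # The "strongest drivers" are simply the top correlations.
    for i in np.argsort(corrs)[::-1][:5]:
        print(f"measurement {i:3d}: r = {corrs[i]:+.3f}")

Even with purely random stand-ins, five measurements still land at the top of the list.  Ranking guarantees that “strongest” correlates exist; whether they mean anything causally, or connect to the denomination-wide trends, is exactly what parts 2 and 3 question.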

The original Call to Action group “emphasized that organizational change is essential but that before redesigning existing structures, we must assess the operation, structures, and relationships of the entire system, including general agencies, the Council of Bishops, and the Annual Conferences.  The group urged the use of an outside, independent consultant to objectively guide that process” (page 12).

It’s not clear to me whether any independent consultant guided the next two points:

  • The Report’s rousing call to “increase the number of vital congregations” could be nothing more than a call to “increase the number of above-average congregations” (this is discussed above in part 4).
  • The Report fails to show how its methodology will impact the denomination-wide trends it’s meant to address (this is also discussed above in part 4).

The Afterword concludes this series.