
Sunday, September 29, 2019

After UC president Janet Napolitano announced her resignation, effective August 2020, the prospect of the search awoke a quotient of dread. "The Regents will pick," one Senate elder told me. "They won't listen to us. They don't care what we think." The idea here is that a small group of uber-regents will pop out another person whose remoteness from educational functions and faculty they will deem a virtue. This has become a national trend: secretive searches that look for a chief executive who will preside over the university rather than develop it from within, and who will reflect the interests of the governing board ahead of those of the university's multiple constituencies. Examples include presidential searches in South Carolina and Colorado this past spring. The conflict is also present at UC (see this post for national as well as local background).

But the UC Regents do have a formal search process.  Called Regents Policy 7101, it requires a number of steps.

The first is that the Board Chair forms a Special Committee composed of six Regents and other ex officio members (paragraph 1).  The membership of the new Special Committee is posted here.

The Chair of the Special Committee then "consults with the full Board of Regents at the beginning of the search for the purpose of reviewing the relevancy of the criteria to be considered and approved by the Board of Regents and discussing potential candidates" (paragraph 4). During the search, "all Regents will be invited to all meetings with all constituencies."  The Regents then make the final appointment, although Policy 7101 does not specify whether the full Board votes or how that vote proceeds.

The important features here are (1) the Board retains exclusive decision rights over the selection of the president and (2) every member of the Board has equal access to the meetings that constitute the search.  The Policy protects the rights of regents whom the Chair does not appoint to the Special Committee--the process is not to be controlled by the Board Chair's Special Committee or a small group of allied Regents--and affirms the Board's sovereignty over the search.

But there is also (3): in between the beginning and the end of the Policy comes a potentially huge and dynamic systemwide consultation process conjured in luxuriant description.

B. The Chair of the Special Committee will invite the Academic Council to appoint an Academic Advisory Committee, composed of not more than thirteen members, including the Chair of the Academic Council and at least one representative of each of the ten campuses, to assist the Special Committee in screening candidates.
C. The Special Committee will consult broadly with constituent groups of the University, including the Academic Advisory Committee appointed by the Academic Council, Chancellors, Laboratory Directors, Vice Presidents, students, staff, and alumni. To facilitate consultation, there shall be appointed advisory committees, each with no more than twelve members, of students, staff, and alumni. The student advisory committee shall be appointed by the Presidents of the graduate and undergraduate student associations and shall include at least one student from each campus. The staff advisory committee shall be appointed by the Chair of the Council of UC Staff Assemblies and shall include at least one staff member from each campus. The alumni advisory committee shall be appointed by the President of the Alumni Associations of the University of California and shall include at least one alumna or alumnus from each campus. Such consultation will be for the purpose of (1) reviewing the relevancy of the criteria approved by the Board of Regents and (2) presenting the nominee or nominees to members of the groups at the conclusion of the search.
In classic UC style, the executive decision-making body has parallel advisory groups that allow the appearance of consultation but which it can also ignore.  Hence the pessimism of some Senate elders. On the other hand, the advisory committees have powers of self-constitution and of independent activity.  The only stated rule is a cap on the number of members. The named advisory committees are:
  • Academic Advisory Committee
  • Student Advisory Committee
  • Staff Advisory Committee
  • Alumni Advisory Committee
The Policy puts no limitations on the activities of the committees.  How do these Advisory Committees (ACs) actually influence the Special Committee and the overall Board?

The standard theory is prestige: find the most prominent or trusted insider from each campus and create what management theorist Clayton Christensen likes to call a "heavyweight team."  In the case of the Academic Advisory Committee (AcAC), prestige theory assumes that the regents recognize academic (or senate service-based) prestige and would honor it by adapting their views.  Each heavyweight would be recognized as speaking authoritatively for the (leadership of the) particular campus.

Here's the problem: I know of no evidence that the last three presidential searches have worked this way; the evidence I do have suggests the opposite.  Business culture does not respect academic culture, the class gaps between professors and most regents are too wide, and the key feature of Christensen's heavyweights--decision rights--is stripped from the ACs. 

If this isn't enough to undermine AC leverage, there's also the structural weakness of the committee.  With the AcAC, each campus gets one person to represent its ladder faculty; this committee has a maximum of 13 people for a systemwide ladder faculty of over 11,000 (pdf p 94).   This faculty is divided among 10 campuses, between campuses and medical centers, across all the disciplines, which have diverse needs, and across racial groups, which also have diverse needs.  The idea of one person representing hundreds or thousands of their colleagues makes no epistemological (or political) sense.  It is also a recipe for an incoherent voice coming out of the AcAC, which Senate handpicking of membership can ease only at the price of lost diversity of views.

But the UC advisory committees could affect the presidential search by prompting campus discussions about it in the context of the immediate future of UC.  All of the Advisory Committees could set up a series of events in which they talk with their constituents on each of the ten campuses.  They listen to hopes and fears, gather ideas about leadership needs, hash them over, and then transmit the resulting comments, recommendations, or demands to the Special Committee.  One faculty member suggested a "UC Day" in which town halls happen across the UC system at the same time. The ACs would have to identify a deadline that would fall before the Special Committee's long-listing and short-listing of candidates, so that it (and the Board overall) could fully consider the input.  Each committee could do its work in about six weeks--two campus visits a week (if not all done at once), plus a week to debate, formulate, and forward recommendations.  The scope of the issue is limited and the reports should be short.

Another benefit of using the ACs as a public fulcrum: town halls and other public events would be newsworthy.  Whatever they think of professors, unions, and students, governing boards do care about institutional reputation, media coverage, and what they hear back from VIPs as a result of that.  They also care about the public debates and collective movements that shape public opinion and apply political pressure.  A recent example is the issue of food insecurity and student homelessness.  For years, the Board was told that UC financial aid took care of low-income students, and it took no action to mitigate student poverty.  Then, sometime after Bernie Sanders put free college on the political map in late 2015, the media started covering student hunger and homelessness.  The UC Regents responded by forming a Special Committee on Basic Needs in late 2018.  The actual results have a long way to go, but the point is that governing boards do respond to public discourse, eventually, academic discourses included.

In short, though UC governance has a top-down 19th-century structure, the Regents are most likely to listen to faculty, students, alums, and staff under three conditions: their Advisory Committees (A) represent a real constituency brought together by a consultation process that (B) speaks publicly about its views of the University in a way that (C) publicly (re)frames the University's needs for its next president.  The idea is to create an interest, a buzz, an excitement, a university-wide discussion over what we do and don't need, and, more importantly, to construct a constituency which then builds discourses that have an institutional and political existence.  There are no guarantees, but the wager is that the state's media would cover a process in which a university system holds a discussion about its current goals and consequent leadership needs on all ten campuses.   The process would upgrade the level of public discussion about California higher ed both inside and outside the University.

This process would also help locate potential presidents with one vital skill, which is gathering exactly this kind of information from their own institutional grassroots.  This might seem irrelevant to the president's main job of political lobbying, but it is not. Recent history shows that a president without deep knowledge of the university's daily life simply cannot make the statewide case for the University's public benefit and fiscal needs.  UC's advisory committees could set an example of the creation of this kind of profound, inspiring knowledge that the University needs in its next president. 

I do hope the current Academic Senate leadership, Chair Kum-Kum Bhavnani and Vice Chair Mary Gauvain, will rapidly set up a systemwide faculty fact-finding and deliberative process via the Academic Advisory Committee, details TBD. UC needs a new president with a deep understanding of the University's issues, people, and potential, and the ability to learn directly from them.



Wednesday, September 18, 2019

If you work at the University of California, your Office of the President has committed you to producing 200,000 additional degrees by 2030, on top of the one million degrees already expected.  This post is a plea to the hundreds of thousands of faculty, staff, and students who will implement this 20 percent increase.  It's a plea to analyze the material conditions this increase requires, and to work actively for the right ones.

Can this expansion really happen without creating degree-mill conditions, and making life even harder for more vulnerable students?  What new resources will it take to make 1.2 million degrees a great thing for students, for the state, and for UC research?  Materials for this week's Board of Regents meetings offer some clues.

The stakes are high because of the high cost of making unfulfillable promises about core social needs (health, education, housing, work).  Health care is Exhibit A: former Obama administration officials (like the Crooked Media crew) say they all saw the Affordable Care Act as a big step toward what many of them wanted but couldn't yet get from Congress, which was Medicare for All.  But the ACA's compromised design created widespread user disappointments.  These weakened political support both for the ACA and for Medicare for All, making it harder to protect the first and get the second.  Obama officials also spent a lot of time denying that they even wanted Medicare for All ("single payer," as it was often called), since they were afraid of the charges of socialized medicine that they of course got anyway.  So they buried their core framing principle (call it equal access to a human right, free of market allocation by ability to pay).  People did fight to keep their government backstop on health premiums, but ten years later Medicare for All is still a ways out of reach.

An analogy in higher ed is the effect of overcrowding on undergraduate satisfaction.   The University of California has been producing extra degrees by taking extra students, half or more without state payment, off and on for 15 years.  For the recent history, see the very useful Regents' item F11, Display 1. You might think this would earn UC budget chips it could cash in for state general funds later, but there's no evidence that this has ever happened. If anything, taking unfunded students teaches the state that UC can make do with less money per student, and perhaps even zero.  This is a bad precedent for the extra 200,000 students to come.

UC campuses see overcrowding as a tacit and necessary revenue strategy: even students who don't bring state money still pay tuition.  Item F11 notes the statistical costs: "the number of students per ladder-rank and equivalent faculty member, which has grown from fewer than 25 in 2004-05 to more than 28 in 2017-18" (p 5).  Student-to-core-staff ratios have also risen, from 11.5 students per staff member in 2007-08 to 15.6 ten years later.  Ratios of students to frontline staff are, in my experience, grossly higher: the 15.6 average may reflect, for example, RAs whose payroll is handled by a research center budget officer, while a departmental academic advisor's ratio may be 200:1, 500:1, or 1200:1.

The "user" cost appears in survey data.
Compared to 2006,
  • students are much less likely to strongly agree with the statement,  "Knowing what I know now, I would still choose to enroll at my UC campus";
  • a declining percentage of students are able to get into their first-choice major; and
  • students are less likely to know at least one professor well enough to ask for a letter of recommendation.
Whoever wrote Item F11 chose fundamental issues very well.  To what extent does UC enrollment depend on ignorance of UC realities?  Choosing a major is a cornerstone of the U.S. higher ed system: how many UC students are forced into a second or third choice? (Do read Zach Bleemer on the cost of being forced out of a first choice major.)  Finally, and rudely, is contact time with professors so much greater at public universities than at online services?  UC compares itself to private university peers in terms of research quality and faculty salaries. These three items are key private college strengths that UC has for many years been unable to match.  

What new funding is UC requesting to redress these issues? It's the standard modest proposal: tuition increases beaten back to the rate of inflation (there's a cohort-based tuition plan that's getting student attention).  A 3-4 percentish increase in state funds.  Throw in some non-resident tuition increases and some other little stuff.  Given current baselines in state funds and tuition revenues (page 14),  that would mean annual combined increases in chunks of $300 million, one chunk per normal year.

Here's a picture of the background:

Instructional expenditures are 80 percent of what they were in 2000-01 (actually less than that, because capital costs, bond interest, and pension contributions now come out of state general funds; averages are also much higher than undergrads will experience in most majors; but never mind that here).  This is true even though net tuition has doubled in that time (and tripled since 1990).  The state has also doubled the share of tuition that it picks up through Cal Grants, which is a political sore point.  Everybody's unhappy enough with the status quo to fail to give the university their strong support.

What would make people happy? Let's ballpark this.  Say average spending at 2000-01 levels would allow hiring more faculty and staff, easing restrictions on student access to faculty and to first-choice majors, and upgrading the overall learning experience. We'd need another $5,000 per student, all of it from the state to avoid tuition increases.  On 230,000 or so undergraduates, that means around $1.1 billion on top of the inflation-covering increases we get right now.   If you did it in one year to avoid various complications, that would be a one-year increase of about 30 percent in general funding, or ten times the typical increase of recent years.

And if we need to produce 200,000 extra degrees by 2030, that's an additional 20,000 degrees each year.  Funding another 20,000 students at the full cost of $25,000 each means another $500 million on top of our $1.1 billion.   These are crude, round numbers, but they are in the ballpark of the real costs of delivering better quality at the bigger scale to which UCOP has in fact committed us.
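To make these two estimates easy to check, here is a minimal back-of-envelope sketch in Python. All the inputs are the round numbers stated above (approximate headcounts and per-student figures, not official budget data):

```python
# Back-of-envelope UC funding arithmetic, using the post's round numbers.

undergrads = 230_000       # approximate UC undergraduate headcount
per_student_gap = 5_000    # extra $ per student to restore 2000-01 spending
restoration = undergrads * per_student_gap
print(f"Quality restoration: ${restoration / 1e9:.2f} billion")  # ~$1.15 billion

extra_students = 20_000    # additional students per year for 200,000 extra degrees
full_cost = 25_000         # assumed full annual cost per student
growth = extra_students * full_cost
print(f"Enrollment growth: ${growth / 1e9:.2f} billion")         # $0.50 billion

total = restoration + growth
print(f"Combined: ${total / 1e9:.2f} billion on top of inflation-covering increases")
```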

It's hard for us to imagine the state stepping up like this.  That thought turns the gaze to tuition, which was the regents' answer (7-10 percent annual tuition increases were in the 2005 Compact with the state) until Jerry Brown shut it down.  It's hard not to go to tuition when it's the solution built into the  political and economic ideology of America.  But the UCOP materials show why tuition hikes can't happen either.

Here's a graphic from the Special Committee on Basic Needs. It shows total cost of attending a UC as a bit over $35,000 a year, broken down by sources of funding.
A lot of grant money is being spent, and yet the dire truth is the dark grey band hovering over every income level.  Every student, including the poorest, has to come up with close to $10,000 of the roughly $35,000 total cost.  If your family makes $25,000 a year, you get a Cal Grant, a Pell Grant, and a UC Grant, and you still need to borrow or earn $10,000.  This is a key reason why the "high tuition/high aid" model isn't sustainable. (See Stage 5 in The Great Mistake for details, or this post, written back when the claim that "high aid" induced shortfalls for poor students caused angry cognitive dissonance.)

It's a key reason why UC's business model has created conditions for student non-success.   Students cover the $10,000 (plus, in many or most cases, much of the Expected Family Contribution) by working too much, living in bad housing or no housing, and not eating enough. One of my colleagues reports that at UC Santa Barbara, a relatively affluent UC, 48 percent of our undergraduates and 31 percent of our graduate students are food insecure.

Student work adds to time to degree, which conflicts with UCOP's degree plans.  Hunger and homelessness conflict with fundamental ethical principles and also with degree plans.   All three can be fixed with money: buying out the "student work and loan" portion for all undergrads would cost over $2 billion.  Doing it for the nearly half of UC undergrads who are Pell-eligible would cost over $1 billion.
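A quick sketch of where those buy-out figures come from, again using the post's round numbers (the $10,000 self-help gap, the roughly 230,000 undergraduate headcount, and the "nearly half" Pell-eligible share are the assumptions here):

```python
# Cost of buying out the "student work and loan" portion of total cost.

self_help_gap = 10_000    # $ each student must earn or borrow per year
undergrads = 230_000      # approximate UC undergraduate headcount
pell_share = 0.5          # "nearly half" of UC undergrads are Pell-eligible

all_undergrads = self_help_gap * undergrads
pell_only = all_undergrads * pell_share
print(f"All undergraduates: ${all_undergrads / 1e9:.2f} billion")  # ~$2.3 billion
print(f"Pell-eligible only: ${pell_only / 1e9:.2f} billion")       # ~$1.15 billion
```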

Unfortunately, the Basic Needs recommendations (pages 5-7) are about everything except money. They won't make any real difference.  The Berkeley Faculty Association has called out the most clueless--the "financial wellness programs" under the recommendation for "Improving Financial Literacy."  To suggest that low- or middle-income students can't easily find another $10,000 because they don't understand credit card interest is absurd and offensive.  If you are interested in that sort of thing, read the Basic Needs minutes for July 16th.  There's a lot of fuss starting on page 8 about the funding for the study of the issue and for some programs: the total seems to be about $15 million.  That's less than 0.5 percent of UC tuition revenues, which suggests an equally minuscule commitment to material solutions to the problem.

I know my tenure-track colleagues have mostly given up on the state.  Most of us have hunkered down until tuition hikes and non-resident students start flowing again.  We are more involved than ever in local revenue prospects like fundraising and special fee master's programs (or SSPs/SSDPs).  I don't think it's ethical to give up--we are already too complicit with the suffering of too many students. I also don't think it's feasible to give up--ten years after the financial crisis we still have no fiscal exit. It's not unlike health care, where staying in half-way ACA limbo forces our friends and neighbors to come up with $10,000 or $20,000 a month to alleviate immediate misery. None of this is necessary.  Fixing UC, CSU and the CCs would cost $66 per median taxpayer per year.  It's a choice between suffering and spending our common money.

UPDATE (September 21): The Regents Finance and Strategies Committee discussion on September 18th included this chart (at around 2'14").  UCOP's David Alcocer nicely explains to Regent Park that the figures are cumulative and reflect permanent revenues.  The way to get $20 million in permanent revenues is through an endowment twenty times that large (one that yields 5 percent returns per year), that is, a $400 million endowment.  I'd hasten to add what the regents are unlikely to infer--that fundraising is a very hard and inefficient way to generate large permanent revenue streams.
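For readers who want the endowment arithmetic spelled out, here is the calculation in miniature (the 5 percent payout rate is the assumption used in the discussion):

```python
# Permanent payout = endowment * payout rate, so endowment = payout / rate.

target_payout = 20_000_000   # $20 million in permanent annual revenue
payout_rate = 0.05           # assumed 5 percent annual return

endowment_needed = target_payout / payout_rate
print(f"Endowment required: ${endowment_needed / 1e6:.0f} million")  # $400 million
```

This is why fundraising is such an inefficient route to permanent revenue: every recurring dollar requires roughly twenty dollars of endowment raised up front.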


I estimated "chunks" of additional money per year to be $300 million (state and tuition), but it's closer to $250 million here.

Also worth pondering are Regent Cohen's remarks at 2'24", which reject UCOP's data showing that per-student funding has fallen.  He first says Cal Grant money was left out, and when he's corrected on that, he says he doesn't like picking 2000-01 as a baseline, claiming that year's funding was unsustainable.  He also claims that the lower per-student spending is the effect of "efficiencies," which he says Display 3 (above) disregards.  Cohen was Jerry Brown's director of finance, and seems to have convinced himself that no cuts were made to educational operations because all reductions were efficiencies. He of course offers no evidence for this, and in fact there isn't any. The UCOP representatives don't correct Cohen's assertion, and this disastrous misreading of UC campus budgeting is probably not unique to him on the board.

Cohen's interpretation is one that UCOP and the Senate should correct head-on, based on their deep experience with what has actually happened on the campuses over the past ten years, to say nothing of the effects of previous cut cycles.

New Mexico pioneers free college
Previously on Newsom and UC budgets



Saturday, September 7, 2019

The rule of infrastructure is that no one thinks about it until it breaks.  This week, I was at the annual conference of the International Society for Scientometrics and Informetrics when I bumped into an example of how the massive flow of bibliometric data can suddenly erupt into the middle of a university's life.

Washington University in St. Louis (WashU) has a new chancellor.  He hired a consulting firm to interview and survey the university community about its hopes, priorities, views of WashU's culture, and desired cuts.  The firm found a combination of hope and "restlessness," which the report summarized like this: members of the WashU community
want to see a unifying theme and shared priorities among the various units of the university. Naturally, stakeholders want to see WashU rise in global rankings and increase its prestige, but they want to see the institution accomplish this in a way that is distinctively WashU. They are passionate about WashU claiming its own niche. Striving to be “the Ivy of the Midwest” is not inspiring and lacks creativity. Students feel the university is hesitant to do things until peer institutions make similar moves.
"As always, the university needs to become more unique and yet more unified, and to stop following others while better following the rankings.

The report might have gone on to overcome these tensions by organizing its data into a set of proposals for novel research and teaching areas.  Maybe someone in the medical school suggested a cross-departmental initiative on racism as a socially-transmitted disease. Maybe the chairs of the Departments of Economics and of Women, Gender, and Sexuality Studies mentioned teaming up to reinvent public housing with the goal of freeing quality of life from asset ownership.  These kinds of ideas regularly occur on university campuses, but are rarely funded.

New proposals are not, however, what the report has to offer. It generates lists of the broad subject areas that every other university is also pursuing (pp 4-5). It embeds them in this finding:
The other bold call to action that emerged from a subset of interviewees is internally focused. This subset tended to include new faculty and staff . . . and Medical School faculty and senior staff (who perceive their [medical] campus enforces notably higher productivity standards). These interviewees are alarmed at what they perceive as the pervading culture among faculty on the Danforth Campus [the main university]. They hope the new administration has the courage to tackle faculty productivity and accountability. They are frustrated by perceived deficiencies in research productivity, scholarship expectations and teaching quality. A frequently cited statistic was the sub-100 ranking of WashU research funding if the Medical School is excluded. Those frustrated with the Danforth faculty feel department chairs don’t hold their faculty accountable. There is too much “complacency” and acceptance of “mediocrity.” “There is not a culture of excellence.” . . . Interviewees recognize that rooting out this issue will be controversial and fraught with risk. However, they believe it stands as the primary obstacle to elevating the Danforth Campus –and the university as a whole –to elite status. 
Abstracting the key elements yields this story: one group has a pre-existing negative belief about another group.  They think the other group is inferior to them. They also believe that they are damaged by the other's inferiority.  They offer a single piece of evidence to justify this sense of superiority. They also say the other group's leaders are solely responsible for the problem.  They have a theory of why: chairs apply insufficient discipline. And they attribute the group's alleged inferiority to every member of that group.

Stripped down like this, this part of the report is boilerplate bigotry.  Every intergroup hostility offers some self-evident "proof" of its validity.  In academia's current metrics culture, the numerical quality of an indicator supposedly cleanses it of prejudice.  Lower research expenditures are just a fact, like the win-loss records that create innocent rankings like baseball standings.  So, in our culture, the med school can look down on the Danforth Campus with impunity because it has an apparently objective number--relative quantities of research funding.

In reality, this is a junk metric.  I'll count some of the ways: 
  • the belief precedes the indicator, which is cherry-picked from what would be a massive set of possible indicators that would inevitably tell a complicated story.  (A better consultant would have conducted actual institutional research, and would never have let surveyed opinions float free of a meaningful empirical base.) 
  • the indicator is bound to Theory X, the a priori view that even advanced professionals “need to be coerced, controlled, directed, and threatened with punishment to get them to put forward adequate effort" (we've discussed Theory X vs. Theory Y here and here). 
  • quantity is equated with quality. This doesn't work--unless there's a sophisticated hermeneutical project that goes with it.  It doesn't work with citation counts (which assume the best article is the one with the most citations from the most-cited journals), and its use has been widely critiqued in the scientometric literature (just one example). Quantity-is-quality really doesn't work with money, when you equate the best fields with the most expensive ones. 
  • the metric is caused by a feature of the environment rather than solely by the source under study. The life sciences get about 57 percent of all federal research funding, and the lion's share of that runs through NIH rather than NSF, meaning through health sciences more than academic biology. Thus every university with a medical school gets the bulk of its R&D funding through that medical school; note medical campuses dominating R&D expenditure rankings, and see STEM powerhouse UC Berkeley place behind the University of Texas's cancer center. (Hangdog WashU is basically tied with Berkeley.)
  • the report arbitrarily assumes only one of multiple interpretations of the metric. One alternative is (1) that the data were not disaggregated to compare similar departments only, rather than comparing the apple of a medical school to the orange of a general campus (with departments of music, art history, political science, etc.).  Another is (2) that the funding gap reflects the broader mission of arts and sciences departments, in which faculty are paid to spend most of their time on teaching, advising, and mentoring.  Another is (3) that the funding gap reflects the absurd underfunding of most non-medical research, from environmental science to sociology.  Those are just three of many.
  • the metric divides people or groups by replacing relationships with a hierarchy.
This last one is a subtle but pervasive effect that we don't understand very well.  Rankings make the majority of a group feel bad that they are not at the top. How much damage does this do to research, if we reject Theory X and see research as a cooperative endeavor depending on circuits of intelligence?  Professions depend on a sense of complementarity among different types of people and expertise--she's really good at running the regressions, he's really good with specifications of appropriate theory, etc. The process of "ordinalizing" difference, as the sociologist Marion Fourcade puts it, discredits or demotes one of the parties and can thus spoil professional interaction.  Difference becomes inferiority.  In other words, when used like this, metrics weaken professional ties in an attempt to manage efficiency.

So if Washington University takes these med school claims literally as fact, and doesn't scramble to see them as expressions of a cultural divide that must be fixed, the faulty metric will have killed its planning process.

Let's take a step back from WashU.  The passage I've cited does in fact violate core principles of professional bibliometricians. They reject these kinds of "simple, objective" numbers and their use as a case-closed argument.  Recent statements of principle all demand that numbers be used only in the context of qualitative professional judgment: see DORA, the Metric Tide, Leiden, and the draft of the new Hong Kong manifesto. It's also wrong to assume that STEM professional organizations are all on board with quantitative research performance management. Referring to the basic rationale for bibliometrics--"that citation statistics are inherently more accurate because they substitute simple numbers for complex judgements"--it was the International Mathematical Union that in 2008 called this view "unfounded" in the course of a sweeping critique of the statistical methods behind the Journal Impact Factor, the h-index, and other popular performance indicators. These and others have been widely debated and at least partially discredited, as in this graphic from the Leiden Manifesto:

The Leiden and Hong Kong statements demand that those evaluated be able to "verify data and analysis."  This means that use, methods, goals, and results should be reviewable and also rejectable where flaws are found.  All bibliometricians insist that metrics not be taken from one discipline and applied to another, since meaningful patterns vary from field to field.  Most agree that arts and humanities fields are disserved by them. In the U.S., new expectations for open data and strictly contextualized use were created by the Rutgers University faculty review of the then-secret use of Academic Analytics.
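To see how much a single indicator compresses, consider the h-index, one of the measures the IMU critique targets. The definition is standard (the largest h such that an author has h papers with at least h citations each); the citation counts below are hypothetical, chosen to show two very different records collapsing to the same score:

```python
def h_index(citations: list[int]) -> int:
    """Largest h such that there are h papers with >= h citations each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical careers: three landmark papers vs. three modest ones.
print(h_index([100, 90, 80, 3, 3, 3]))  # 3
print(h_index([3, 3, 3]))               # 3
```

Whatever complex judgment distinguishes these two records is exactly what the single number throws away.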
The best practitioners know that the problems with metrics are deep. In a Nature article last May,  Paul Wouters, one of the authors of the Leiden manifesto, wrote with colleagues,
    Indicators, once adopted for any type of evaluation, have a tendency to warp practice. Destructive ‘thinking with indicators’ (that is, choosing research questions that are likely to generate favourable metrics, rather than selecting topics for interest and importance) is becoming a driving force of research activities themselves. It discourages work that will not count towards that indicator. Incentives to optimize a single indicator can distort how research is planned, executed and communicated.
In short, indicators founder on Goodhart's Law (308), which I paraphrase as, "a measure used as a target is no longer a good measure."  Thus the Leiden manifesto supports the (indeed interesting and valuable) information contained in numerical indicators while saying they should be subordinated to collective practices of judgment.
     
Given widespread reform efforts, including his own, why, in May, did Wouters lead-author a call in Nature to fix bad journal metrics with still more metrics, this time measuring at least five sub-components of every article?  Why does Michael Power's dark 1990s prediction in The Audit Society still hold: failed audit creates more audit?  Why are comments like those in the WashU report so common, and so powerful in academic policy? Why is there a large academic market for services like Academic Analytics, which sells ranking dashboards to administrators precisely so they can skip the contextual detail that would make them valid? Why is the WashU use of one junk number so typical, normal, common, invalid, and silencing? What do we do, given that we can't criticize one misuse at a time, particularly when there's so much interest in discrediting an opposition with them?

One clue emerged in a book I reviewed last year, Jerry Z. Muller's The Tyranny of Metrics. Muller is a historian, and an outsider to the evaluation and assessment practices he reviewed.  He decided to look at how indicators are used in a range of sectors--medicine, K-12 education, the corporation, the military, etc.--and to ask whether there's evidence that metrics cause improvements in quality. Muller generates a list of 11 problems with metrics that most practitioners would agree with.  Most importantly, while these problems emerged when metrics were used for audit and accountability, they were less of a problem when metrics were used by professionals within their own communities.  Here are a couple of paragraphs from that review:
    Muller’s only causal success story, in which metrics directly improve outcomes, is the Geisinger Health System, which uses metrics internally for improvement. There ‘the metrics of performance are neither imposed nor evaluated from above by administrators devoid of firsthand knowledge. They are based on collaboration and peer review’. He quotes the CEO at the time claiming, ‘Our new care pathways were effective because they were led by physicians, enabled by real-time data-based feedback, and primarily focused on improving the quality of patient care’ (111). At Geisinger, physicians ‘who actually work in the service lines themselves chose which care processes to change’.
    If we extrapolate from this example, it appears that metrics causally improve performance only when they are (1) routed through professional (not managerial) expertise, as (2) applied by people directly involved in delivering the service, who are (3) guided by nonpecuniary motivation (to improve patient benefits rather than receive a salary bonus) and (4) possessed of enough autonomy to steer treatment with professional judgment.
I'd be interested to know how the bibliometrics community would feel about limiting the use of metrics to internal information about performance under these four conditions.  Such a limit would certainly have helped in the WashU case, since the metric of research expenditures could be discussed only within a community of common practice, and not applied by one group (the med school) to another (the Danforth Campus) in demanding accountability.

Another historian, John Carson, gave a keynote address at the ISSI conference that discussed the historical relation between quantification and scientific racism, calling for "epistemic modesty" in our application of these techniques.  I agree.  Though I can't discuss it here, I also hope we can confront our atavistic association of quality with ranking, and of brilliance with a small elite.  The scale of the problems we face demands it.

In the meantime, don't let someone use a metric you know is junk until it isn't.