
Wednesday, September 18, 2019
If you work at the University of California, your Office of the President has committed you to producing 200,000 additional degrees by 2030, on top of the one million degrees already expected.  This post is a plea to the hundreds of thousands of faculty, staff, and students who will implement this 20 percent increase.  It's a plea to analyze the material conditions this increase requires, and to work actively for the right ones.

Can this expansion really happen without creating degree-mill conditions, and making life even harder for more vulnerable students?  What new resources will it take to make 1.2 million degrees a great thing for students, for the state, and for UC research?  Materials for this week's Board of Regents meetings offer some clues.

The stakes are high because of the high cost of making unfulfillable promises about core social needs (health, education, housing, work).  Health care is Exhibit A: former Obama administration officials (like the Crooked Media crew) say they all saw the Affordable Care Act as a big step towards what many of them wanted but couldn't yet get from Congress, which was Medicare for All.  But the ACA's compromised design created widespread user disappointments.  These weakened political support both for the ACA and for Medicare for All, making it harder to protect the first and get the second.  Obama officials also spent a lot of time denying that they even wanted Medicare for All ("single payer," as it was often called), since they were afraid of the charges of socialized medicine that they of course got anyway.  In the process they buried their core framing principle: equal access to a human right, free of market allocation by ability to pay.  People did fight to keep their government backstop on health premiums, but ten years later Medicare for All is still a ways out of reach.

An analogy in higher ed is the effect of overcrowding on undergraduate satisfaction.   The University of California has been producing extra degrees by taking extra students, half or more without state payment, off and on for 15 years.  For the recent history, see the very useful Regents' item F11, Display 1. You might think this would earn UC budget chips it could cash in for state general funds later, but there's no evidence that this has ever happened. If anything, taking unfunded students teaches the state UC can make do with less money per student, and perhaps even zero.  This is a bad precedent for the extra 200,000 students to come.

UC campuses see overcrowding as a tacit and necessary revenue strategy: even those students who don't bring state money still pay tuition.  Item F11 notes the statistical costs: "the number of students per ladder-rank and equivalent faculty member, which has grown from fewer than 25 in 2004-05 to more than 28 in 2017-18" (p 5).  Student-to-core-staff ratios have also risen, from 11.5 students per staff member in 2007-08 to 15.6 ten years later.  Ratios of students to frontline staff are, in my experience, grossly higher: the 15.6 average may reflect, say, the RAs whose payroll is handled by a research center budget officer.  A departmental academic advisor's ratio may be 200:1, 500:1, or 1200:1.

The "user" cost appears in survey data.
Compared to 2006,
  • students are much less likely to strongly agree with the statement,  "Knowing what I know now, I would still choose to enroll at my UC campus";
  • a declining percentage of students are able to get into their first-choice major; and
  • students are less likely to know at least one professor well enough to ask for a letter of recommendation.
Whoever wrote Item F11 chose fundamental issues very well.  To what extent does UC enrollment depend on ignorance of UC realities?  Choosing a major is a cornerstone of the U.S. higher ed system: how many UC students are forced into a second or third choice? (Do read Zach Bleemer on the cost of being forced out of a first choice major.)  Finally, and rudely, is contact time with professors so much greater at public universities than at online services?  UC compares itself to private university peers in terms of research quality and faculty salaries. These three items are key private college strengths that UC has for many years been unable to match.  

What new funding is UC requesting to redress these issues? It's the standard modest proposal: tuition increases beaten back to the rate of inflation (there's a cohort-based tuition plan that's getting student attention).  A 3-4 percentish increase in state funds.  Throw in some non-resident tuition increases and some other little stuff.  Given current baselines in state funds and tuition revenues (page 14),  that would mean annual combined increases in chunks of $300 million, one chunk per normal year.

Here's a picture of the background:

Instructional expenditures are 80 percent of what they were in 2000-01 (actually less than that, because capital costs, bond interest, and pension contributions now come out of state general funds; averages are also much higher than undergrads will experience in most majors; but never mind that here).  This is true even though net tuition has doubled in that time (and tripled since 1990).  The state has also doubled the share of tuition that it picks up through Cal Grants, which is a political sore point.  Everybody's unhappy enough with the status quo to fail to give the university their strong support.

What would make people happy? Let's ballpark this.  Say restoring average spending to 2000-01 levels would allow hiring more faculty and staff, easing restrictions on student access to faculty and to first-choice majors, and upgrading the overall learning experience. We'd need another $5,000 per student, all of it from the state to avoid tuition increases.  Across 230,000 or so undergraduates, that means around $1.1 billion on top of the inflation-covering increases we get right now.   If you did it in one year to avoid various complications, that would be a one-year increase of about 30 percent in general funding, or ten times the typical increase of recent years.

And if we need to produce 200,000 extra degrees, that's roughly an additional 20,000 students each year.  Funding another 20,000 students at the full cost of $25,000 each means another $500 million on top of our $1.1 billion.   These are crude, round numbers, but they are in the ballpark of the real costs of delivering better quality at the bigger scale to which UCOP has in fact committed us.
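To make the arithmetic easy to check, here is a minimal back-of-the-envelope sketch. Every figure in it is one of the round numbers above; none of them is an official UC budget number.

```python
# Back-of-the-envelope check of the ballpark figures above.
# All inputs are the post's round numbers, not official UC budget data.

undergrads = 230_000              # roughly current UC undergraduate enrollment
restore_per_student = 5_000       # extra dollars per student to get back to 2000-01 spending levels
quality_restoration = undergrads * restore_per_student            # ~$1.15 billion

new_students_per_year = 20_000    # growth implied by 200,000 extra degrees by 2030
full_cost_per_student = 25_000    # assumed full educational cost per new student
growth_funding = new_students_per_year * full_cost_per_student    # ~$0.5 billion

print(f"quality restoration: ${quality_restoration / 1e9:.2f} billion per year")
print(f"enrollment growth:   ${growth_funding / 1e9:.2f} billion per year")
print(f"combined:            ${(quality_restoration + growth_funding) / 1e9:.2f} billion, on top of inflation")
```

Combined, the two pieces come to roughly $1.6 billion a year in new money, which is the scale against which the $300 million "chunks" above should be read.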

It's hard for us to imagine the state stepping up like this.  That thought turns the gaze to tuition, which was the regents' answer (7-10 percent annual tuition increases were in the 2005 Compact with the state) until Jerry Brown shut it down.  It's hard not to go to tuition when it's the solution built into the  political and economic ideology of America.  But the UCOP materials show why tuition hikes can't happen either.

Here's a graphic from the Special Committee on Basic Needs. It shows total cost of attending a UC as a bit over $35,000 a year, broken down by sources of funding.
A lot of grant money is being spent, and yet the dire truth is the dark grey band hovering over every income level.  Every student, including the poorest, has to come up with close to $10,000 of the roughly $35,000 total cost.  If your family makes $25,000 a year, you get a Cal Grant, a Pell Grant, and a UC Grant, and you still need to borrow or earn $10,000.  This is a key reason why the "high tuition/high aid" model isn't sustainable. (See Stage 5 in The Great Mistake for details, or this post, written back when the claim that "high aid" induced shortfalls for poor students caused angry cognitive dissonance.)

It's a key reason why UC's business model has created conditions for student non-success.   Students cover the $10,000 (plus, in many or most cases, much of the Expected Family Contribution) by working too much, living in bad housing or no housing, and not eating enough. One of my colleagues reports that at UC Santa Barbara, a relatively affluent UC, 48 percent of our undergraduates and 31 percent of our graduate students are food insecure.

Student work adds to time to degree, which conflicts with UCOP's degree plans.  Hunger and homelessness conflict with fundamental ethical principles and also with degree plans.   All three can be fixed with money: buying out the "student work and loan" portion for all undergrads would cost over $2 billion.  Doing it for the nearly half of UC undergrads who are Pell-eligible would cost over $1 billion.
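The same kind of rough arithmetic produces the buyout figures. The 48 percent Pell-eligible share below is my own stand-in for "nearly half"; the other inputs are the post's round numbers.

```python
# Rough cost of buying out the ~$10,000 "student work and loan" share of cost of attendance.
# Inputs are the post's round numbers; the Pell share is an assumed stand-in for "nearly half."

self_help_gap = 10_000        # what every student must earn or borrow, per year
undergrads = 230_000          # approximate UC undergraduate enrollment
pell_share = 0.48             # assumed share of undergrads who are Pell-eligible

all_undergrads = self_help_gap * undergrads          # ~$2.3 billion
pell_eligible_only = all_undergrads * pell_share     # ~$1.1 billion

print(f"buyout for all undergrads:     ${all_undergrads / 1e9:.1f} billion")
print(f"buyout for Pell-eligible only: ${pell_eligible_only / 1e9:.1f} billion")
```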

Unfortunately the Basic Needs recommendations (pages 5-7) are about everything except money. They won't make any real difference.  The Berkeley Faculty Association has called out the most clueless--the "financial wellness programs" under the recommendation for "Improving Financial Literacy."  To suggest that low- or middle-income students can't easily find another $10,000 because they don't understand credit card interest is absurd and offensive.  If you are interested in that sort of thing, read the Basic Needs minutes for July 16th.  There's a lot of fuss, starting on page 8, about the funding for the study of the issue and for some programs: the total seems to be about $15 million.  That's less than 0.5 percent of UC tuition revenues, which suggests an equally minuscule commitment to material solutions to the problem.

I know my tenure-track colleagues have mostly given up on the state.  Most of us have hunkered down until tuition hikes and non-resident students start flowing again.  We are more involved than ever in local revenue prospects like fundraising and special fee master's programs (or SSPs/SSDPs).  I don't think it's ethical to give up--we are already too complicit with the suffering of too many students. I also don't think it's feasible to give up--ten years after the financial crisis we still have no fiscal exit. It's not unlike health care, where staying in half-way ACA limbo forces our friends and neighbors to come up with $10,000 or $20,000 a month to alleviate immediate misery. None of this is necessary.  Fixing UC, CSU and the CCs would cost $66 per median taxpayer per year.  It's a choice between suffering and spending our common money.

New Mexico pioneers free college
Previously on Newsom and UC budgets
Photo credit



Saturday, September 7, 2019
The rule of infrastructure is that no one thinks about it until it breaks.  This week, I was at the annual conference of the International Society for Scientometrics and Informetrics when I bumped into an example of how the massive flow of bibliometric data can suddenly erupt into the middle of a university's life.  Washington University in St. Louis (WashU) has a new chancellor.  He hired a consulting firm to interview and survey the university community about its hopes, priorities, views of WashU's culture, and desired cuts.  The firm found a combination of hope and "restlessness," which the report summarized like this: members of the WashU community
want to see a unifying theme and shared priorities among the various units of the university. Naturally, stakeholders want to see WashU rise in global rankings and increase its prestige, but they want to see the institution accomplish this in a way that is distinctively WashU. They are passionate about WashU claiming its own niche. Striving to be “the Ivy of the Midwest” is not inspiring and lacks creativity. Students feel the university is hesitant to do things until peer institutions make similar moves.
"As always, the university needs to become more unique and yet more unified, and to stop following others while better following the rankings.

The report might have gone on to overcome these tensions by organizing its data into a set of proposals for novel research and teaching areas.  Maybe someone in the medical school suggested a cross-departmental initiative on racism as a socially-transmitted disease. Maybe the chairs of the Departments of Economics and of Women, Gender, and Sexuality Studies mentioned teaming up to reinvent public housing with the goal of freeing quality of life from asset ownership.  These kinds of ideas regularly occur on university campuses, but are rarely funded.

New proposals are not what the report has to offer. It generates lists of the broad subject areas that every other university is also pursuing (pp 4-5). It embeds them in this finding:
The other bold call to action that emerged from a subset of interviewees is internally focused. This subset tended to include new faculty and staff . . . and Medical School faculty and senior staff (who perceive their [medical] campus enforces notably higher productivity standards). These interviewees are alarmed at what they perceive as the pervading culture among faculty on the Danforth Campus [the main university]. They hope the new administration has the courage to tackle faculty productivity and accountability. They are frustrated by perceived deficiencies in research productivity, scholarship expectations and teaching quality. A frequently cited statistic was the sub-100 ranking of WashU research funding if the Medical School is excluded. Those frustrated with the Danforth faculty feel department chairs don’t hold their faculty accountable. There is too much “complacency” and acceptance of “mediocrity.” “There is not a culture of excellence.” . . . Interviewees recognize that rooting out this issue will be controversial and fraught with risk. However, they believe it stands as the primary obstacle to elevating the Danforth Campus--and the university as a whole--to elite status.
Abstracting key elements gets this story: one group has a pre-existing negative belief about another group.  They think the other group is inferior to them. They also believe that they are damaged by the other's inferiority.  They offer a single piece of evidence to justify this sense of superiority. They also say the other group's leaders are solely responsible for the problem.  They have a theory of why: chairs apply insufficient discipline. They attribute the group's alleged inferiority to every member of that group.

Stripped down like this, the passage is boilerplate bigotry.  Every intergroup hostility offers some self-evident "proof" of its validity.  In academia's current metrics culture, the numerical quality of an indicator supposedly cleanses it of prejudice.  Lower research expenditures are just a fact, like the numbers of wins and losses that create innocent rankings like baseball standings.  So, in our culture, the med school can look down on the Danforth Campus with impunity because it has an apparently objective number--relative quantities of research funding.

In reality, this is a junk metric.  I'll count some of the ways: 
  • the belief precedes the indicator, which is cherry-picked from what would be a massive set of possible indicators that would inevitably tell a complicated story.  (A better consultant would have conducted actual institutional research, and would never have let surveyed opinions float free of a meaningful empirical base.) 
  • the indicator is bound to Theory X, the a priori view that even advanced professionals “need to be coerced, controlled, directed, and threatened with punishment to get them to put forward adequate effort" (we've discussed Theory X vs. Theory Y here and here). 
  • quantity is equated with quality. This doesn't work--unless there's a sophisticated hermeneutical project that goes with it.  It doesn't work with citation counts (which assume the best article is the one with the most citations from the most cited journals), and its use has been widely critiqued in the scientometric literature (just one example; a toy illustration follows this list). Quantity-is-quality really doesn't work with money, when you equate the best fields with the most expensive ones. 
  • the metric is caused by a feature of the environment rather than solely by the source under study. The life sciences get about 57 percent of all federal research funding, and the lion's share of that runs through NIH rather than NSF, meaning through health sciences more than academic biology. Thus every university with a medical school gets the bulk of its R&D funding through that medical school; note medical campuses dominating R&D expenditure rankings, and see STEM powerhouse UC Berkeley place behind the University of Texas's cancer center. (Hangdog WashU is basically tied with Berkeley.)
  • the report arbitrarily assumes only one of multiple interpretations of the metric. One alternative is (1) that the data were not disaggregated to compare similar departments, rather than setting the apple of a medical school against the orange of a general campus (with departments of music, art history, political science, etc.).  Another is (2) that the funding gap reflects the broader mission of arts and sciences departments, in which faculty are paid to spend most of their time on teaching, advising, and mentoring.  Another is (3) that the funding gap reflects the absurd underfunding of most non-medical research, from environmental science to sociology.  That's just three of many.
  • the metric divides people or groups by replacing relationships with a hierarchy.  This last effect is subtle but pervasive, and we don't understand it very well.  Rankings make the majority of a group feel bad that they are not at the top. How much damage does this do to research, if we reject Theory X and see research as a cooperative endeavor depending on circuits of intelligence?  Professions depend on a sense of complementarity among different types of people and expertise--she's really good at running the regressions, he's really good with specifications of appropriate theory, etc. The process of "ordinalizing" difference, as the sociologist Marion Fourcade puts it, discredits or demotes one of the parties and can thus spoil professional interaction.  Difference becomes inferiority.  In other words, when used like this, metrics weaken professional ties in an attempt to manage efficiency.
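To see how much a single "simple, objective" number compresses, here is a toy sketch of two citation indicators applied to two hypothetical researchers. The citation counts are invented for illustration; they are not drawn from any real record.

```python
# Toy illustration: the same two (invented) citation records ranked by two common indicators.

def h_index(citations):
    """Largest h such that the author has h papers with at least h citations each."""
    ranked = sorted(citations, reverse=True)
    return sum(1 for rank, cites in enumerate(ranked, start=1) if cites >= rank)

researcher_a = [250, 4, 3, 1, 0]              # one landmark paper, little else
researcher_b = [15, 14, 12, 11, 10, 9, 9, 8]  # steady, moderately cited output

print("total citations:", sum(researcher_a), "vs", sum(researcher_b))      # A "wins": 258 vs 88
print("h-index:        ", h_index(researcher_a), "vs", h_index(researcher_b))  # B "wins": 3 vs 8
```

Which researcher is "better" depends entirely on which number you pick, before any judgment about the work itself has been made.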

So if Washington University takes these med school claims literally as fact, and doesn't scramble to see them as expressions of a cultural divide that must be fixed, the faulty metric just killed their planning process.

Let's take a step back from WashU.  The passage I've cited does in fact violate core principles of professional bibliometricians. They reject these kinds of "simple, objective" numbers and their use as a case-closed argument.  Recent statements of principle all demand that numbers be used only in the context of qualitative professional judgment: see DORA, Metric Tide, Leiden, and the draft of the new Hong Kong manifesto. It's also wrong to assume that STEM professional organizations are all on board with quantitative research performance management. Referring to the basic rationale for bibliometrics--"that citation statistics are inherently more accurate because they substitute simple numbers for complex judgements"--it was the International Mathematical Union that in 2008 called this view "unfounded" in the course of a sweeping critique of the statistical methods behind the Journal Impact Factor, the h-index, and other popular performance indicators. These and others have been widely debated and at least partially discredited, as in this graphic from the Leiden Manifesto:

The Leiden and Hong Kong statements demand that those evaluated be able to "verify data and analysis."  This means that uses, methods, goals, and results should be reviewable, and also rejectable where flaws are found.  All bibliometricians insist that metrics not be taken from one discipline and applied to another, since meaningful patterns vary from field to field.  Most agree that arts and humanities fields are disserved by them. In the U.S., new expectations for open data and strictly contextualized use were created by the Rutgers University faculty review of the then-secret use of Academic Analytics.
The best practitioners know that the problems with metrics are deep. In a Nature article last May,  Paul Wouters, one of the authors of the Leiden manifesto, wrote with colleagues,
    Indicators, once adopted for any type of evaluation, have a tendency to warp practice. Destructive ‘thinking with indicators’ (that is, choosing research questions that are likely to generate favourable metrics, rather than selecting topics for interest and importance) is becoming a driving force of research activities themselves. It discourages work that will not count towards that indicator. Incentives to optimize a single indicator can distort how research is planned, executed and communicated.
In short, indicators founder on Goodhart's Law (308), which I paraphrase as, "a measure used as a target is no longer a good measure."  Thus the Leiden manifesto supports the (indeed interesting and valuable) information contained in numerical indicators while saying they should be subordinated to collective practices of judgment.

Given widespread reform efforts, including his own, why, in May, did Wouters lead-author a call in Nature to fix bad journal metrics with still more metrics, this time measuring at least five sub-components of every article?  Why does Michael Power's dark 1990s prediction in The Audit Society still hold: failed audit creates more audit?  Why are comments like those in the WashU report so common, and so powerful in academic policy? Why is there a large academic market for services like Academic Analytics, which sells ranking dashboards to administrators precisely so they can skip the contextual detail that would make them valid? Why is the WashU use of one junk number so typical, normal, common, invalid, and silencing? What do we do, given that we can't fix this by criticizing one misuse at a time, particularly when there's so much interest in using junk metrics to discredit an opposition?

One clue emerged in a book I reviewed last year, Jerry Z. Muller's The Tyranny of Metrics. Muller is an historian, and an outsider to the evaluation and assessment practices he reviewed.  He decided to look at how indicators are used in a range of sectors--medicine, K-12 education, the corporation, the military, etc.--and to ask whether there's evidence that metrics cause improvements in quality. Muller generates a list of 11 problems with metrics that most practitioners would agree with.  Most importantly, these problems emerged when metrics were used for audit and accountability; they were less of a problem when metrics were used by professionals within their own communities.  Here are a couple of paragraphs from that review:
    Muller’s only causal success story, in which metrics directly improve outcomes, is the Geisinger Health System, which uses metrics internally for improvement. There ‘the metrics of performance are neither imposed nor evaluated from above by administrators devoid of firsthand knowledge. They are based on collaboration and peer review’. He quotes the CEO at the time claiming, ‘Our new care pathways were effective because they were led by physicians, enabled by real-time data-based feedback, and primarily focused on improving the quality of patient care’ (111). At Geisinger, physicians ‘who actually work in the service lines themselves chose which care processes to change’.
    If we extrapolate from this example, it appears that metrics causally improve performance only when they are (1) routed through professional (not managerial) expertise, as (2) applied by people directly involved in delivering the service, who are (3) guided by nonpecuniary motivation (to improve patient benefits rather than receive a salary bonus) and (4) possessed of enough autonomy to steer treatment with professional judgment.
I'd be interested to know how the bibliometrics community would feel about limiting the use of metrics to internal information about performance with these four conditions.  Such a limit would certainly have helped the WashU case, since the metric of research expenditures could be discussed only within a community of common practice, and not applied by one (med school) group to another (Danforth Campus) in demanding accountability.

Another historian, John Carson, gave a keynote address at the ISSI conference that discussed the historical relation between quantification and scientific racism, calling for "epistemic modesty" in our application of these techniques.  I agree.  Though I can't discuss it here, I also hope we can confront our atavistic association of quality with ranking, and of brilliance with a small elite.  The scale of the problems we face demands it.

In the meantime, don't let someone use a metric you know is junk until it isn't.