
Tuesday, May 13, 2025

San Francisco Bay on October 20, 2017   
Deadlines collided last week, and I wrote three pieces with a common preoccupation, "Reversing Helplessness," which is the title of my ISRF Director's Note for May. I'll come back to this issue over the next few days.


On Friday I was interviewed by email by Kathryn Palmer at Inside Higher Ed. The topic is the title of her ensuing article, Can Scientific Research Survive Without Federal Funding?  She quotes me a couple of times, and also covers the small but possibly growing trend of universities using their institutional funds as “bridge funds” for faculty who have had grants arbitrarily and capriciously frozen while still having staff to pay, cells to keep alive, and the like. 


I’ll discuss bridge funding in my next post: it’s a patch with problems. 


Below I’ve posted my more general answers to her questions about university research. This frames what I expect to be a set of posts on the subject.  In California, the UC Regents are meeting and Governor Gavin Newsom will issue his May revision of his already bad university budget. Meanwhile: 


Kathryn Palmer (KP): Universities are taking from endowments and other reserve funding to fill gaps created by cuts to federal research funding. But are these long-term solutions to continuing research at the same levels as they've been accustomed to?


CN: A few universities could replace federal funds for a few years, but this won’t work for the research ecosystem that is the source of the strength of US research.


KP: Trump and his allies have suggested private industry could take over research. Or perhaps philanthropy (see this article about the Bill Gates Foundation's plan to spend 200 billion before it closes in 2045). Is that plausible? Why or why not? What are the pros and cons of private entities funding scientific research vs. higher education institutions? 


CN: That won’t work. Most corporate research is development, not basic research.  Firms take fundamental concepts, materials, discoveries mostly uncovered in universities and apply them in product lines that they think have high future value in the marketplace. Usually this needs to be a near-term future—a couple of years.  Fundamental research takes place not over years but decades.  

 

Quantum computing is a good example.  People were excited about its commercial potential 20 years ago, and they’re still only somewhat closer.  This is a normal timeframe, which corporations cannot operate on.  Gates philanthropy is also directed at pet topics. The wealthy do the same.  The strength of US research has always been that it is researcher-driven, by the people who know best, not by moguls with special interests or axes to grind.  

 

There are some exceptions, especially in computer-science based tech where firms spend enormous amounts of money in very hot fields (AI is the one now), but only in expectation of near-term enormous (monopoly) profits and under conditions of panic competition.  This is the exception that proves the rule that basic research happens in universities, not companies.  

 

Some of the country’s most lucrative sectors, like pharmaceuticals, have depended on large government research subsidies for decades.  Trump’s model would ask them to take all those costs in house. This would drive up the price of medications to even higher levels, or cut company profits, and probably both at once.  

KP: Could state and local government play a role here? In what capacity? 

 

CN: Over time, states could take on research costs.  But the funding flows would have to change: a share of one's federal income tax would need to be redirected to the state in which one lives. At the moment, states' tax structures won't allow it, and states already fund many competing needs (K-12, public health provision, etc.).

 

As a Californian, I wouldn’t mind my state setting up an income tax impound account where I could redirect my federal tax to the state if they’d spend it on teaching, research, etc.—and let them fight it out with the feds in court.  This would be much harder for most states though, and increase inequalities across the country.  And it would happen only gradually, if ever.


KP: How might drastic cuts to research funding at universities reshape how research is done in the United States? Or is it too soon to tell?


CN: If the impounding and cuts survive the courts and Congress, then the US will fall behind Europe and East Asia fairly quickly, within around five years. Research moves very quickly, even outside of STEM.  The creation of future researchers will shrink and collapse in some fields in many universities as doctoral programs become unsustainable.  

 

Advanced skills will be harder for businesses to find.  Quite a bit of undergraduate instruction won't happen, or will be automated, or will be reallocated to academic researchers, which will reduce their research.  The knock-on effects are very serious, especially for large, public universities where doctoral students cover a substantial share of teaching. 

 

Since the system’s parts are all interconnected, the general intellectual level of US higher education will deteriorate on a number of fronts at the same time. 


KP: Any other thoughts about this current moment for academic research and scientific discovery? 


CN: The Trump Administration has created the worst crisis in higher education in 100 years, if not in US history. They’ve induced an artificial Great Depression in higher ed.  Every single one of their rationales is wrong. Every one of their actions will make things worse for students, for research, and for the general public. There is no silver lining of reform here, for conservatives or anyone else. 


The people that Trump's academic policy will disadvantage include residents who didn't go to college and who don't like universities.  These people also benefit directly from college-educated workers and from the cultural and scientific knowledge universities create, of which they will now have less. People will figure this out when it is too late, as did the British Brexit voter.  

 

To keep this from happening, universities need to form a national bloc and fight this tooth and nail together. Otherwise, it will take them a full decade to recover when Trump is finished, if ever.  

 

Posted by Chris Newfield | Comments: 0

Tuesday, February 25, 2025

Outside Lafayette, La. on October 27, 2018
By Leslie Bary, University of Louisiana at Lafayette


I just wrote a mini-grant for $858, to cover flight and hotel for a speaker. To justify the choice of speaker and the validity of the event, I composed a few hundred words to explain to an audience out of field and possibly outside of academia why one invites speakers from other institutions to share their expertise. My speaker is a full professor and department chair at a major research institution. They are a noted scholar in our field of Latin American Studies. Their appropriateness as a speaker is not in the slightest doubt.
 

In the past, the $858 would have come out of a departmental speaker budget. I would not have had to spend the afternoon explaining in words of one syllable why the event was being held and who the person was, nor creating documentation to prove I really had looked up and compared flight costs. But that was how I spent a lot of time today that would otherwise have been dedicated to research and teaching. 

I have been a professor for many years and before that, I was a graduate student with a teaching role. I have written many small internal grants. Initially, it was only one every couple of years, for special activities like summer research travel. Now almost every routine activity requires a mini-grant. The five-year vita I recently prepared listed ten in a category I now call “Selected Internal Funding.” A complete list would have crowded the document, since as departmental budgets shrink, funding requests for everyday operations are needed more and more often.

I have never been turned down for a funding request. Never. I suspect the reason is that the institution funds all legitimate proposals. I repeat, these grants are for amounts that in the past department chairs or deans would have controlled and would have simply authorized. They would do this not out of corruption or favoritism, but because they were familiar with the field and could exercise good judgment about it.

When I raise this issue, some faculty say they have given up writing mini-grants and only apply for major external grants. I am also a good writer of these, but major grants, at least in my field, do not fund everyday operations. And by major grants I mean grants from national research organizations like the NEH or ACLS. I do not mean fundraising. I also lobby civic organizations to support campus projects, but such fundraising covers different kinds of activities than do research grants from organizations like the Guggenheim Foundation. 

The mini-grants address needs not covered by other mechanisms. That is why I continue to apply. I do have some better paid and wealthier colleagues who dispense with the mini-grants and support university activities with personal funds, but they are few. Others take consulting gigs to substitute for the mini-grants, pointing out that if it takes five hours to write and then administer a mini-grant for $750, and they can raise $750 in three hours’ consulting, they’ll do the consulting. 

My research office suggests that applying for mini-grants helps us to reflect on and articulate our research programs to ourselves. The fact is it doesn't. Writing a book proposal or a major external grant can do that, just as updating and reformatting a vita can help rethink a career trajectory. But explaining basic things like why we go to conferences or, as I did for one mini-grant, why professors read books, does not help me clarify my ideas. At the outside, it might help explain what I am doing to an uninformed auditor. But that kind of explanation to such a person makes a negative contribution to my scholarly life.

The formulae for the mini-grants typically imitate those of major grants in the sciences, as does the idea that everything done should be grant funded. But in these fields, people spend as much as half of their work time applying for the funds they need to do their jobs. Rather than address that impractical situation, universities now replicate it at every level. The exercise seems particularly absurd when we are asked by our university to defend our job positions, or to explain that conducting research is part of our contract with them, and we are complying. 

But what is happening here? Every time there is a new, allegedly competitive, centralized internal funding opportunity, it is presented as new funding intended to help us, yet simultaneously, money disappears from regular departmental budgets and the regular library budget. A central committee reviews all the proposals, and individual units across campus lose autonomy. The university says this reduces “siloing.” In some cases it can be fairer since there are always people involved who do not know the applicants. But overall, it seems to be about a reduction in shared governance.

That is to say that every mini-grant application is a symptom of a department without a budget and, in the case of many of mine, a library without materials. When departments do not have budgets for research and libraries do not have them for materials, and faculty instead apply for funding to a mysterious committee in Academic Affairs, that committee has taken over functions that multiple department chairs, librarians, and others would have shared in the past. This is a concentration of power in a rather faceless group. Even if there were a Senate committee administering such things, the atmosphere would be less corporate. 

I note further that Human Resources nowadays is not a department of my university, but a service we have outsourced to a corporate “partner.” People who have increasing power over us are not colleagues or university employees. I wonder when the same will happen to the committee that judges the mini-grants.

What should be happening instead? Universities should restore department budgets for routine scholarly activities that are central to university education, central for undergraduate students as much as for everyone else. This would increase the use of decentralized academic expertise, lodged in departments, which would in turn increase the efficiency of the overall system. And it would reduce the excessive administrative labor of the many, many scholars in my position.


University of Louisiana, Lafayette on October 25, 2018


Posted by Chris Newfield | Comments: 0

Tuesday, December 6, 2022

The Strike continues with no end in sight.  Although there have been tentative agreements concerning Post-Docs and Academic Researchers, the parties appear to remain far apart on the fundamental economic issues in the Academic Student Employee (ASE) and Student Researcher units.  This distance is most easily seen in the ASE category: although the UAW made significant adjustments in its proposal, UC responded with little change.  You can see the latest UAW wage proposal here and the latest UC wage proposal here.  

It is impossible from the outside to tell where the negotiations are headed.  But what I want to try to do here is offer some suggestions for how we could think about the gap, how we got here, and what we might do in the future to alter the conditions that have created what is undoubtedly a crisis at the University, and a depressing foreshadowing of the end of UC as a serious research university.  If the latter does happen, the responsibility will ultimately lie with UCOP and the Regents, with some support from the campus Chancellors.

The first point is that there seems to be a fundamental gap in the way each side is defining these negotiations.  UC is approaching this as if it were a conventional labor negotiation with a class of workers whose position is fundamentally stable.  The UAW and its supporters, on the other hand, start from the position that they have been placed in an untenable economic position.  Given that TA wages have barely kept up with national inflation over the years, combined with the extreme cost of housing in California, they cannot continue with relatively minor adjustments in the dollar amount of their monthly pay.  To make matters worse, UC's latest offer has a first-year adjustment that is about equal to current inflation.  In this light, UCOP appears completely out of touch with the reality of life on campuses and indifferent to its lack of knowledge.

This image of autocratic disregard was only deepened by Provost Brown's appalling letter to the faculty last week.  Although much of it was standard UCOP pablum, he inspired widespread faculty hostility with his closing flourish threatening faculty members who refused to pick up the work of striking workers with discipline beyond the docking of pay.  For the last three years, faculty and lecturers have performed an enormous amount of additional labor to keep the university afloat during the pandemic: transforming their courses, spending additional time with students, planning for campus transformations, and putting their research duties to the side to maintain "instructional continuity," as the administration likes to put it.  After all this effort, for the Provost to threaten disciplinary action against those who choose not to pick up the work of striking TAs, or who act upon their own convictions about academic integrity, manifests a contempt for the faculty that is hard to ignore.

It's important to grasp UC's budgetary situation correctly.  Most importantly, the usual invocation of the university's $46 billion budget needs to be put aside.  Most of that budget is tied up in the medical centers or in funding for designated purposes.  The budget that is actually relevant is the core budget made up of tuition, state funding, and some UC funds.  It comes in closer to $10 billion (Display 1) and is largely tied up in salaries across the campuses.  As Chris and I have been pointing out for nearly 15 years, UC has been subject to core educational austerity surrounded by compartmentalized privatized wealth (although we should notice that the medical centers barely stay in the black).  This crisis will not be overcome by hidden caches of money floating around the university.  The problem is deeper than that: its roots lie in the combination of state underfunding and the expansion of expensive non-instructional (often non-academic research) activities that have taken up too much of campus payrolls.

But I want to stress that this reality does not mean that the graduate students are being unreasonable in seeking wages that enable them to perform their employment duties and pursue their studies.  Instead, it is a sign of how deep the failure of the University has been in (not) providing a sustainable funding model both for students and faculty supporting students.  The Academic Senate has been pointing to this problem for at least two decades.  In statements and reports from 2006, 2012, and 2020, the Senate has repeatedly insisted that graduate student support was insufficient and proposed steps to improve it.  Even the administration itself has sometimes recognized its depth.  To take only one example from 2019, UCOP's Academic Planning Council declared that:  

UC must do better at financially supporting its doctoral students, particularly as it seeks to diversify the graduate student body. The University cannot compete with its peers for talented candidates if it does not offer competitive support. In 2017 the gap in average net stipend between UC and its peers was nominally $680. In actuality the gap is much greater due to California’s high cost of living - with that factored in, the average gap in doctoral support is closer to $3,400. This is a huge difference but not insurmountable. The Workgroup urges UC leadership to make every effort to close the gap so that the quality of UC’s doctoral programs is maintained and enhanced.

UC campuses, with planning and prioritization, could guarantee five-year multi-year funding to doctoral students upon admission. According to current data, about 77 percent of doctoral students across UC receive stable or increasing net stipends for five consecutive years (Appendix 1). With some exceptions, this multi-year funding is relatively consistent across campuses and disciplines. However, this funding is typically not presented as a full five-year multi-year guaranteed package upon admission. Offering five-year funding upon admission would enhance recruitment of high-potential students, offer financial security, and address one of the chief stressors for doctoral students - worry over continued funding while in the program.

In addition to offering guaranteed five-year funding, the University must address the issue of graduate student housing. Graduate students, many of whom have family responsibilities, face enormous challenges in finding affordable housing. Without a targeted effort to address graduate student housing, UC’s capacity to attract and retain qualified candidates is at serious risk.  (4-5)

And yet the problem persists.  The Academic Senate has stressed this issue repeatedly and with great force.  A recent letter from the UCLA Divisional Senate's Executive Board has pointed its finger at the problem--the need for renewed state funding.   It is time for the administration to do something to fix it--and something that doesn't simply damage other parts of the academic endeavor.

UCOP will continue--as they always do--to insist that we cannot get more money out of the state to pay for what needs to be done.  But let's press on that point a little more.  It is certainly possible that we are heading for a recession--the Federal Reserve seems determined to induce one to put labor in its place.  But does that mean that the state doesn't have the capacity to respond to an emergency at the University?  Despite all the talk about a budget shortfall, Dan Mitchell at the UCLA Faculty Association Blog has been pointing out that the situation is far less clear than the Legislative Analyst is insisting (and the University is repeating).  For one thing, revenues have been higher than expected, and even with the possibility of a downturn the state has around $90 billion in usable reserves. If the state won't help, it's not because of economic necessity but because of political choice.  After all, the Governor had no problem finding $500 million to pay for a private immunology research park at UCLA that provides little, if any, real benefit to the campus academic program.  The Governor and the state can do more for the educational core of the University than they are doing: and if UCOP and the Regents can't show the state how necessary that is, then one wonders again what their purpose is.  

I want to make one final point.  UC is the research university of the state and UC insists that graduate education is at the heart of its purpose.  But if UCOP actually agrees with that then the question must be: what do we need to do to have academic graduate education in a sustainable form?  What resources do we need to enable students to both contribute to the larger functioning of the university and to pursue their studies?  Are we willing to have only graduate programs where students have family money or have already flipped a startup?  Or where they are here to gain an additional credential to take back to their jobs?  Does UCOP remain committed to UC's contributions to disciplines across the spectrum of knowledge?  Or does it only care about graduate students (and others) as cheap and disposable labor?  

I don't expect that these negotiations or this strike can answer or settle these questions.  But UC is at a crossroads and the university--especially its leadership--must face up to that.  The long-term question raised by the strike is whether UC will continue as a research university; if we don’t make it possible for future scholars to attend, we will have forfeited our purpose.  There is an opportunity here to take the first steps towards creating a new sustainable vision of a twenty-first century research university.  Or we can continue as we have in decline.  The choice ultimately is UCOP's and the Regents'.

****

(I've focused here on the ASE unit because the Student Researcher Unit is admittedly a more complicated problem.  The vast majority of GSRs are supported by external grants and those grants have both limits and their own rules.  To some extent UC has been negotiating with someone else's money.  That doesn't mean the situation is impossible but rather that it has to be implemented in such a fashion as to protect Principal Investigators from damaging unintended consequences.)

Posted by Michael Meranze | Comments: 0

Sunday, March 22, 2020
Shutdowns are now spreading as fast as the coronavirus. On March 19, Gov. Gavin Newsom ordered 40 million Californians to stay home, claiming that the infection rate puts the state on track for 25.5 million infections.  The order has no end date.  New York and other states and counties have since followed suit: by noon on March 21st, 75 million US residents were under some kind of lockdown.

In this post I'm going to talk about what I've learned during a sustained effort to apply analytical expertise to a topic outside of my normal subject areas, as I try to build a base for a series of citizen judgements about health policy, and also the related areas of educational and economic policy that I know more about.

This learning process has changed my mind about a number of Covid-related issues: for example, when I learned March 10th of UCSB's shutdown--at the end of my senior seminar, thanks to Jenna, multitasking on her email again!--I was a skeptic about the benefits of widespread closures. Now I'm a believer: I think that widespread social distancing is our only chance to avoid levels of infection that would overwhelm hospitals and clinics and lead to much excess death.  At the same time, I'm also more optimistic about reducing infections than I was a week ago.

The main part of this post close-reads the one published infection model that I've been able to find--Neil Ferguson et al.'s paper, from Imperial College London.  The U.S. Centers for Disease Control and Prevention (CDC) has not released its modeling, though it was discussed in a bootleg version by the New York Times.  My caveat up front is that the SARS-CoV-2 infection model I analyze does not offer any certainty about the future. But I will talk about the powers of the suppression regime we've entered into, and how the disease might be made less deadly than many of us now assume.

An overview:
  • The policy of virus suppression does appear to reduce Covid-19's spread. I'll define this and other terms below, since terminology is all over the place in media reports. (The one journalist I've found to have interviewed Neil Ferguson--Nicholas Kristof of the New York Times--conflates mitigation and suppression.)  Suppression has worked well in South Korea, Singapore, and post-lockdown Hubei in China when social distancing is combined with mass testing. 
  • The U.S. simply does not have the testing capability to do the most effective form of suppression.  (Santa Barbara County has brilliant and frequently exercised emergency services.  As of March 22nd it has 13 confirmed Covid-19 cases, a shortage of test kits, and 200 tests out whose results won't be in for a while.) The U.S. has not been able to do contact-tracing, which would have allowed a much more efficient form of isolation than the mass version we're doing now.  In spite of some encouraging reports of new equipment coming on line, the U.S. is in the midst of what epidemiologist John P. A. Ioannidis calls an "evidence fiasco," and its public health capacities have been downsized (personnel down 20 percent since 2008, according to David Himmelstein) to the point that we're likely stuck with the crudest, most disruptive, and most economically damaging form of suppression. 
  • This has implications for rebuilding social and public capabilities that I'll save for a later post on how SARS-CoV-2 is putting neoliberalism out of its misery--and how to keep that from causing further misery for diverse publics.
  • A theory point: public officials are using projections of high infection and death rates to install suppression regimes, but these suppression regimes are designed to invalidate the numbers that justify them (by producing much lower rates of infection and death).  Either you infect 81 percent of California by doing nothing, or you lock down California and get a much lower infection percentage.  You don't do both.  I elaborate on this point because it's important for people not to think lockdown = death (regardless), but to think the opposite.  
  • A policy point: public officials must not bullshit the public with exaggerated numbers, withheld models (CDC!), and mashup policies that will encourage cheating. Newsom did the right thing, but he didn't give clear, honest reasons for it.  That has to change.
To take the last point first: Where did Newsom get the number that he used to shut down most of the state economy without an end date? We don't actually know. The LA Times reports, "the governor’s office declined to provide an explanation of the state’s projection that 25.5 million Californians will be infected with this virus. Instead, a spokesman for the governor said the state’s mitigation efforts could lower that estimate."

The last part is true (though "mitigation" is the wrong word, as I'll explain), but the public should be told the source.  In the meantime, I'll guess that Newsom's people got that number from the now-famous pandemic modeling paper I mentioned at the top, Ferguson et al. Their baseline reproduction number (Ro) for the disease is 2.4--meaning each case typically goes on to infect 2.4 other people. You can get to 25.5 million Covid-19 infections by taking California's Covid infection count when Newsom spoke--around 1000--and giving it an exponent of 2.45.  (Updated: See Akos Rona-Tas's correction of this speculation below, under March 23.)

The Ferguson paper derived that Ro in part from the spread of the virus in Wuhan, China, before the government began its many non-pharmaceutical interventions (NPIs)--forced quarantining, widespread testing, etc.  (Wuhan's Ro was previously reported as 3.11).  The 56 percent infection figure in Newsom's letter to Trump requesting a hospital ship looks like a math error: 25.5 million infections is actually about 64 percent of the state's population, while 56 percent would be 39.56 million Californians * 0.56 = 22.15 million infections.  The point isn't the bad math but the need to offer credible numbers and explain clearly where they come from.  People will take honest, fully disclosed estimates more seriously.  Health policy needs to be open to establish the trust that government now desperately needs, to discourage cheating, and to allow meaningful democratic judgment about overall policy. 

Public officials, including Newsom, now seem focused on using big numbers to stampede the masses into social distancing, RTFN. This is understandable, since, in the suppression arsenal, social distancing is pretty much all we've got.  But one major effect of their statements is to muddle the difference between mitigating and suppressing a pandemic: the former allows infection rates like 55 percent; the latter slows growth rates and can put them into reverse.  Suppression also requires a rigor that people won't pursue if they don't understand the massive difference it can make.

To put this in the form of a question, could the U.S. and the European Union (and other regions) achieve suppression and thus decline in the number of new cases?  The current tracking in California is not good.



But look at  the South Korean case pattern.



South Korea had our hockey stick and has now bent it down into slower growth of new cases.  As is now widely discussed, South Korea, Singapore, Hong Kong, Taiwan, and now Wuhan have slowed the spread.  This is the effect of suppression strategies.  There's some important news here, which is that Covid-19 infection rates can be reduced, and its case-mortality rate can be kept low (not the 3.4 percent reported by the World Health Organization, but about 1 percent in South Korea, or 0.54 percent for cases under age 60).  Germany currently has a 0.3 percent case-mortality rate. SARS-CoV-2 kills people by doing horrible damage to their lungs (see the images around 0'30" in this Santa Barbara Cottage Health grand rounds lecture).  And yet the virus does so little to so many other victims that 86 percent of cases in China were undocumented prior to travel restrictions. 

On to the model: the Ferguson et al. paper draws on previous work with influenza pandemics to compare three responses--doing nothing, mitigation, and suppression.  Doing nothing seems to have been the preferred option of the Boris Johnson and Donald Trump governments until about March 15th-16th (Johnson, Trump), with the Johnson government allegedly trusting that infection would create "herd immunity" without disrupting the economy.  At least in the UK, they seem to have taken on board the Ferguson et al. calculations that "doing nothing" will lead to infection in 81 percent of the population (at 2.4 Ro), producing 510,000 deaths in the UK, plus 2.2 million deaths in the United States, both over a 2-year period.

With doing nothing now ruled out, the alternatives that Ferguson et al. modeled are mitigation or suppression. Suppression is China after January 23rd and South Korea, among others; Britain is moving to suppression with one escalating announcement after another (which may defeat the purpose).  Some parts of the U.S. are now doing suppression, including New York and California. The Ferguson paper divides these two strategies into two groups of non-pharmaceutical interventions (NPIs).

 The most effective set of mitigation measures is:
  • Case isolation in the home (CI): symptomatic cases stay at home for 7 days.
  • Voluntary home quarantine (HQ): all members of a household with a case stay home for 14 days
  • Social distancing of those over age 70 (SDO).
Note that this falls short of "lockdown," which includes social distancing for the whole population (SD) and, in most cases, closures of schools and universities.

Mitigation is the famous "flattening the curve." The serious cases that need hospital services are pushed out over time, with the goal of relieving some of the stress on the health care system. Mitigation is "predicted to reduce peak critical care demand by two-thirds and halve the number of deaths" (8).  Assuming the ratio of infections to critical care cases is constant, and that the syntax means mitigation yields 2/3rds of the "do nothing" infection rate, this leads to 54 percent of the population being infected, and to 1.1 million deaths in the U.S.  (When Kristof quotes Ferguson saying his best case is 1.1 million deaths, I think he ran Ferguson et al.'s two regimes together: in my view, the sentence should read, "his best case for mitigation is 1.1 million deaths.")

Clearly mitigation isn't good enough.  A million deaths in the U.S. is unacceptable, and the model suggests that under mitigation health care systems will still be overwhelmed (10). Since something like Italy's hospital crisis and high fatalities are the combination everyone wants to avoid, the UK, the EU, California, and now several other U.S. states have moved into suppression.

A side note: I would normally read the quotation to mean that mitigation reduces peak care demand (and infections) by 2/3rds, down to 1/3rd of their previous level, which is a 27 percent infection rate.  I don't know if that's what Ferguson et al. meant, but it's still more than double this year's seasonal flu rate (so far this season, flu has killed 22,000 Americans). 
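For anyone trying to keep the two readings straight, here is a minimal Python sketch of the arithmetic as this post presents it. The constants are the figures quoted above (the 81 percent "do nothing" infection rate and 2.2 million "do nothing" U.S. deaths); this is back-of-the-envelope scaling, not the Ferguson et al. model itself.

```python
# A back-of-the-envelope sketch of the mitigation arithmetic discussed above,
# using only figures quoted in this post (not the Ferguson et al. model itself).

DO_NOTHING_INFECTION_RATE = 0.81   # 81% infected at Ro = 2.4 with no intervention
DO_NOTHING_US_DEATHS = 2_200_000   # Ferguson et al. "do nothing" U.S. estimate

# Ferguson et al.: mitigation is "predicted to reduce peak critical care
# demand by two-thirds and halve the number of deaths."
mitigation_us_deaths = DO_NOTHING_US_DEATHS / 2   # 1.1 million

# Two possible readings of the two-thirds phrase, applied to the infection rate:
reading_1 = DO_NOTHING_INFECTION_RATE * (2 / 3)   # yields 2/3 of do-nothing: ~54%
reading_2 = DO_NOTHING_INFECTION_RATE * (1 / 3)   # reduced BY two-thirds: ~27%

print(f"Mitigation deaths (U.S.): ~{mitigation_us_deaths:,.0f}")
print(f"Infection rate, reading 1: {reading_1:.0%}")   # ~54%
print(f"Infection rate, reading 2: {reading_2:.0%}")   # ~27%
```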

Much of the U.S. is now following Italy, France, Spain, and other countries into suppression. The key benefit is that it reduces the reproduction number (Ro) to close to 1 or below, which China has shown is feasible.  Here's a nice stretch goal for the West.



 In the Ferguson et al. model, suppression adds to mitigation's measures:
  • school and university closures (PC)
  • social distancing expanded to the whole population (SD)
I've reproduced the table that shows the results. I'd recommend starting in column 1 with the baseline Ro of 2.4 (510,000 "do nothing" deaths) and looking at the medium case of 200 (which means that the full suppression program is suspended when ICU cases in Great Britain fall below 200 and re-engaged when they rise above that number). (The paper does not have a similar table for the U.S.) 
California is now doing the full suppression program.  If you look at the right-hand column under Total Deaths you can see the results.  Deaths in Great Britain drop from 510,000 to 24,000, or by a factor of around 20.  The U.S. equivalent would be 110,000 deaths, not Kristof's 1.1 million.
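For readers who want the scaling spelled out, here is a rough sketch under the assumption (mine, for illustration) that the U.S. "do nothing" figure shrinks by the same factor as the UK one; the numbers are the ones quoted above, not model output.

```python
# A rough check of the factor-of-20 claim, using only the numbers quoted above.

UK_DO_NOTHING_DEATHS = 510_000    # Ferguson et al., Ro = 2.4, no intervention
UK_SUPPRESSION_DEATHS = 24_000    # full suppression, 200-ICU-case trigger
US_DO_NOTHING_DEATHS = 2_200_000

reduction_factor = UK_DO_NOTHING_DEATHS / UK_SUPPRESSION_DEATHS   # ~21x
us_suppression_estimate = US_DO_NOTHING_DEATHS / reduction_factor

print(f"Reduction factor: ~{reduction_factor:.0f}x")
print(f"U.S. deaths under full suppression: ~{us_suppression_estimate:,.0f}")
# roughly 100,000 deaths, i.e. on the order of 110,000, not Kristof's 1.1 million
```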

Note two other features of this model.  The interventions all have finite periods: mitigation is modeled over 3 months (to mid-June 2020) and suppression over 5 months (to mid-August 2020).  They don't extend to the full 18 month "vaccine" period, nor are they open-ended.

Second, they are adjusted according to thresholds of infection and hospitalization that can be selected and monitored.  Governments have a great deal of agency here.  In other words, this new coronavirus is bad, but it is not an irresistible event like a giant asteroid hitting the earth.

A big catch is that the versions of suppression in South Korea, Taiwan, Hong Kong, Singapore, and China include mass testing.  Neither the US nor the UK has done this, nor do we seem to have the capability to ramp it up.  There's been much excoriating commentary on this point.  I had been hoping that UC Health could make a big difference to California public health. A potentially exciting March 14th headline, "UC has a solution to the national shortage of coronavirus testing," didn't, given our weak public sector, mean UC is gearing up mass testing for the public, but that it has a private test for its own patients.  I've heard ambitious UC plans--in this week's board meetings, one UC regent suggested the installation of MASH hospitals on empty land that UC owns. But because of testing and equipment shortages, UC medical centers have to focus on protecting themselves (see 0'44"-0'49" or so in this very useful UCSF infectious diseases division's grand rounds). I'll end by adding a few items to the summary list above:
  • The virus is going to be terrible for public health workers, who deserve not only massive sympathy and support but also personal protective equipment, which they may now have more hope of getting.  Mass testing also depends on cranking out PPE.
  • Public health interventions in Asia have had enough success with suppression to give  credibility to the Imperial College model--most interestingly, its suggestion that deaths can be reduced by an order of magnitude. 
  • On the other hand, hospital access remains a potential catastrophe.  Full suppression reduces ICU need to 1/3rd of "doing nothing."  In the bootlegged C.D.C.’s scenarios, "2.4 million to 21 million people in the United States could require hospitalization, potentially crushing the nation’s medical system, which has only about 925,000 staffed hospital beds. Fewer than a tenth of those are for people who are critically ill."
  • Still, suppression seems to make a big difference even if it is leaky: the Ferguson et al modeling assumed incomplete success and still got major reductions (see Table 2 on page 6).
  • The US has a weak health system (or no health "system" at all, as Robert Reich rightly observes). This is a big problem. But the US has some other advantages: a lot of really good, dedicated health personnel, lower population density than Europe's or East Asia's and, ironically, dependency on the self-isolating feature of private cars.  Our version of suppression might be more successful than we now expect.
  • Officials should give expiration dates to the current suppression regimes. They can be extended later, depending on conditions.  As I noted, the Ferguson et al. model assumes a kind of regular adjusting depending on infection numbers. (Hong Kong has reimposed quarantine and testing on arrivals after an uptick in cases.) Indefinite lockdowns are bad for both people and the economy.  Once people are scared indoors, and the infection curve is bent like South Korea's, governments should throw the lockdown into partial reverse, lest they create another Great Depression x 2.4.
I'll move on to political, economic, and university dimensions in other posts.  From the Haley Street Bunker: stay well, and keep your distance! 
Monday March 23rd

Statistical chemist Michael Levitt hammers on one of this post's key points: "The virus can grow exponentially only when it is undetected and no one is acting to control it."  The media, he says, should focus not on total number of cumulative cases but on rates of growth of new cases. 
Speaking of which,  South Korea's number dropped again, so the chart looks a bit better today.

The coming U.S. health crisis will owe much to a social system that can't anticipate non-market public needs.   That's not what this WaPo piece says in so many words, but it has all the raw material--shortages of masks, gowns, tests, ventilators.  What aren't we short of Covid-wise?

This piece, by a Mass General physician, specifies how the market power of large hospitals will mal-distribute emergency equipment: "We are currently taking an every-hospital-system-for-themselves approach, in which some hospitals will surely say “we’ll take them all” while others will lack the capital to make such large purchases in advance and therefore will be reliant on FEMA, which will be forced to ration scarce, lifesaving equipment. These already cash-strapped hospitals serving poorer populations will soon be put in even greater jeopardy."

From Akos Rona-Tas (Prof of Sociology, UC San Diego): How Newsom got his numbers (over half of Californians being infected) is a mystery, but it is surely not by raising 1000 to the power of 2.45. I am no epidemiologist either, but the Ro produces an estimate only if you specify how many generations of infections you count. So if the base (generation 0) is 1000 and Ro is 2.4 (used by Ferguson), the first generation will be 1000*2.4=2400, the second generation 2400*2.4=5760, and so on. The total number infected will be by then 1000+2400+5760=9160, adding up generations 0, 1, and 2. In the Ferguson paper they use a 6.5 day generation time. The key here is that Newsom made his prediction for 8 weeks out. So he is counting roughly 8 generations. The number of newly infected in the 8th generation will be 1000*2.4^8=1,100,753. You have to add to this those from the earlier generations. That will give you the total number of those infected (roughly, 1.9 million). Some of them will have recovered by then and be happily immune; others will have died. I don’t see how this adds up to 25.5 million, either as the number of all people who have ever been infected, let alone all people needing care at a certain date.  You would get to a cumulative 26 million in 11 generations, with 15 million new infections. That is 71.5 days, 10 weeks, still only late May.

You can make the model more complicated. Ferguson assumed a variable R in each generation, and R should also fall across generations as the number of people with immunity increases.
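Rona-Tas's back-of-the-envelope calculation is easy to reproduce. Below is a minimal Python sketch under his stated assumptions (1,000 seed cases, a constant Ro of 2.4, a 6.5-day generation time); the function name is mine, and, as he notes, a serious model would let R vary across generations.

```python
# A minimal sketch of Rona-Tas's generation-by-generation arithmetic, under his
# stated assumptions: 1,000 seed cases, a constant Ro of 2.4, and a 6.5-day
# generation time. The real Ferguson et al. model is far more elaborate.

SEED_CASES = 1_000
R0 = 2.4
GENERATION_DAYS = 6.5

def cumulative_infections(generations):
    """New cases in the last generation and total ever infected (generation 0 = seed)."""
    new_cases = [SEED_CASES * R0**g for g in range(generations + 1)]
    return new_cases[-1], sum(new_cases)

new_8, total_8 = cumulative_infections(8)     # ~8 weeks out, Newsom's horizon
new_11, total_11 = cumulative_infections(11)  # 11 * 6.5 = 71.5 days out

print(f"Gen 8:  {new_8:,.0f} new, {total_8:,.0f} cumulative")    # ~1.1M new, ~1.9M total
print(f"Gen 11: {new_11:,.0f} new, {total_11:,.0f} cumulative")  # ~15M new, ~26M total
```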

Here is a nice calculator that adds a few other considerations.

The really scary numbers come from the healthcare system. There are only 74,000 hospital beds in California, and 6,300 in SD county, only 32% of which are available. This is probably similar in the state overall. But what you really need is ICU beds (only 800 available in SD county). There are about 50,000 ICU beds in the entire US and about 100,000 respirators. And you also have to add to this that beds, even ICU beds, are useless unless you have trained personnel attending to them. So if we suppose only 2 million people being sick at the same time in CA, and only 10% (200,000) needing hospital beds and only 4% (80,000) needing ICU beds, we have a major catastrophe. 


Tuesday, March 24

On the duration of the shutdown, Jeffrey Sachs invokes the example of China. Their ironclad version of suppression, including mass testing, suggests the spread of SARS-CoV-2 can be stopped in 60 days.  Sachs says 60-90 days.

This is not what's happening in Italy, where exasperated mayors berate their citizenry.

Buzzfeed does funniest home videos for the Covid quarantine

As India's government orders a 3-week "total lockdown,"  nearly 60 percent of the U.S. population is not under stay-at-home orders or being mass-tested.  The U.S. is therefore not, overall, doing suppression, but mitigation of SARS-CoV-2.  Note that this predicts some "flattening of the curve" of infection--reducing but not eliminating the overload on health care-- but not reversing the spread of the disease (Ro stays above 1). Some red state politicians are actively resisting social distancing (Texas, Mississippi), as is POTUS himself.

Speaking of testing, California is way behind New York, working "piecemeal."
This piecemeal approach, said Harvard epidemiologist Michael Mina, is a key problem with testing in California and nationwide.
“We have a decentralized healthcare system and we have no way to scale for government means,” Mina said. “Everything is privatized, everything is individualized in our country and it’s become our Achilles’ heel in this case.”
   
Wednesday, March 25  It's Bailout Day!

NYT summary. Essential first take by David Dayen. Trigger warning: wow will this analysis not reassure you that any economic reforms are in the offing.

Yes we have no protection: "A very American story about capitalism consuming our national preparedness and resiliency"  Painful contrast between the American scramble for the most basic equipment and Germany's highly successful health system for radically minimizing fatalities.

Half-assed LAT reporting on the coming fiscal crisis of the state of California.  No real info, and other annoying stuff. How do you find the school lobbyist who will say this will be really bad for the schools, and then add, "under current law, it is likely that schools could withstand a total statewide revenue loss of around $5 billion. But more than that and schools will face significant problems."  So your own lobbyist just told the state that a 7 percent cut is fine. 

Where's higher ed in the stimulus bill? Inside Higher Ed's summary:
Six-Month Loan Deferment in Senate Bill
March 25, Noon. Student loan borrowers would be allowed to defer making payments for six months, without interest, through Sept. 30, according to a summary of the $2 trillion stimulus package Senate leaders agreed to at 1 a.m. Wednesday morning. The full bill is still being written and hasn’t yet been released.
But according to summaries of the bill making the rounds among education advocacy groups and obtained by Inside Higher Ed, the measure will also include changes sought by advocates such as not requiring Pell Grant students to repay money to the federal government if their terms are disrupted by the coronavirus emergency.
However, the bill is expected to disappoint advocates who had embraced Democratic proposals in the House and Senate, in which the federal government would have made the payments on behalf of borrowers, reducing their balances by at least $10,000. The summary did not mention any loan cancellation.
A separate summary contains $30.75 billion in grants to “provide emergency support to local school systems and higher education institutions to continue to provide educational services to their students and support.” That amount appears to be about $29 billion less than what higher education institutions could potentially get in the bill proposed by House Democrats, but $21 billion more than what Senate Republicans had initially proposed, one higher education lobbyist said.  Associations representing institutions that were disappointed with the previous proposals were still waiting for the full bill before they commented on the level of funding.
The bill requires the secretary to defer student loan payments, principal, and interest for six months, through Sept. 30, 2020.

Thursday, March 26

Covid revealing America's rear guard place in the world 

Zero Hedge's mashup of hostility to the shutdown, mixing vulnerability of SARS-CoV-2 to treatment (it isn’t a superbug) with statistical problems (extensive) with lockdown’s effect on the economy (bad but unavoidable). Playing rural roulette because lockdowns are Democrat.

Suppression works, says none other than Neil Ferguson!!
He said that expected increases in National Health Service capacity and ongoing restrictions to people’s movements make him “reasonably confident” the health service can cope when the predicted peak of the epidemic arrives in two or three weeks. UK deaths from the disease are now unlikely to exceed 20,000, he said, and could be much lower.
But don't go back outside! Because, on the other hand,
This measure of how many other people a carrier usually infects is now believed to be just over three, he said, up from 2.5. “That adds more evidence to support the more intensive social distancing measures,” he said.
Special bonus for modeling fans: Oxford now has a model too. More on this coming soon.

Hope for a UK Covid-19 home test within two weeks.

And We're Number 1 - in Covid-19 cases.
Posted by Chris Newfield | Comments: 4

Friday, January 10, 2020
Earlier this week, the AAUP issued a new statement entitled In Defense of Knowledge and Higher Education.  In it, the AAUP offers both a defense of the importance of knowledge as opposed to opinion and a critique of the growing efforts to undermine the authority of scholars and expertise.  It helps clarify the relationship between Academic Freedom and Free Speech and marks the importance of defending the ongoing collective work of scholarly and academic communities.  As it concludes:

In 1915 the founders of the AAUP characterized the university as “an inviolable refuge” from the “tyranny of public opinion,” as “an intellectual experiment station, where new ideas may germinate,” but also as “the conservator of all genuine elements of value in the past thought and life of mankind which are not in the fashion of the moment.” On that basis they asserted “not the absolute freedom of utterance of the individual scholar, but the absolute freedom of thought, of inquiry, of discussion and of teaching, of the academic profession.” They pledged, as do we, to safeguard freedom of inquiry and of teaching against both covert and overt attacks and to guarantee the long-established practices and principles that define the production of knowledge.
It is up to those who value knowledge to take a stand in the face of those who would assault it, to convey to a broad public the dangers that await us—as individuals and as a society—should that pledge be abandoned.
I urge everyone to read and share it.


Posted by Michael Meranze | Comments: 1

Saturday, September 7, 2019
The rule of infrastructure is that no one thinks about it until it breaks.  This week, I was at the annual conference of the International Society for Scientometrics and Informetrics when I bumped into an example of how the massive flow of bibliometric data can suddenly erupt into the middle of a university's life.

Washington University in St. Louis (WashU) has a new chancellor.  He hired a consulting firm to interview and survey the university community about its hopes, priorities, views of WashU's culture, and desired cuts.  The firm found a combination of hope and "restlessness," which the report summarized like this: members of the WashU community
want to see a unifying theme and shared priorities among the various units of the university. Naturally, stakeholders want to see WashU rise in global rankings and increase its prestige, but they want to see the institution accomplish this in a way that is distinctively WashU. They are passionate about WashU claiming its own niche. Striving to be “the Ivy of the Midwest” is not inspiring and lacks creativity. Students feel the university is hesitant to do things until peer institutions make similar moves.
"As always, the university needs to become more unique and yet more unified, and to stop following others while better following the rankings.

The report might have gone on to overcome these tensions by organizing its data into a set of proposals for novel research and teaching areas.  Maybe someone in the medical school suggested a cross-departmental initiative on racism as a socially-transmitted disease. Maybe the chairs of the Departments of Economics and of Women, Gender, and Sexuality Studies mentioned teaming up to reinvent public housing with the goal of freeing quality of life from asset ownership.  These kinds of ideas regularly occur on university campuses, but are rarely funded.

New proposals are not what the report has to offer. It generates lists of the broad subject areas that every university is also pursuing (pp. 4-5). It embeds them in this finding:
The other bold call to action that emerged from a subset of interviewees is internally focused. This subset tended to include new faculty and staff . . . and Medical School faculty and senior staff (who perceive their [medical] campus enforces notably higher productivity standards). These interviewees are alarmed at what they perceive as the pervading culture among faculty on the Danforth Campus [the main university]. They hope the new administration has the courage to tackle faculty productivity and accountability. They are frustrated by perceived deficiencies in research productivity, scholarship expectations and teaching quality. A frequently cited statistic was the sub-100 ranking of WashU research funding if the Medical School is excluded. Those frustrated with the Danforth faculty feel department chairs don’t hold their faculty accountable. There is too much “complacency” and acceptance of “mediocrity.” “There is not a culture of excellence.” . . . Interviewees recognize that rooting out this issue will be controversial and fraught with risk. However, they believe it stands as the primary obstacle to elevating the Danforth Campus –and the university as a whole –to elite status. 
Abstracting the key elements gets this story: One group has a pre-existing negative belief about another group.  They think the other group is inferior to them. They also believe that they are damaged by the other's inferiority.  They offer a single piece of evidence to justify this sense of superiority. They also say the other group's leaders are solely responsible for the problem.  They have a theory of why: chairs apply insufficient discipline. They attribute the group's alleged inferiority to every member of that group. 

Stripped down like this, this part of the report is boilerplate bigotry.  Every intergroup hostility offers some self-evident "proof" of its validity.  In academia's current metrics culture, the numerical quality of an indicator supposedly cleanses it of prejudice.  Lower research expenditures are just a fact, like the numbers of wins and losses that create innocent rankings like baseball standings.  So, in our culture, the med school can look down on the Danforth Campus with impunity because it has an apparently objective number--relative quantities of research funding.

In reality, this is a junk metric.  I'll count some of the ways: 
  • the belief precedes the indicator, which is cherry-picked from what would be a massive set of possible indicators that inevitably tells a complicated story.  (A better consultant would have conducted actual institutional research, and would never have let surveyed opinions float free of a meaningful empirical base.) 
  • the indicator is bound to Theory X, the a priori view that even advanced professionals “need to be coerced, controlled, directed, and threatened with punishment to get them to put forward adequate effort" (we've discussed Theory X vs. Theory Y here and here). 
  • quantity is equated with quality. This doesn't work--unless there's a sophisticated hermeneutical project that goes with it.  It doesn't work with citation counts (which assume the best article is the one with the most citations from the most cited journals), and their use has been widely critiqued in the scientometric literature (just one example). Quantity-is-quality really doesn't work with money, when you equate the best fields with the most expensive ones. 
  • the metric is caused by a feature of the environment rather than solely by the source under study. The life sciences get about 57 percent of all federal research funding, and the lion's share of that runs through NIH rather than NSF, meaning through health sciences more than academic biology. Thus every university with a medical school gets the bulk of its R&D funding through that medical school; note medical campuses dominating R&D expenditure rankings, and see STEM powerhouse UC Berkeley place behind the University of Texas's cancer center. (Hangdog WashU is basically tied with Berkeley.)
  • the report arbitrarily assumes only one of multiple interpretations of the metric. An alternative interpretation here is (1) the data were not disaggregated to compare similar departments only, rather than comparing the apple of a medical school to the orange of a general campus (with departments of music, art history, political science, etc.)  Another is (2) the funding gap reflects the broader mission of arts and sciences departments, in which faculty are paid to spend most of their time on teaching, advising, and mentoring.  Another is (3) the funding gap reflects the absurd underfunding of most non-medical research, from environmental science to sociology.  That's just three of many.
  • the metric divides people or groups by replacing relationships with a hierarchy. 
  • This last one is a subtle but pervasive effect that we don't understand very well.  Rankings make the majority of a group feel bad that they are not at the top. How much damage does this do to research, if we reject Theory X and see research as a cooperative endeavor that depends on circuits of intelligence?  Professions depend on a sense of complementarity among different types of people and expertise--she's really good at running the regressions, he's really good at specifying the appropriate theory, and so on.  The process of "ordinalizing" difference, as the sociologist Marion Fourcade puts it, discredits or demotes one of the parties and can thus spoil professional interaction.  Difference becomes inferiority.  In other words, when used like this, metrics weaken professional ties in an attempt to manage efficiency.
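
To see how much hangs on interpretation (1), here is a minimal sketch with entirely invented numbers (not WashU's actual figures): a deliberately crude per-faculty, field-adjusted comparison in which the ranking produced by raw expenditure totals reverses. The unit names, headcounts, and the "funding_rich_share" adjustment are all hypothetical; the point is only that the "objective" number changes meaning the moment you change the denominator.

```python
# A toy comparison with invented numbers (not WashU data). "funding_rich_share"
# stands in, very crudely, for the fraction of a unit's fields that federal
# agencies (above all NIH) fund at high levels.
units = {
    "Medical School": {"expenditures_m": 600, "faculty": 1500, "funding_rich_share": 0.95},
    "General Campus": {"expenditures_m": 250, "faculty": 1400, "funding_rich_share": 0.35},
}

print("The report's metric -- total research expenditures ($M):")
for name, u in sorted(units.items(), key=lambda kv: -kv[1]["expenditures_m"]):
    print(f"  {name}: {u['expenditures_m']}")

print("Per-faculty expenditures, adjusted for access to funding-rich fields ($M):")
for name, u in units.items():
    adjusted = u["expenditures_m"] / u["faculty"] / u["funding_rich_share"]
    print(f"  {name}: {adjusted:.3f}")

# Raw totals put the Medical School far ahead (600 vs. 250); the adjusted figure
# puts the General Campus ahead (about 0.510 vs. 0.421). Same data, opposite
# ranking -- the metric's meaning depends entirely on the denominator.
```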

So if Washington University takes these med school claims literally as fact, rather than scrambling to address them as expressions of a cultural divide that needs repair, the faulty metric has just killed its planning process.

Let's take a step back from WashU.  The passage I've cited does in fact violate core principles of professional bibliometricians, who reject these kinds of "simple, objective" numbers and their use as a case-closed argument.  Recent statements of principle all demand that numbers be used only in the context of qualitative professional judgment: see DORA, the Metric Tide, the Leiden Manifesto, and the draft of the new Hong Kong manifesto. It's also wrong to assume that STEM professional organizations are all on board with quantitative research performance management. Referring to the basic rationale for bibliometrics--"that citation statistics are inherently more accurate because they substitute simple numbers for complex judgements"--it was the International Mathematical Union that in 2008 called this view "unfounded" in the course of a sweeping critique of the statistical methods behind the Journal Impact Factor, the h-index, and other popular performance indicators. These and others have been widely debated and at least partially discredited, as summarized in a graphic in the Leiden Manifesto.
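
To make concrete what these "simple numbers" leave out, here is a minimal sketch of the h-index calculation using invented citation records of my own (not data from the IMU report or from any real researcher). Two quite different publication profiles collapse into the same single value:

```python
def h_index(citations):
    """The h-index: the largest h such that h of the author's papers
    have at least h citations each (Hirsch, 2005)."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Invented citation records for two hypothetical researchers.
steady_contributor = [9, 8, 8, 7, 6, 6, 5, 5]      # consistent, field-shaping work
one_hit_wonder     = [400, 7, 6, 6, 6, 6, 1, 0]    # one blockbuster, little else

print(h_index(steady_contributor))  # 6
print(h_index(one_hit_wonder))      # 6 -- the single number erases the difference
```

The arithmetic is not the problem; the problem is that the number, once detached from the records behind it, invites exactly the case-closed use the mathematicians warned against.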

The Leiden and Hong Kong statements demand that those evaluated be able to "verify data and analysis."  This means that use, methods, goals, and results should be reviewable, and also rejectable where flaws are found.  All bibliometricians insist that metrics not be taken from one discipline and applied to another, since meaningful patterns vary from field to field.  Most agree that arts and humanities fields are disserved by them. In the U.S., new expectations for open data and strictly contextualized use were created by the Rutgers University faculty review of the then-secret use of Academic Analytics.
The best practitioners know that the problems with metrics are deep. In a Nature article last May, Paul Wouters, one of the authors of the Leiden Manifesto, wrote with colleagues:
    Indicators, once adopted for any type of evaluation, have a tendency to warp practice. Destructive ‘thinking with indicators’ (that is, choosing research questions that are likely to generate favourable metrics, rather than selecting topics for interest and importance) is becoming a driving force of research activities themselves. It discourages work that will not count towards that indicator. Incentives to optimize a single indicator can distort how research is planned, executed and communicated.
In short, indicators founder on Goodhart's Law (308), which I paraphrase as, "a measure used as a target is no longer a good measure."  Thus the Leiden Manifesto affirms the (genuinely interesting and valuable) information contained in numerical indicators while insisting that it be subordinated to collective practices of professional judgment.
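
Goodhart's Law is abstract enough that a toy simulation may help. The setup below is entirely my own invention, not anything from the Leiden authors: researchers choose among candidate projects whose intrinsic importance is only loosely correlated with their expected citations. Once citations become the evaluation target, the measured indicator rises while the quality it was meant to track falls.

```python
import random

random.seed(0)

def draw_projects(n=5):
    """Each candidate project has an intrinsic importance score and an
    expected citation count only loosely correlated with it (toy model)."""
    projects = []
    for _ in range(n):
        importance = random.gauss(0, 1)
        citations = 0.3 * importance + random.gauss(0, 1)  # weak correlation
        projects.append((importance, citations))
    return projects

def average_outcomes(researchers=10_000, target_citations=False):
    """Average importance and citations of chosen projects across many researchers."""
    total_importance = total_citations = 0.0
    for _ in range(researchers):
        options = draw_projects()
        # If citations are the target, pick the project expected to score best on
        # the indicator; otherwise pick the project judged most important.
        pick = max(options, key=(lambda p: p[1]) if target_citations else (lambda p: p[0]))
        total_importance += pick[0]
        total_citations += pick[1]
    return total_importance / researchers, total_citations / researchers

print("judgment-led: importance=%.2f, citations=%.2f" % average_outcomes(target_citations=False))
print("metric-led:   importance=%.2f, citations=%.2f" % average_outcomes(target_citations=True))
# The citation indicator rises once it becomes the target, while average
# importance -- the thing it was supposed to track -- falls.
```

That is Goodhart's point in miniature: targeting the measure is what breaks it.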
     
Given widespread reform efforts, including his own, why did Wouters lead-author a call in Nature in May to fix bad journal metrics with still more metrics, this time measuring at least five sub-components of every article?  Why does Michael Power's dark 1990s prediction in The Audit Society still hold: failed audit creates more audit?  Why are comments like those in the WashU report so common, and so powerful in academic policy? Why is there a large academic market for services like Academic Analytics, which sells ranking dashboards to administrators precisely so they can skip the contextual detail that would make the rankings valid? Why is the WashU use of one junk number so typical, so normalized, and so silencing, even though it is invalid? And what do we do, given that criticizing one misuse at a time can't keep up, particularly when there is so much interest in using such numbers to discredit an opposition?

One clue emerged in a book I reviewed last year, Jerry Z. Muller's The Tyranny of Metrics. Muller is a historian, and an outsider to the evaluation and assessment practices he reviewed.  He decided to look at how indicators are used across a range of sectors--medicine, K-12 education, the corporation, the military, etc.--and to ask whether there is evidence that metrics cause improvements in quality. Muller generates a list of 11 problems with metrics that most practitioners would agree with.  Most importantly, these problems emerged when metrics were used for audit and accountability; they were less of a problem when metrics were used by professionals within their own communities.  Here are a couple of paragraphs from that review:
    Muller’s only causal success story, in which metrics directly improve outcomes, is the Geisinger Health System, which uses metrics internally for improvement. There ‘the metrics of performance are neither imposed nor evaluated from above by administrators devoid of firsthand knowledge. They are based on collaboration and peer review’. He quotes the CEO at the time claiming, ‘Our new care pathways were effective because they were led by physicians, enabled by real‐time data‐based feedback, and primary focused on improving the quality of patient care’ (111). At Geisinger, physicians ‘who actually work in the service lines themselves chose which care processes to change’.
    If we extrapolate from this example, it appears that metrics causally improve performance only when they are (1) routed through professional (not managerial) expertise, as (2) applied by people directly involved in delivering the service, who are (3) guided by nonpecuniary motivation (to improve patient benefits rather than receive a salary bonus) and (4) possessed of enough autonomy to steer treatment with professional judgment.
I'd be interested to know how the bibliometrics community would feel about limiting metrics to internal performance information under these four conditions.  Such a limit would certainly have helped in the WashU case: the metric of research expenditures would be discussed only within a community of common practice, not applied by one group (the med school) to another (the Danforth Campus) as a demand for accountability.

Another historian, John Carson, gave a keynote address at the ISSI conference that discussed the historical relation between quantification and scientific racism, calling for "epistemic modesty" in our application of these techniques.  I agree.  Though I can't discuss it here, I also hope we can confront our atavistic association of quality with ranking, and of brilliance with a small elite.  The scale of the problems we face demands it.

In the meantime, don't let someone use a metric you know is junk until it isn't.
Posted by Chris Newfield