Comments on Remaking the University II: Knowledge Rebellion: When the Metrical Tail Wags the University Dog

Chris Newfield, September 23, 2019:

This is a great comment, Eric. And I completely agree we can and should use measurement to figure out how things work as part of complex interpretative strategies. The Wash U claim is unusually bad, and ironically all the more powerful for its badness, which is the problem our research has been focusing on quite a bit. Many thanks. @Eric Archambault

Eric Archambault (https://www.science-metrix.com), September 19, 2019:

Chris, this is really an important reflection. I'm happy you outlined the importance of NIH funding in that piece. This is quite a stable pattern across countries: clinical medicine and biomedical research get the lion's share of funding, yet represent only a small portion of the researchers and faculty in most countries, meaning these researchers are extraordinarily well funded compared to others in the research system.

As you note, health-domain researchers often complain about the "productivity" of others, when in fact they are referring only to research outputs (bear with me here). This is problematic because there is mounting evidence of decreasing returns to scale in research funding: the return per research dollar falls as project size or per-researcher funding goes up, including in publications in scholarly journals. So the problem here is that medical faculty do not really measure "productivity", the quantity of outputs per quantity of inputs; they measure "production", that is, outputs.

This conflation of productivity and production has adverse consequences. It allows those who receive the most money to generate the largest output and to brag about it, while there is little or no discussion of how effectively this money is used to create new knowledge, educate students, and provide services to society compared with alternative research funding models. My colleague Vincent Larivière and I once did a study on productivity in Quebec, where we have excellent funding data, and when we weighted books at an equivalence factor of 4 or 5 articles, we ended up with productivity measures largely in favour of the humanities. This is not surprising, as the humanities have a tradition of working on a shoestring budget. I discussed this sadly unpublished finding with Gunnar Sivertsen, a researcher at the Nordic Institute for Studies in Innovation, Research and Education, and he confirmed that he had found the same results. Don't get me wrong: I'm surely not advocating distributing research funding based on productivity. I'm merely pointing out that with better metrics we obtain better explanations of how the research system works, a system that is complex and little understood.

I'm not certain that indicators and metrics are blatantly misused most of the time.
I feel rankings get the lion's share of visibility, but many aspects of the research system are measured simply to understand how things work, and we never hear about this; many use cases simply never receive media attention. That is not to say we shouldn't be shocked by "junk metrics". If we are to have an enlightened discussion and make better decisions, we need scholars and research managers to voice their concerns about metrics abuse, but also to outline cases of best practice and adequate use of both qualitative and quantitative analyses of research, research management, and research policy.

Chris Newfield, September 13, 2019:

Yes, that's a really good chapter in a very accessible and engaging book. Thanks for the reference. @MP

MP, September 7, 2019:

Excellent piece, Chris. I'm not sure if you're familiar with Cathy O'Neil's chapter in _Weapons of Math Destruction_ (2016), in which she tells the story of how _US News and World Report_'s decision in 1983 to start ranking 1,800 colleges and universities had tremendous (and baneful) ripple effects. I don't actually believe that we would have no metrics today were it not for that fateful decision by a news magazine keen for a new way of attracting readers. But the story is well told and provides a useful back history for what you're describing in today's hyper-metricized academy.