Monday, August 12, 2019

The Automated University

by Jonathan Rees, Professor of History, Colorado State University, Pueblo

A few weeks ago, I heard from one of my former students who was upset that I had begun teaching online.  She’s a traditionalist who didn’t appreciate it when I very politely suggested that she needed to get with the times.  “A robot will be teaching your classes in 10 years,” she told me.  Her underlying message was that online teaching has to be robotic and therefore automatically inferior to the face-to-face variety.  My immediate reaction was to wonder whether replacing me with a robot was even possible.

You’d have to be living under a rock to be unacquainted with the idea that automation has become a job-killing machine, and that the situation will only get worse as those killer robots get smarter.  Rather than recap all that literature here, I will simply point you to a good argument that Brian Merchant makes in Gizmodo. “A robot is not ‘coming for’, or ‘stealing’ or ‘killing’ or ‘threatening’ to take away your job,” he argues.  “Management is.”

That’s certainly true for any factory setting.  There is no economic requirement that every turn of a screw that can be automated must be automated.  However, the potential cost savings of a robot arm doing that job are so great that countless factory owners have embraced automation.  As a scholar of industrialization, I’m very familiar with the ways in which managers once broke jobs down into their component parts.  This practice is widely associated with the turn-of-the-twentieth-century management consultant Frederick Taylor.  Once this division of labor is employed, it becomes possible to replace skilled workers with less-skilled, lower-paid workers or, these days, robots.

“We ought to resist the Taylorization of academia,” writes the popular and prolific academic Tweeter Raul Pacheco-Vega.  “The more time I spend actually concentrating in my work, just reflecting, reading, writing, analyzing data, I realize that we need time, we need space, we need the right conditions to undertake scholarly pursuits. In fact, I’m not convinced that some of the many tasks that professors perform on a daily basis can be automated.”  Of course, heartfelt pleas like this won’t stop academic managers who prioritize efficiency over educational quality from trying to implement their vision.  But even if managers want to bring automation to college teaching, whether that goal is possible deserves close consideration.

While it’s tempting for faculty to see the struggle between quality and efficiency as a clear-cut example of good vs. evil, higher education has already benefited from a little automation when it is properly employed.  For example, there’s an automated program on my campus that tells me, or the students themselves, which graduation requirements the student sitting in my office still needs to complete.  Looking through all those requirements was once the most time-consuming part of the advising process, and students often made mistakes when they tried to do it themselves.  This tool has immeasurably helped everyone involved, but the real problem with automation in a university setting is deciding exactly which parts of the higher education experience are improved by automation and which ones are unacceptably degraded.
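The degree audit falls squarely on the “improved” side because, under the hood, it is not much more than comparing a list of requirements against a list of completed courses.  Here is a minimal sketch of the idea in Python; the course codes and requirement names are invented, and the real program on my campus obviously handles far more (electives, credit hours, substitutions):

    # A toy degree audit. Course codes and requirement names are invented.
    REQUIREMENTS = {
        "HIST 101": "U.S. History Survey",
        "HIST 202": "Historical Methods",
        "ENG 101": "Composition",
        "MATH 120": "College Algebra",
    }

    def remaining_requirements(completed):
        """Return the requirements a student still needs to finish."""
        return {code: name for code, name in REQUIREMENTS.items()
                if code not in completed}

    # A student who has completed two of the four requirements:
    print(remaining_requirements({"HIST 101", "ENG 101"}))
    # {'HIST 202': 'Historical Methods', 'MATH 120': 'College Algebra'}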

In his book Coders, Clive Thompson argues that the main inspiration for much of the technological innovation of recent years comes from computer programmers aiming to eliminate repetitive tasks.  Don’t want to send a hundred thank-you notes or go shopping for groceries?  Automate the process.  If the computer can’t do what you don’t want to do by itself, it’ll find somebody somewhere who is willing to do it for you.  What has changed in recent years is that Artificial Intelligence (AI) has become good enough that computers can now eliminate repetitive tasks that are actually rather complex.  Algorithms might not do the job quite as well as their human counterparts, but the people doing the automating may very well not care.  Still, here’s the catch: faculty do so many different kinds of things that we would have to be replaced by at least several different machines of widely varying effectiveness, and possibly a whole army of them.
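Before getting to that catch, it’s worth being concrete about just how little effort the easy half of that trade takes.  Here is a toy sketch of the thank-you-note case; the names and gifts are invented:

    # One template, a list of recipients, and a loop.
    TEMPLATE = "Dear {name},\n\nThank you so much for the {gift}. It was very kind of you.\n"

    recipients = [
        ("Alice", "book"),
        ("Bob", "coffee mug"),
        ("Carmen", "gift card"),
    ]

    for name, gift in recipients:
        note = TEMPLATE.format(name=name, gift=gift)
        print(note)  # in real life, this is where you would email or print each note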

The Academic Division of Labor

When I say the word robot, what do you picture?  C-3PO?  Twiki from Buck Rogers in the 25th Century?  The Cybermen from Doctor Who?  To borrow a distinction I picked up from a robotics engineer named Tim Enwall, these are multi-task robots that “can understand all languages, process any question, identify and manipulate any object, cover any terrain, etc.”  In fact, “No company in the world can come anywhere close to meeting these expectations right now nor any time soon.”

Robots today are mostly single-task creations, designed to perform one function, like turning a screw or welding two pieces of metal together.  They may be guided by computers, but even the most powerful computers cannot master all the functions that a skilled human worker can perform easily.  This is especially true of skilled knowledge workers.  One of the problems with the automation debate is that many of the people who are trying to engage in it from the pro-worker side tend to conflate automation, artificial intelligence, and actual robots.

Teaching is just one of the functions that modern professors perform.  In my case, my contract requires me to teach, conduct research, and perform service.  Each of these tasks can be broken down into a series of sub-tasks.  For example, lecturing, despite the ideas of the people behind Massive Open Online Courses (or MOOCs, as we used to say back in 2013), is only part of teaching, and hardly the most difficult part at that.  The hard part is helping students process what they’re learning so that they can master its intricacies.  AI can ask follow-up questions about whether you really know the Spanish word for “horse.”  But which machine will evaluate your student’s interpretation of Cervantes’ message at the center of Don Quixote?
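The vocabulary case really is that easy to automate.  A few lines of code with a made-up word list can handle the follow-up question; nothing about them comes anywhere near evaluating an essay:

    # A flashcard-style vocabulary check. The word list is a stand-in.
    VOCAB = {"horse": "caballo", "book": "libro", "war": "guerra"}

    def check(english_word, student_answer):
        """Return True if the answer matches the expected Spanish word."""
        return student_answer.strip().lower() == VOCAB[english_word]

    print(check("horse", "caballo"))   # True
    print(check("horse", "cabello"))   # False ("cabello" means hair)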

In my discipline, I have evolved into something of a heretic.  I no longer see the point of lecturing at all when the vast majority of the information I convey could simply be Googled up on demand whenever a student needed it.  Granted, the average Wikipedia page wouldn’t be as good as my lecture, but it would be good enough for most students’ purposes.  The same thing would be true of whatever automated MOOC lecture a student might watch.

Rather than lecture every day, I now try to teach history as a process: namely, the research process.  Even in my online survey class, I guide students through source acquisition, evaluation, and the writing process so that they can appreciate where history originates rather than simply memorize facts that might help them win on Jeopardy someday.  In all of my classes, a lot of that effort involves digital tools that help students contribute to the vast pool of historical knowledge, rather than pretending that that pool does not exist so that I can keep running my class the way history professors did during the late twentieth century.

In this day and age, every class on a university campus has to be about the Internet to one degree or another because the Internet permeates so many aspects of modern life.  To ignore that in your classroom is clear evidence that you are not equipping your students with all the knowledge they need to thrive after graduation.

In terms of faculty research, I know that computers can write something that passes for a symphony now, but they still can’t visit archives and go through boxes.  Scan every document in every archive in the world and you’ll still need humans to go through all those documents in order to craft some of them into a compelling narrative.  Of course, engineering and science research will never be automated because that’s where the money is.  Automation in this instance would be just another excuse to reinforce the “humanities in crisis” narrative that predates the Internet, let alone the possibility of faculty robots.

Win/Lose

Automating service is a goal that both administrators and faculty could conceivably get behind.  After all, who likes going to meetings?  Let the robots decide the best way to keep the university’s lights on and let me get back to my research and teaching.  On the other hand, committee meetings are the place where faculty most often exercise their role in shared governance, perhaps the most important thing about colleges and universities that separates them from other places of employment.

In academia, despite a long tradition of faculty autonomy, the barrier between the professional and the personal has become increasingly hazy because of technology.  There are now countless examples of faculty who have tweeted something in their capacity as citizens and gotten into trouble, to one degree or another, in their professional capacities.  This threat to academic freedom has come about because of technology, and that same technology offers our employers an added incentive to replace us.  Service is one thing that on-campus professors do that separates them from remote faculty.  Automate that process and the migration to offsite labor will accelerate.

The situation is different for faculty who choose to perform any aspect of their duties through technologies that their employers don’t control.  I send most of my professional emails through Gmail rather than my university account.  Gmail is far more useful to me than the account my university provides on the basis of storage space alone.  As a program, I think it’s also organized more logically than the Microsoft product that I’m supposed to use.  In exchange for access to Gmail, I let Google mine the words I write so that it can show me better-targeted advertisements.  I get something good for no money, but in exchange I give up a little bit of my privacy.

In his book Winners Take All, Anand Giridharadas notes that Silicon Valley types refer to this kind of thinking as a “win/win” situation.  For that to be true, the consumer’s victory doesn’t have to be complete.  If I don’t care at all about Google and its advertisers knowing the topic of my emails, then I’m certainly better off using Gmail than an inferior alternative.  There are advantages and disadvantages to exercising this kind of autonomy, but the right to make these decisions is worth fighting for because the faculty’s very existence might ultimately be at stake.

This is particularly true when you consider technology in a classroom setting.  It is extraordinarily convenient for faculty and students to have homework modules packaged with their online textbooks, but what happens if your administration prefers to cut out the middleman and deal directly with publishers?  After all, it’s publishers, not faculty, who can easily tweak questions.  They’re the ones who keep the course data.  Any administration could conceivably contract directly with publishers, who would use faculty advisors from different campuses to centralize the writing of both content and exams.  Faculty would then become nothing but glorified teaching assistants.

This worst-case scenario requires predicting the future, but there are nonetheless plenty of examples of faculty losing their traditional prerogatives to technology that I can cite right now.  The most pervasive is the very existence of the Learning Management System (or LMS).  Invented in the 1990s so that universities could quickly cash in on the distance education craze, it has now become a stalwart presence in online and face-to-face classes alike.  On the one hand, it is a convenient way to exhibit copyrighted material in password-protected spaces and to show students how they’re doing at any point in the class; on the other, most LMSs are extraordinarily difficult for faculty to customize.  We, in turn, are forced to change the way we teach to reflect the platform our administrators contracted for rather than bend the Internet towards whatever way we want to teach.

Apply job-selection software to an academic setting and it becomes possible for a university’s Human Resources department to oversee the selection of tenure-track faculty, a process that was once the near-exclusive province of the professoriate.  Automate a process at the heart of a faculty member’s job, like essay grading, and questioning why we need faculty at all becomes practically inevitable.  The problem with these win/win situations in higher education is that faculty seldom win in the long run.  Once we give up a prerogative to our administrations through the use of technology, it is going to be increasingly hard for anyone, especially future faculty members, to ever get it back.

A Hostile Takeover of the Virtual Classroom

None of this means that all forms of academic automation are evil by definition.  About twenty years ago, I learned how to use Microsoft Excel and have used it ever since, not for my research, but for my grades.  I’m not a math guy at all, so it once took hours for me to generate students’ grades even when I employed a calculator.  Now, after writing one simple function at the end of every class, I can produce grades for any class I teach in about twenty minutes.  The lesson here is that faculty have to be the ones to decide when to automate parts of their work and which parts of their work should be automated.  The benefits of faculty controlling the way that technology gets used in their classrooms involve not just creating better classes, but also improving the morale and effectiveness of students and professors alike.
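To give you a sense of what that one simple function looks like, here is a rough sketch of the idea.  The category weights below are placeholders rather than my actual syllabus weights, and in Excel itself the same calculation is a single SUMPRODUCT formula copied down the gradebook:

    # Placeholder weights; a real syllabus would define its own.
    WEIGHTS = {"papers": 0.40, "exams": 0.40, "participation": 0.20}

    def final_grade(scores):
        """Weighted average of category scores, each on a 0-100 scale."""
        return sum(scores[category] * weight for category, weight in WEIGHTS.items())

    print(round(final_grade({"papers": 88, "exams": 91, "participation": 95}), 1))  # 90.6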

The problem with using Excel to calculate grades is that students can’t see their marks at any moment during the semester.  Yet even before online gradebooks had been invented, nothing prevented me from knowing my grade over the course of the semester back when I was in college.  I took the formula on the class syllabus, plugged in the grades I’d gotten up to whatever point of the semester we’d reached, and (despite my limited math abilities) figured out my grade myself.  I think it’s a good thing that students can both save time and check how they’re doing more often over the course of the semester, but the fact that administrators can also see what grade students are earning has huge potential drawbacks.

The most benign suggestion that I’ve heard is that faculty should use the Learning Management System more often so that data from our gradebooks can be used to promote student retention.  In the long run, this means using big data to study the problem across disciplines.  On a more basic level, if the university knows when students are doing poorly, then it can send out warnings automatically, long before I even notice there’s a problem.  What I resent here is the idea that I may have to move my entire class onto a computer program that defines both the way that I interact with my students online and the structure of the entire class, in exchange for a few days of early warning time and the other potential benefits of big data.
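To be clear about how modest that automation really is, here is a rough sketch of the kind of early-warning check an LMS could run against its gradebook.  The threshold and the student records are invented for the example:

    # Flag anyone averaging below a hypothetical warning threshold.
    WARNING_THRESHOLD = 70.0

    gradebook = {
        "Student A": [95, 88, 91],
        "Student B": [62, 55, 70],
        "Student C": [80, 45, 58],
    }

    for student, scores in gradebook.items():
        average = sum(scores) / len(scores)
        if average < WARNING_THRESHOLD:
            print(f"Early warning: {student} is averaging {average:.1f}")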

This is not a win/win situation.  It is a hostile takeover of the virtual classroom.  Faculty and their administrators could probably work out a way to use their Learning Management Systems to improve student retention without too much trouble, but only if they are all sitting at the same table.  The problem is that if faculty don’t even recognize that their prerogatives are being violated, they won’t ask for a seat at the table, and their voice will surely disappear before too long.

In their 2014 statement on “Academic Freedom and Electronic Communications,” the American Association of University Professors suggested, “Online teaching platforms and learning-management systems may permit faculty members to learn whether students in a class did their work and how long they spent on certain assignments. Conversely, however, a college or university administration could use these systems to determine whether faculty members were logging into the service “enough,” spending “adequate” time on certain activities, and the like.”  It is not a big leap from that point to suggest that the failure to meet the goals set by monitoring software could be used to justify the replacement of teachers with artificial intelligence.

The most dangerous aspect of introducing new technology into college classes of all kinds is that it might convince both edtech companies and many college administrations that they know how to teach better (which often just means “more efficiently”) than we do.  When the decision to employ education technology is made exclusively by management, a structural imperative tends to move that technology towards its most evil iteration.  The battle for the academic means of production is a battle over priorities.  If faculty accept automation on its face for the sake of our temporary convenience, or have no role in its implementation at all, then we will have no right to complain if or when the robot professors actually arrive.

2 comments:

  1. AAUP has just come out with some recommendations regarding the protection of online teaching: https://docs.google.com/document/d/1FdPebPy37flYWTvIaWrKwXU_QkLPpERvMSuA4hw3hxs/edit?link_id=1&can_id=ef3f77169d9fda3f47e586e032edc02b&source=email-following-up-with-you-about-online-programs&email_referrer=email_530133&email_subject=following-up-with-you-about-online-programs

  2. One is reminded of Twain's, "The report of my death was an exaggeration.” I like to show students - when the issue of technology forecasts comes up - a series of clips. The 1930 film "Just Imagine" showing life in 1980 featured flying cars. But none appeared in 1980. What did appear a couple of years later was "Blade Runner" with flying cars in 2019. Yet here we are in 2019 and there aren't any. Meanwhile, the film "Blade Runner 2049" put the date for flying cars another few decades in the future. We were supposed to have sentient computers by 2001 - remember Hal - but we still don't. (On the other hand, Hal was a mainframe with poor graphics - apparently there were no laptop Hals in the fictional 2001. In fictional 2001, folks routinely flew in space on Pan Am and made telecalls on the Bell System.) Note that if in 10 years robots will be teaching your course, why wouldn't robots in 10 years also be TAKING your course? Now THAT is something to worry about!
