“YOU GIVE A DOG A TREAT WHEN HE PERFORMS WELL AND HE’S happy. He doesn’t remember that the treat came with a price tag, that it’s for performing something up to expectation. It’s all jumbled—the warm fuzzy feeling, how good the treat tasted, the affection, approval, the pat on the head, ‘Good boy!’ the connection, the bond. He’s a happy dog.”
Professor Alexander Coward once again runs his fingers through his sandy hair. The look is tousled, the gesture anxious.
A boyish smile dimples his right cheek, as if he is remembering some beloved, faraway family pet. Momentarily, his sadness dissipates. It’s easy to see the charisma and empathy that made the four-hundred-seat lecture hall at the University of California, Berkeley, overflow with students clamoring for a spot in Math 1A.
Students rarely clamor for Math 1A. It’s the introductory course, the one that’s required for several competitive majors. It’s known as a flunk-out course. Coward is lost in his thoughts. He’s comfortable there, residing in silence. When he speaks again, sadness has returned. “Humans aren’t like dogs. If there’s both encouragement and a reward, it becomes a disincentive. You wouldn’t think it would work that way, but it does. Instead of a warm fuzzy feeling, we get cross and cynical if we think we’re not learning for our own good but for some external reward. We start thinking we deserve the reward and feel cheated or demoralized if we don’t get it. Maybe it’s because we keep having to take tests from an early age. We’re always being graded, compared with everyone else, overtested so that we grow to resent it. Pretty soon the grade is what we strive for, not the learning. It becomes what counts—even all that counts. We’re not learning to improve ourselves. It’s not both/and the way it is for dogs. If you try to give both, it’s less than either.”
Coward is a math professor at Berkeley. Or was. In 2016, his contract wasn’t renewed. Before that, when he was still teaching Math 1A, he worked hard to apply the best pedagogical principles of active, engaged, motivated learning to help students master math. His Math 1A was no standard, dry lecture class but rather an intellectually engaged course that inspired curiosity and encouraged students to stick with the subject. His sections filled up fast. Whereas some Math 1A professors had a hundred students, his sections swelled to overcapacity with four hundred students. He didn’t give quizzes to ensure the students were doing the homework. He didn’t give homework in the conventional sense. Instead, he inspired his students with ever more difficult problems and excited a love of math many didn’t know they had. In the end, when they were tested, his students did great, exceeding the demands of the course set by the department.
He should have been a hero. As he came to see it, though, there was only one problem. His students started enjoying math a little too much. They started learning the math for its own sake, not simply to pass an often-dreaded prerequisite for highly selective majors such as electrical engineering, computer science, physics, biology, economics, or even the technology-focused undergraduate bachelor of science degree offered by the Haas School of Business. Students who couldn’t enroll in his course registered for math classes with other professors but then came to sit in on his course for no credit. Even when a departmental adviser warned him that his exam was too difficult, Coward was confident his students would do well—and they did, without any class time devoted to how to game the test to earn the best scores. Coward adamantly refuses to “teach to the test.” He teaches for the love of math, and inspires that love in his students.
Coward’s ideas on how to teach run in the opposite direction of much contemporary education practice, where test scores have become a stand-in for actual mastery or knowledge. In K–12, teachers today can be denied merit raises if their students fail to obtain high enough scores on high-stakes standardized tests. High schools, like colleges and universities, boast about the average SAT scores of their students. Universities are ranked partly on the average SAT scores of entering students and on Graduate Record Exam scores of graduates. It’s fair to say that we have become a test-obsessed culture on every level. At a top school like Berkeley, students need superb grades and test scores to get in and equally top grades to be admitted to the most selective majors. Much current education is geared toward improving test scores. By contrast, Coward bases his pedagogy on the work of researchers who have found that tests and grades can actually undermine learning and even performance. He is interested in the research on what makes for optimal learning, especially comprehension and an ability to apply the math principles one learns to other areas of one’s education and one’s life.
Coward is most influenced by the classic studies conducted by British psychologist Ruth Butler and published in the late 1980s. Butler investigated how over 130 seventh graders in twelve classes at four schools responded to feedback they were given on tests requiring complex thinking tasks. Each day, the students’ work was collected and independent markers would give the exercises one of three different kinds of feedback. Some students received formative feedback designed to help them in the future (for instance, “You thought of quite a few interesting ideas. Next time, why not try to add a few more examples, maybe drawing from personal experience.”). Others received only a numerical score ranging from 40 to 99 (called “summative” feedback); they received no comments or explanations or guidance about the future. A third group received both formative and summative feedback, a helpful suggestion and a score.
Everyone assumed the more thorough response—the formative feedback plus the summative score—would be the best way to encourage students to learn and to do better on future assignments, but no one had actually tested the notion before. There were some surprises.
Those who had received only formative feedback showed the biggest gains, with a dramatic 30 percent improved score on a second exercise. This was not surprising in itself because formative feedback has long been shown to be the best way to improve learning. It’s similar to how we encourage learners everywhere except in school. If I’m teaching my son how to throw a fastball, I might give him encouragement along with some feedback on the biomechanics of his windup or the placement of his fingers on the seams of the ball. I do not give him a summative grade, say, a B–, as he’s learning to pitch.
Those who had received summative scores did better on the second exercise only if they had done well on the first. This confirms extensive research on the limitations of summative feedback. If given only a number, students tend to assume they are “bad” at the skill or subject being tested, and so the low score tends to be a disincentive to learning. Students who do poorly don’t just feel they have failed. They feel that they are failures. Having a low opinion of one’s abilities does not often inspire one to pursue further learning.
There was another important insight from the high-scoring students who received summative feedback. Although they did better on the second exercise than they had on the first, they also quickly forgot what they had learned for the test. Unlike those who had received formative feedback, the students who had achieved high numerical scores had only short-term gains. In other words, even the ones who did best had simply “learned to the test.”
The biggest surprise for learning experts came from the blended third group. Those who had received the written comments (formative feedback) as well as the test score (summative feedback) showed no benefits from the formative help. The very presence of a grade canceled out the advantages of formative feedback.
Professor Coward throws up both hands again. He rubs his face. “We are not dogs. If we want to learn, we need feedback. Grades interfere with the process.” Coward challenged his students with problem sets all semester. He didn’t assign a textbook. He never told them what would or wouldn’t be on the final exam. He empowered them to learn, to gain mastery over the subject, to think like mathematicians. He tantalized them with thorny problems and seeming contradictions, and rewarded their ability to think through the most challenging mathematical conundrums. He honored them by assuming they wanted to learn for their own good, for their future. There were no pop quizzes for surveillance purposes, to make sure they were doing all the homework. Instead, he treated them like adult, autonomous learners, who were taking the class not to earn a grade but to understand the basic fundamentals of math and how those principles could be applied to the rest of their learning. He used time-honored, student-centered learning principles that go back at least as far as John Dewey: grant your students control over their own learning process; have them work in teams or groups; encourage them to assess the validity of what they know; have them reflect on what they have learned and why it is important in other situations; have them teach one another; assist strategically when they are flailing; step out of the way when they are progressing again; and recognize their achievements.
He was reprimanded for his unconventional methods. No one had seen so many students flock to Math 1A. Aspersions were cast on the rigor and content of his teaching, as if students couldn’t possibly be that excited by a truly demanding course. But it’s math. When the students in his courses scored just as high or higher than students in other sections of Math 1A on departmental finals, his popularity could not be dismissed on the grounds that the course was “easy.” Instead of the department being curious and even admiring about his methods, it warned him about deviating too much from departmental “norms.” He responded to the criticism by working even harder to ensure his students were learning. He gave out his cell phone number and invited students to call if they ran into problems. He carefully explained fundamental principles and, even more, encouraged them to learn the principles themselves, including by teaching one another and challenging themselves with ever harder problems. The next time he taught the introductory course, even more students signed up for his class. Many who couldn’t get in officially just came anyway in order to learn.
“What does it mean to adhere to department norms if one has the highest student evaluation scores in the department, students performing statistically better in subsequent courses, and faculty observations universally reporting ‘extraordinary skills at lecturing, presentation, and engaging students’?” Coward summarizes the problem the Math Department had with his teaching: “In a nutshell: stop making us look bad. If you don’t, we’ll fire you.”
Only two images decorate the walls of Coward’s spare, modest study, with its functional Ikea-style furniture. One is of Muhammad Ali from his fighting-trim days, in boxing trunks practicing in front of a mirror. Above the mirror are posters from Ali’s previous fights. “Champions are made from something they have deep inside them, a desire, a dream, a vision,” the legend says. “They have to have the skill and the will. But the will must be stronger than the skill.”
On another wall is a poster of Steve Jobs. There are words on it from his famous 2005 commencement address at Stanford, the one that begins, “Your time is limited.” It ends with the admonition: “Stay hungry. Stay foolish.”
“I went on a bit of a shopping spree a few months ago because I decided I wanted some inspiring posters for my students, but as it turned out I needed them for myself,” Professor Coward says, almost in a whisper.
The year 2016 was a tough one. In the month after Math 1A ended, his students received their final grades for the course, and he received a figurative pink slip. His time as a math instructor at Berkeley is over. On his Facebook page, students leave him warm and fuzzy goodbyes, tinged with sorrow that he won’t be returning to Berkeley.
We constantly hear the lament that the United States has a STEM crisis. Nationwide, fewer than a quarter of high school graduates who say they want to study STEM fields in college make it. Most transfer into non-STEM disciplines by sophomore year. Nationally, a poor grade in an introductory math class is what is most likely to prevent a student from pursuing a STEM dream, whether it is to become a doctor, a nurse, or an engineer. Our major universities are structured to limit the number of students who do well enough in introductory courses in math or organic chemistry to pursue future STEM careers. It’s hard, in such a structure, to even know what to do with a superstar teacher like Alexander Coward, who takes the big introductory course—typically known as a flunk-out course—and turns it into an intellectually exciting, inspiring course for hundreds of students.
This brings us, once again, back to the origins of the research university and its infrastructure that was carefully designed to provide a path toward professionalization and credentialing for a range of new or evolving fields. We see this in the pyramid structure of most colleges and universities today, a wide base tapering to increasingly specialized, disciplinary, and preprofessional knowledge. To earn a spot in some specialized majors (electrical engineering or molecular biology, for example) requires earning a top grade in an introductory course. Flunking out, so far as a future career in one of the sciences might be concerned, might be earning a B+ instead of an A. At large universities, the general introductory courses tend to be lecture courses taught by beginning professors, low-paid instructors on one-year contracts, or adjunct professors teaching part-time. Or they are taught in weekly lectures by famous professors, with many smaller discussion sections (including the grading of papers and exams) run by graduate students. These courses typically sort the best students from the rest and determine who will go on to be admitted to upper-division, specialized, disciplinary, preprofessional work in a major. Upper-division courses are often far smaller, sometimes even offered as tutorials and independent studies with senior members of the department. The top students graduating in that field might then decide to go on to the even more specialized work in graduate school or professional school, often pursuing the specialization of their adviser.
Often the big introductory lecture courses have departmental final exams, with the percentage of passing or A grades fixed by a departmental consensus. If too many students are taking and doing well in math, it changes enrollments in other STEM majors throughout the university. That means, if too many students are doing well on a standardized test, a department can either make the test harder or change the grading curve. If the objective is to work toward increasing specialization and professionalism, this system works. If the objective is to expand a field or to address the shortage in STEM training, this system is counterproductive.
It is also unproductive if one’s mission is to actually teach each and every student to understand a subject better than they did when they entered the course. That was Alexander Coward’s predicament. He was not particularly concerned with the institutional or historical reasons behind the design of the large introductory courses like Math 1A. He is passionate about math and its importance to anyone who wants to survive in school and outside of school. During his time teaching Math 1A, he worked hard at his teaching and was unstinting of the time he was willing to spend with students. He really didn’t care about their grades (although, in fact, his students did very well on their final exams). He was dedicated to making sure that each and every student who wanted to learn the basics of mathematics did so. He wanted them to learn to think like mathematicians. If his students also earned an A at the end of the process, he would applaud that, but for him, good grades were secondary to the purpose of teaching math to students—lots of students.
NO ONE READING THIS BOOK WILL BE SURPRISED TO LEARN THAT the apparatus of grading and judging, assessing and failing students—including bell curves, grades, and standardized testing—was fully developed in the nineteenth century. All of these innovative and supposedly scientific assessment methods were quickly absorbed into the new education of Charles Eliot and his colleagues. These were yet another creation of the industrial age, designed to remake the long traditions of formative feedback (admittedly, sometimes meted out with a switch). The methods we still use for evaluating student achievement were adopted from quantifiable measures of productivity developed for factories and the brand-new assembly lines.
It was a nineteenth-century scientist, Sir Francis Galton, who invented the modern science of statistics, developing the now familiar ideas of standard deviation and deviation from the norm. When applied to grading students, the model allows teachers to place each student on a curve relative to the others, such that there is a specific percentage of top students, middling students, and failures. It was a new way of grading—and eminently scientific, it was claimed at the time. The bell curve is a prime example of institution-centered learning because it does not base assessment on how much an individual student has actually learned. Rather, a different number of students might be permitted to receive the top grade depending on institutional goals. For example, one year a department might want to award the highest honors (say, an A) to the top 10 percent of students. Then, if there is less funding or too many applicants the following year, that department might set the bell curve so that only the top 2 percent receive honors. It’s not that students know more one year than another but that the “cutoff” for excellence or for passing or for failure varies depending on preset criteria as determined by institution-centered needs. In this case, assessment is not designed to help the student but to control the number of students passing through the system. One year, for example, several faculty might be away on sabbatical and the department might not want too many A+ or honors students going on to the advanced, specialized courses. Another year, the opposite might be the case. Before these “scientific” measures came into play, instructors wrote out comments or rendered verdicts based on an oral exam, recitation, or performance. 
Teaching and learning were by rote, of course, and students still could fail to pass a course, but failure was not preset (what’s called “norm referenced”) according to the predetermined percentage of students who should or should not be allowed to succeed relative to one another.
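The mechanics of norm-referenced grading can be sketched in a few lines of code: the boundary for an A is computed from where a score falls relative to the rest of the class, not from any absolute standard of mastery, so the department can tighten or loosen the quota from year to year. Here is a minimal illustration; the function name, quota fractions, and scores are invented for the example.

```python
# Norm-referenced grading: the cutoff for the top grade is a quota
# over the class's score distribution, not a fixed standard.
def a_cutoff(scores, top_fraction):
    """Return the minimum score that earns an A when only the
    top `top_fraction` of students may receive one."""
    ranked = sorted(scores, reverse=True)
    quota = max(1, int(len(ranked) * top_fraction))
    return ranked[quota - 1]

scores = [95, 91, 88, 84, 79, 75, 72, 68, 61, 55]

# One year the department awards an A to the top 30 percent...
print(a_cutoff(scores, 0.30))  # cutoff is 88

# ...another year, only to the top 10 percent. Same scores, higher bar.
print(a_cutoff(scores, 0.10))  # cutoff is 95
```

The point of the sketch is that a student scoring 88 earns the top grade in one year and misses it in another, with no change at all in what that student knows.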
To demonstrate the efficacy of his bell curve, in the 1890s Francis Galton invented what he called the “bean machine,” a device in which little balls drop into a grid with interleaved rows of pegs. The balls fall down the chutes and are collected in compartments, the most piling up in the center and fewer and fewer toward either edge, with all of them falling into a pattern known as the bell (or Gaussian) curve. In Galton’s bean machine, moving the levers one way or another adjusts the curve, determining how many balls fit into the top bin. The student ends up with a “mark” or a “grade” that seems to be objective, but of course it allows the person or institution setting standards to determine how open or closed the curve will be.
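The bean machine is easy to simulate: each ball makes a series of independent left-or-right bounces, and the count of rightward bounces determines its final compartment, which is why the compartments fill out a bell shape. A rough sketch, with the number of peg rows and balls chosen arbitrarily for the illustration:

```python
import random
from collections import Counter

def galton(balls, rows, seed=0):
    """Drop `balls` through `rows` of pegs. Each peg deflects a ball
    left (0) or right (1) with equal probability; the final bin is
    the total number of rightward bounces."""
    rng = random.Random(seed)
    bins = Counter()
    for _ in range(balls):
        bins[sum(rng.randint(0, 1) for _ in range(rows))] += 1
    return bins

bins = galton(10_000, 10)
# Print a crude histogram: the middle bins collect the most balls.
for slot in range(11):
    print(f"{slot:2d} {'#' * (bins[slot] // 50)}")
```

Running this piles most balls into the middle compartments and leaves the extreme bins nearly empty, the familiar bell. The "levers" of the metaphor correspond to choosing where along this fixed distribution to draw the cutoffs.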
Galton’s bean counting (literally) may seem neutral and scientific—until one learns that, among other things, Galton was a passionate eugenicist. He believed British aristocrats were genetically superior to everyone else and that the British government should subsidize procreation among the upper classes and sterilize the poor and working classes. One motivation for his bell curve and his bean machine was to be able to rate intelligence scientifically in order to bolster his theories of who would or would not improve the human curve.
The bell curve can also serve constructive purposes. It can be a tool an institution uses to see how well its students and its professors are performing. If used well, a bell curve can help an institution see, at a glance, who is succeeding and who is failing and compare instructors easily. If every instructor gives the same test to students from the same demographic and the standard is the same for all, then the institution can clearly see whether any one professor’s students are disproportionately achieving the pinnacle. Such students change the distribution; they visually, literally, spoil the curve. The bell curve can also become an end in itself, as if the objective is to achieve high scores rather than the excellence that those scores represent. The bell curve can easily be turned into a tool for wielding bias. Because we know performance correlates with training and because public education in the United States is funded locally and varies greatly depending on the wealth of a municipality, it is easy to think the top scorers are the most brilliant students, not those who are lucky enough to attend schools in the wealthiest districts. The bell curve is not inherently sinister, but it is important to recognize that even seemingly objective systems for assessment come with assumptions about values, worth, and humanity built in along with other ideas about the definition of intelligence, its innate character, and an individual’s ability to change, learn, or improve through education.
The rise of grading—as distinct from evaluation—is another improbable story with some strange components. We don’t have a precise record of who, what, when, where, why, or how the earliest examples of letter and numerical grading came into being; quite a lot of folk legends surround this history. The term grade seems to have come from the idea that students wouldn’t “make the grade” in the same way that an imperfectly manufactured product would “fail to make the grade” by falling off a sloped conveyor belt in an assembly process.
As for the first prophet of grades, some point to a British don at Cambridge, William Farish, who in 1792—the story goes—decided to issue points or numerical grades to the students who wrote out commentary followed by oratory. Others give the credit to Yale president Ezra Stiles, a notably learned scholar who delivered his inaugural address to the student body in Hebrew, Arabic, and Aramaic. In 1785, Stiles personally graded the fifty-eight seniors at Yale: “Twenty Optimi, sixteen second Optimi, twelve Inferiores (Boni), ten Pejores.”
The term grade had another early educational usage. “Graded,” “age-graded,” or “grade” schools arose in the third decade of the nineteenth century, as uniformity began to be praised as an educational objective. More and more schools placed children in grades according to their chronological age, and compulsory public school laws began to designate the age at which a child had to begin school. At the time, it was almost a fad to group like things together. Eggs, for example, began to be arranged, graded, and sold by uniform size (“Grade A” eggs). Kids aren’t eggs, though. So they were sorted by age, not height or weight. Sorting by a cutoff birth date reduced to one measure the constellation of factors that might otherwise determine whether a child was ready to start school. Kids were sorted not by ability, interest, or emotional maturity but by age alone; a child entered school with everyone else who had passed an arbitrary birthday threshold.
By the last decades of the nineteenth century, educators began experimenting with letter grades to represent educational achievement. The first institution of higher education to create and implement a system of letter grades was Mount Holyoke, in 1897. America’s first women’s college adopted them as part of its effort to move to the forefront of modernizing education for modern women. Yale, Harvard, William and Mary, and several other colleges in the late nineteenth century also experimented with different ways to “standardize” written evaluation and commentary through a single metric that stood for everything a student had learned in a given term. Harvard tried a twenty-point system and, later, a hundred-point scale. William and Mary had four groupings, from excellent to failure.
A controversy arose in response to Mount Holyoke’s decision. Interestingly, it was not about whether it was wise or desirable to reduce the complexity of handwritten, discursive feedback to something as simple as a summative letter grade. The controversy was over E as the failing grade. Although A, B, C, and D had no referential value, the fear was that students would fail their courses but somehow palm off their E grades as designating either “Excellent” or “Effort.” It was important to have a clearly marked failing grade, with no room for ambiguity. So Mount Holyoke adopted the F, justified by the Anglo-Norman failer, meaning nullification, nonoccurrence, or failure. Other schools soon followed suit.
Apparently, University of Illinois professor of agriculture Herbert Mumford introduced the idea of grades to the American Meatpackers Association soon after Mount Holyoke adopted them. The meatpackers worried, though, that it was difficult to reduce something as complex as the quality of sirloin or chuck to an A, B, C, D, or F. So, from the beginning, they insisted that, along with the grade, the written comments of the meat inspector be tied to each and every piece of meat. What we would now call the “metadata” traveled with each piece of graded meat.
Strangely, educators were less skeptical about applying grades to student learning. They were quick to take on a variety of standardized measurements that reduced the complexities of intelligence, aptitude, and achievement to a single alphabetic or numerical score.
The different forms of testing have a history as complex as that of bell curves and grades, and also have unsavory elements. The development of IQ (intelligence quotient) testing is typical in certain respects, although also the most troubling. In 1904, psychologists Alfred Binet and Théodore Simon were commissioned by the Ministry of Public Education in France to design tests to identify and diagnose children who were struggling with the French academic curriculum. They used the word intelligence to describe what they were testing for and were careful to define the term according to the older sense, meaning understanding; the ability to grasp a concept; sagacity; or aptitude. The Binet-Simon test was not intended to measure some fixed biological attribute but, rather, was supposed to be a diagnostic aid that could help teachers improve student outcomes.
Binet’s views were disregarded. Within a year of his death in 1911, other psychologists had devised equations that calculated scores on the Binet-Simon test against age to yield an “intelligence quotient.” In World War I, Robert Yerkes, president of the American Psychological Association, and Edward Lee “Ted” Thorndike, an eminent behavioral psychologist, used the new IQ tests on more than a million recruits to determine who possessed enough intelligence to serve as officers. Both were eugenicists and believed ethnic groups had particular genetically programmed intellectual abilities. They believed the IQ tests supported the conviction that Jews, Italians, Irish, eastern Europeans, and African Americans were intellectually inferior to native-born, English-speaking, Anglo-Saxon Americans. The tests were also adopted by the US immigration service and contributed to the passage of the exclusionary Immigration Restriction Act of 1924. Oddly enough, the same tests that were deemed objective and scientific for categorizing race and ethnicity had to be recalibrated and weighted by gender. Women, who were assumed to be less intelligent than men, inexplicably did just as well on the IQ tests as men.
In 1913, educator I. E. Finkelstein grew alarmed that thoughtful, expository evaluations of whole people were being reduced to test scores, letter grades, numbers, and statistical averages. He wrote an unsparing critique of these new practices. His criticism foreshadows virtually all modern denunciations of the reductive nature of grades, including Ruth Butler’s and Alexander Coward’s. Finkelstein argued that standardized grading deludes us into thinking it represents something real, important, objective, comprehensive, scientific, and true. As he put it, “Whether numbers or letters… we can but be astonished at the blind faith that has been felt in the reliability of the marking system.” Finkelstein wondered how human thought—messy, partial, nuanced, inconsistent, changing, and, above all, complex—could be reduced to a single number or a letter, just like eggs or sirloin or automobile parts moving along on a conveyor belt. He protested that all these forms of summative assessment made knowledge a score, not a process that you would continue to develop and improve lifelong. How can grades, Finkelstein wondered, possibly be meaningful or equitable—or an inspiration to further understanding?
Charles Eliot was not immune to the lure of the supposedly “scientific” letter grading system. As a young chemistry teacher, he had departed from the recitation and oratorical mode of teaching to pioneer labs and deductive and inductive scientific methods, and he was among the first Harvard instructors to give up traditional oral exams in favor of written ones. The reforms he put in place as president of Harvard, designed for specialization, professionalization, and credentialing, were of a piece with the various grading and standardized testing impulses of the era. In the 1870s, during his presidency, Harvard experimented with a hundred-point scoring system for classifying students; in the 1880s there seems to have been a brief and incomplete experiment with a letter grading system; and in the 1890s percentile rankings were adopted, with a special classification for “merit with distinction.” Eliot was nothing if not a modernizer, and these changes grew out of his ambition to turn Taylorist scientific labor management theory, designed for mass production and the assembly line, into scientific learning theory for the new, modern university.
Standardized college entrance exams have a less controversial history than IQ tests, but their legacy is just as influential, especially as evidenced in the movement known as “accountability-based” standards or, sometimes, “outcomes-oriented” education. Because high school teachers are implicitly preparing their students for college, and grade school teachers are preparing students for high school, the test that would become the Scholastic Aptitude Test in 1925 has had a profound influence on all of contemporary education, not only in the United States but also worldwide.
As we’ve seen, in 1914 when Frederick J. Kelly wrote his dissertation “Teachers’ Marks, Their Variability and Standardization” at Kansas State Teachers’ College, he advocated standardized achievement testing in the form of one-best-answer, multiple-choice testing. He developed this method, first, because there was a teacher shortage and it allowed for efficient grading by the untrained, who could simply put a grade sheet over the test sheet and mark off the right answers, thus freeing up the teachers’ time. Second, he believed a standardized test could be diagnostic of basic skills. It would end “variability” (subjective judgment) because every test question would yield one and only one exact right answer and serve as a measure of how well schools and teachers were improving what he called the “lower-order thinking” of those who were from, in the terminology of the day, “the lower orders.”
As secondary education was becoming commonplace and as immigrants flooded into the United States, Kelly’s Kansas Silent Reading Test was used to speed up the grading process at a time when there was a teacher shortage. Less than a decade later, the College Entrance Examination Board adopted Kelly’s efficient but reductive form of testing as the basis for the SATs. Like the production of the Model Ts of the era, knowledge was standardized, grading was automated, evaluation was summative. Throughout contemporary education, preschool to graduate and professional school, it still is. We might call it high standards, but it is, more properly, standardization, a legacy of an era when standardization was scientific, exciting, and new.
ALEXANDER COWARD IS THE RARE EDUCATOR WHO HAS LITTLE USE for bell curves, grades, standardized measures, or teaching to the test. He cares about his students and he cares about math. An eloquent explainer, he likes to emphasize that the logic behind mathematics applies to all areas of life. Calculus is the study of change, geometry the study of shapes, and algebra (the basis of all mathematics) the study of symbols and how to manipulate them. Having clarity on such fundamental principles is necessary in any contemporary occupation, whether a factory worker reporting on robotic productivity, a middle manager analyzing sales reports, an X-ray technician reading a CT scan, or an ordinary citizen trying to understand the welter of data visualizations, statistics, and polls cited by authorities on just about any topic. Coward believes that his heresy at arguably the greatest public university in the world is that he thought the essential job of a professor in a top math department was to help brilliant students learn. He thought it was a public good to keep the highest possible standards in an introductory math class required for many majors and to use formative feedback so that students truly learned the subject matter. Far beyond that, he was creating the conditions in which they could learn to think like mathematicians, learn to be mathematicians.
Berkeley is a great public university, arguably the greatest. It is extremely difficult to get into Berkeley. It is a notoriously demanding university and it has an impressive retention and graduation rate of over 90 percent, close to that of the exclusive, elite Ivy League universities. Berkeley’s commitment to excellence is legendary. In this extraordinary setting, the inspiring teaching happening in Coward’s Math 1A lecture hall still stood out as exceptional.
Coward reports that, in 2014, he was warned by senior department members to moderate his methods. He says he was advised to conform more to the ways that other profs taught. His students weren’t just taking the class because it was fun or easy, a “gut” course, in student parlance. They did well on departmental exams and retained and applied what they learned in his class. The Mathematics Department tracked the students who took Coward’s Math 1A class, measured their performance the following semester in Math 1B, and compared it to the performance of students who had taken Math 1A with another professor. It turned out that Coward’s students achieved more in the second class than the other professors’ students did, by a small but statistically measurable margin. One will recall that, in the learning theories of Ruth Butler, formative feedback was shown to increase long-term results. Coward is certainly convinced that his careful, encouraging feedback had an impact on students’ ability to retain math beyond his courses.
If students are excited to be taking an introductory course that, at most universities, is greeted with dread, especially one as important as basic math, don’t we all want to know the secret? What was Coward doing that others weren’t? What can others learn from his pedagogical methods? Is his way of teaching math a one-off or is it a model that others might emulate? Given Coward’s success, one might think that there would be great curiosity about his methods within his department and within the broader institution. That does not appear to have been the case. Coward claims his department chair asked him, “If you had a job at McDonald’s and came along with all these new ideas, how long do you think you’d carry on working there?” Coward did not find this to be a helpful question, and continues to find it insulting and depressing that teaching in a stellar fashion would be compared to working at McDonald’s. (The Math Department maintains that, for privacy reasons, it cannot address Alexander Coward’s allegations about why he was let go.)
As Coward contemplates his bid to be reinstated at Berkeley, he is considering other potential career choices. Several of his friends in the field have left academe entirely for Silicon Valley opportunities that are incomparably more lucrative. These friends tend to think higher education is so hopeless that it cannot be salvaged. It’s obsolete, out-of-date, not in tune with the kinds of computational thinking students gifted in math need these days. Math is important in every field where data are relevant, but it is still taught as if its only function is to train future math professors, with rigid and partitioned specializations and subspecializations and antiquated distinctions between theoretical and applied fields that no longer pertain unless you are a math professor conducting and publishing research for other math professors. Even the academic disciplinary distinctions among math, statistics, computer science, and engineering are archaic outside the academy and constantly blurred in real-world practice. Yet, in the modern research university, these are not only separate fields but sometimes antagonistic divisions, with the highest prestige going to the most theoretical and abstract mathematics and a general academic condescension toward the applied. Peer-reviewed publishing in these highly specialized fields is crucial to the reputation of scholars, departments, and the university as a whole.
At this point, Coward isn’t sure whether the university is worth fighting for. Perhaps there are simply too many obstacles to real teaching and learning. He’s not sure how some distinguished full professors, many of whom made their reputations in their twenties or early thirties, still pull their weight as scholars or as teachers. It’s not clear where students and student success fit into the larger disciplinary reputational system of a distinguished department.
Are those tenured, full professors the problem? Is it the fault of this increasingly rare breed, so concerned with its own research agendas that its members want only a handful of the very best, most persistent, most independent students, the better to focus on the highest level of mathematics, on more publications and, where feasible, more grants? Is it the fault of the institution? The answer to these questions is both yes and no. Of course there are problems everywhere within inherited and legacy systems, including with stodgy professors who are loath to change their ways. But material factors contribute to these problems. Specifically, the necessity of maintaining high rankings exerts a tremendously conservative pressure on any institution. If an institution tries something new that isn’t recognized yet in the field, or (heaven forbid!) if it attempts an experiment that fails, its rankings can plummet.
Rankings of a department and an institution are precious and costly. According to a study in Research in Higher Education, if a university were to try to rise significantly in the rankings in US News & World Report, it might have to spend millions or even tens of millions of dollars to compete with top twenty universities. Berkeley’s Math Department, in its enviable top five position, dares not lose its place; trying to crawl back to the top could prove financially disastrous or even impossible in a university that has suffered forty years of per capita cutbacks. The pass rate of students in introductory courses doesn’t factor into prestige rankings. Limited funds make it all the more imperative to have large lecture courses like Math 1A that serve as flunk-out courses so that only a few students make it into the far smaller, more selective advanced math classes taught by full professors. This funneling frees those full professors to publish articles, fulfill professional duties, and obtain grants for highly specialized research that keeps the rankings elevated. Only 12 percent of the University of California, Berkeley’s total operating expenses are covered by state support. The rest must be covered by tuition and external sources—typically, by sponsored research or grants earned in rigorous competitions in the sciences, often with acceptance rates below 5 or 6 percent. The professors working toward those grants might defend themselves by noting that the Math Department at Berkeley is in the top five because they work day and night to produce peer-reviewed research.
They would not be exaggerating. Contrary to the image of the lazy professor who only teaches four or six hours a week and has summers off, every study of professorial labor shows they are among the hardest working of any profession. Every time a new study is undertaken, the reported workload increases. Currently, faculty work an average of sixty-one hours per week, year-round. They work ten hours each workday and another ten hours over the weekend, including summers. Full professors work longer hours than either associate or assistant professors. About 40 percent of that time is spent on teaching, in the classroom or preparing for class. That means teaching is pretty close to a full-time job in itself, without the other required components of research and university service. And it’s a solitary profession: most of what faculty members do they do alone, on campus or at home. An anthropologist conducting one study of faculty labor titles his findings “The Long, Lonely Job of Homo academicus.”
Even if professors are actually teaching a lot and spending a good portion of their time in that effort, the overall ecosystem of higher education does not reward good teaching in the same way it rewards (and requires) measurable “outputs”—peer-reviewed articles, books, professional papers, and grants as well as “citations” of their work in articles by their peers. These outputs are measured and documented by universities, another example of reducing intellectual merit to a standardized metric. Talk about a deterrent to innovation! From the beginnings of the modern American research university in Eliot’s day, teaching and student learning were not the central mission. Teaching more students to understand and even love math is, in a structural sense, not the objective of a world-renowned math department. Standards setting, measuring, assessing, and ranking are important to maintain not only top students but also top faculty. Measurable standards are connected to reputation, ranking, and accreditation. This approach is also about the replication of expertise. Full professors at elite universities such as Berkeley typically view their highest calling as preparing their students to become full professors at elite universities such as Berkeley. Rankings don’t track the students who flunk Math 1A, but they do encompass how many math majors go to graduate school and where.
As of this writing, Alexander Coward holds informal math office hours at the Free Speech Movement Café in the center of the Berkeley campus, where he continues to teach math for free. He’s begun signing up student mentors to help other students with their math and, so far, nearly fifty students have expressed interest. He has also taken a number of online programming courses from Udacity and is now programming every day, working on his own start-up company, an accreditation system that will help students validate what they learn outside of formal education. His approach in his new company, as in his tutoring at the Free Speech Movement Café, is to guide people through what they know, give them suggestions about what might have value in the world, and offer a platform from which others can then judge that work themselves. “No grades,” he insists.
EVERYTHING IN HIGHER EDUCATION IS GRADED AND RANKED: STUDENTS, professors, departments, institutions. If the bane of would-be innovators and risk takers in corporate America is the quarterly shareholders’ meeting, where one must constantly show an upward-trending balance and short-term gains, then the bane of like-minded individuals in higher education is accreditation, including rankings. The Carnegie Classification of Institutions of Higher Education, the framework with which every US college and institution is classified within comparable groups for educational and research purposes, both ensures quality and can stymie experimentation. All the data relevant to accreditation, certification, and ranking are available in the National Center for Education Statistics Integrated Postsecondary Education Data System, the dreaded “IPEDS.” Any slip in status is recorded there, for all to see, on a public website.
Grades rank individual students within an institution. Test scores rank individual students outside of and across institutions, ostensibly to give a clear measure of excellence on a national scale. Institutions themselves are ranked relative to other institutions, on many different kinds of criteria, including research productivity (mostly, peer-reviewed publications) of the faculty, grants obtained by the faculty and postdoctoral fellows, professional prizes, and placement of graduate students in tenure-track jobs. Each part of this process is overseen by accreditation bodies, most of which were established in Eliot’s day as part of the professionalization of higher education.
One criterion underlies all of the others: selectivity. And, for most institutions, selectivity is based on grades and test scores of individual students. To get into the best schools, a student has to do best on tests. To be considered a top school, colleges select the students who test best.
The circularity of the current regime of assessment promotes specialization. Scholarly experts decide which fields and subfields should be tested. Test-making experts design tests on those subjects. K–12 teachers base their instruction on high-stakes, end-of-grade standardized tests that feed into national testing standards like the SATs. Admissions officers select the students who perform best on those tests. It’s selectivity all the way down, in other words, and a selectivity that makes innovation, in a very literal as well as metaphoric sense, extracurricular: if it is not on the big standardized tests, it is not likely to be a core curricular requirement.
If your institutional reputation is based, directly and indirectly, on how well students do on summative tests, then it is important to shape what you teach and how you teach to meet the parameters of the test in order that your students achieve the highest scores possible. Yet we know this results in an impoverished and ineffective form of learning. If you want students to retain knowledge in an applicable, useful form that serves them beyond the test, then teaching them simply so that they can ace a standardized test is the worst way for them to learn. Indeed, teaching to the test hones the educational process to reductionist perfection.
Fortunately, there are institution-wide alternatives. Consider Hampshire College. In 2014, Hampshire did the seemingly unthinkable: it did away with grades and standardized tests altogether. Faculty and administrators decided that, instead, they wanted to select students based on a variety of holistic factors that would promote an atmosphere in which everyone could learn, innovate, and thrive together.
The decision was historic. Hampshire College is the first institution of higher education to refuse to accept SAT or ACT scores. It didn’t just make them optional. It banned them. It won’t look at them. It doesn’t record them. The test scores are not in any way part of the admissions process.
Hampshire College is located in Amherst, Massachusetts, in an education-rich corner of New England, near Amherst College, Smith College, Mount Holyoke, and the public University of Massachusetts, Amherst. It was founded in 1970 as one of the “five colleges” and, from the start, it was an experimental alternative to the other schools. Hampshire faculty don’t grade students in their courses but instead give them long narrative evaluations. Students there graduate without GPAs, yet employers, graduate schools, and professional schools still eagerly pursue them.
Hampshire often sets trends. Opting out of SAT/ACT for admissions has consequences, though. As Hampshire president Jonathan Lash wrote in a public statement a month after Hampshire banned SAT and ACT scores from prospective students’ applications, “You won’t find our college in the US News & World Report ‘Best Colleges’ rankings released this month.… That got us kicked off the rankings, disqualified us, per US News rankings criteria. That’s OK with us.”
What happened when Hampshire jumped off the rankings treadmill? An increase in “yield” (the percentage of admitted students who accept the offer to attend Hampshire) from 18 to 26 percent was the first, eye-popping result. That surprised everyone. President Lash believes Hampshire is more effectively reaching the very special kind of student it wants by making it harder to apply, requiring more essays, and refusing to substitute one reproducible standardized test score for actual quality. So, applications have declined, quality has increased, and so has yield—making the application process itself less expensive to administer.
Not only that, but diversity (as measured by standard educational reporting rubrics) has increased by 21 percent, as has the percentage of first-generation students and low-income students. Hampshire educators theorize that, because SAT scores correlate with expensive test preparation (whether in affluent school districts or in Kaplan-like after-school programs), when students represent themselves without benefit of SAT scores, other highly desirable talents, skills, and accomplishments can emerge, ones rooted not in affluence but in originality and what’s been termed “grit.” Hampshire’s admissions officers are also convinced that the new system is a better match for the kind of independent, self-motivated student the college seeks. When Hampshire surveyed its past and present students, before eliminating standardized entrance exams, it discovered that not one of them had considered the US News & World Report rankings before deciding to attend.
A couple of years after the decision, President Lash and the faculty and students at Hampshire remain pleased with the results. A survey of recent applicants shows that they believe Hampshire has the most humane and interesting application system in the country. (Obviously there is some self-selection at work.) It is designed so students reflect on their purpose in going to college in the process of applying. It becomes a learning experience in itself. President Lash insists that Hampshire remains “deeply committed” to its strategy and often hears from students and their families who admire its stand on SATs. He is certain that they have better applicants who are “more committed to Hampshire because of what it uniquely offers.”
And Hampshire is certainly committed to its students. With only fourteen hundred undergraduates, it is nonetheless in the top 1 percent of all colleges for placing its graduates in doctoral programs. More than 50 percent of its graduates go on to earn at least one graduate degree.
We should all hope Hampshire’s bold experiment isn’t just a one-off but a trend or, better, the beginning of a new and better way. At the very least, the school has lived up to its motto: “To Know Is Not Enough.”
HOWEVER BOLD ITS INNOVATIONS, IT’S NOT SURPRISING THAT A small, young, private, alternative institution would be leading change in higher education. It’s simply easier for a school like Hampshire to implement large-scale change, in part because everything it does is inevitably on a small scale. But what if the tyranny of grades and standardization could be resisted at even the largest public schools? That is the question being asked at the Meadowlark Retirement Community in Manhattan, Kansas.
Today happens to be Taco Day. “Your party will be down as soon as class is over,” the receptionist tells me. She points to a large round table with crisp white table linens at the opposite end of the dining room, beyond the buffet line. “The students usually sit with the other residents, but today they reserved a table for your visit.”
A number of people in the Meadowlark dining room wear purple. Pretty much everyone in Manhattan does. It’s the school color of the Kansas State Wildcats. The university dominates the town, and the residents embrace it fully. Almost all of them attended K-State themselves or are related to someone who did. Even residents who went to rival institutions express admiration. Social, intellectual, and cultural life—everything down to the traffic patterns in this town—is shaped by what’s happening at the university.
I’m here to interview students in a class called “The Anthropology of Aging: Digital Anthropology.” I’m looking for alternatives to massive lectures that function as failure mills, to grading-obsessed departments, to institutions so determined to cut costs that they’ve forgotten their mission of preparing the next generation to be responsible, independent-minded, dedicated, and wise inheritors of a complex and sometimes baffling future.
That’s a big job. It cannot be left to small, dedicated institutions like Hampshire College—founded in 1970 on principles of student-centered learning—to shoulder the entire future.
That’s why I visited K-State, a very different institution from Hampshire—and from a distinguished, highly selective massive public institution like the University of California, Berkeley. K-State is effectively open admissions, accepting 97 percent of those who apply. There are more than twenty-one thousand undergraduates, mostly in-state, and the tuition is under $10,000 a year for these students, low compared to many state universities, but higher than it’s ever been. Manhattan, Kansas, has a village feel; an exuberant, almost aggressive friendliness obtains. The graduation rate at K-State is about the same as at most CUNY colleges, around 25 percent in four years. That may seem dismal until you realize that the typical student at K-State, as at many other large public universities, holds down at least one full-time or several part-time jobs while attending.
“The Anthropology of Aging: Digital Anthropology” is taught by Michael Wesch, one of the most famous profs at K-State, or anywhere, for that matter. He was the 2008 Professor of the Year, an award given by the Carnegie Foundation for the Advancement of Teaching. It’s the Pulitzer Prize of academe, awarded to inspiring undergraduate teachers. Wesch is known for his courses as well as for influential YouTube videos about higher education that have been viewed well over 10 million times.
My personal favorite, “A Vision of Students Today,” has had more than 5 million views. The video begins with a grainy, slightly sinister shot taken from the entrance to an empty lecture hall, all noirish black and white. A quote from Marshall McLuhan appears: “Today’s child is bewildered when he enters the nineteenth century environment that still characterizes the educational establishment where information is scarce but ordered and structured by fragmented, classified patterns, subjects, and schedules.”
This is the university we’ve inherited from Charles Eliot’s time. McLuhan knew what a poor fit it was for his world—in 1967. Obviously, much has changed since then. Except in formal education. Youth are still being graded into passivity and a state of fear by standardized classes that deliver standardized answers that can yield good results on standardized exams that have only marginal applicability to their lives beyond school.
In an age when even our toasters collect our data, in formal education, of all places, we still gather data on student learning and achievement as if it were 1914. Students today are lucky their skulls aren’t measured with calipers, the bumps on their heads counted and diagnosed. Someday our current, standardized ways of assessing learning will be relegated to the dust heap of pseudoscience history, just like phrenology.
The two hundred students in Prof Mike Wesch’s lecture course didn’t just view “A Vision of Students Today.” They made it. Instead of taking exams and writing term papers, they worked together on this one semester-long collaborative class project. In this video, the camera pans to a traditional, lifeless lecture hall. Now there are students in the seats, facing forward, looking toward the blackboard on which is written a question: “If these walls could talk, what would they say?” Silently, the students supply the answers, each one holding up a piece of notebook paper on which is scrawled a response. Their faces are grim. “18% of my teachers know my name,” one piece of paper says. “I complete 49% of the readings assigned,” says another. “I buy hundreds of dollars of textbooks I never read.” “My neighbor paid for class but never comes.” These bleak statements together offer a rough approximation of the state of higher education today. Far more encouraging is the backstory of how this video was made. Prof Wesch challenged his students to work collectively, to turn the large lecture class into a video production company. They pursued research in many different forms, finding answers in books and articles, archives, through data analysis, and through social science survey methods and ethnographic interviews. They used digital tools to host and share their data sets, and they learned how to collectively write a script, shoot a video, edit, and then release and advertise it to the public.
What kind of class is this? Anthropology? Business? Data analysis? Research methods? A filmmaking class? A digital literacy course? All of the above. It takes all of these skills to learn how to work together, to manage a project from an idea through implementation and share research in a way that makes a compelling story, to produce it, edit it, publish it, and get it out there in the world where it can have an impact, a palpable impact. A note held up by one of the students informs us that the students made 367 edits to the evolving script of “A Vision of Students Today.”
What jobs will these students have when they graduate? They don’t know. But the experience prepares them for whatever they do in ways far more significant and lasting than what they might learn in a more traditional lecture course.
One of Wesch’s students, Jordan Thomas, finished his course, took a gap year from K-State, and rode his bicycle from Kansas to Colombia. Most people he told about his plan to tour South America alone, on a bike and by hitchhiking, thought he was crazy.
“I’m not crazy. I just have questions,” he likes to say. When he returned home, he coproduced a video about his experience, “To Live in This World,” that was shown at an international film festival in Paris. He lived for a time in Taos, New Mexico, studying traditional agricultural practices in the Taos Pueblo Native American community. Then he came back to K-State to finish his degree and applied for a Marshall Scholarship to study alternative food systems and sustainability at Oxford. He won this extremely selective and internationally renowned scholarship in 2015. The gap year paid off in the form of a highly coveted scholarship and in a range of skills that will take Jordan Thomas far.
Jordan is Mike Wesch’s student. But, as Wesch tells me, beaming with pride: “Jordan’s my teacher.” Student as teacher, teacher as student, in the famous terminology of Paulo Freire, the fountainhead of student-centered learning.
I fill my plate at the taco bar and wait at the dining table for the students taking Wesch’s “Anthropology of Aging” class.
This year, Wesch’s students plan to make a video game together. But unlike most games, this one has a serious purpose: to teach people who play it about end-of-life decision making. It is an ambitious class, given that developing and engineering a video game is difficult and only a few of these students are computer science majors. But that’s by no means the greatest test these students have to pass. To enroll in this course, they must commit to doing something highly unusual. They have to move out of their dorm rooms and live for a semester at Meadowlark Retirement Community.
Professional anthropologists would call this participant observation. The students have apartments among those of the other residents. They have moved out of their comfortable dorms where the Purple Pride football and basketball talk never stops and into a place where cribbage and NPR are social currency. They are giving up their student lives to study aging, living day in and day out among senior citizens negotiating life with dignity and independence in the face of pain, loss, tragedy, sickness, fear, death.
By the time I arrive midway through the semester, the students have made close friendships with the other Meadowlark residents. They are also totally absorbed in their research. They are conducting ethnographic interviews with the residents to learn more about life as a senior citizen. They are also taking classes and pursuing independent research in a number of areas that, together, constitute their “general education” across the curriculum at K-State: gerontology, pathology, psychology, neuroscience, public policy, law, demographics, neuropharmacology, and the business of modern American health care. “The Anthropology of Aging: Digital Anthropology” is about the furthest thing imaginable from Eliot’s vision of specialized, disciplinary, professionalized higher education.
Meadowlark Retirement Community is the right institution to teach them. It is as radical in its world as Wesch’s courses are in higher education. Annie Peace, director of Health Services at Meadowlark, tells me that Meadowlark is part of the “Household” movement in senior citizen living. The movement champions elder-centered living that focuses on autonomy, on independence, and on every resident being able to live as fully as possible within the limits of their abilities and desires. Its approach is not so different, in a broad sense, from student-centered learning. She likes having the students in residence and feels Wesch and his students are kindred spirits. “We’re fighting the same battle,” she tells me, “just at different points along the life cycle.”
It turns out that the typical retirement community, like the modern university, has nineteenth-century origins. Retirement communities derive from asylums for older adults and those who were destitute, mentally infirm, or dying—“shut-ins,” they were often called. Just as compulsory public education evolved out of the factory workhouses, retirement homes evolved out of the old poorhouses, which were home to those (mostly women) who had become wards of the state once they were no longer “productive” and lacked offspring to care for them. As with most institutions, it’s hard to shake those dismal origins. “We’ve got our work cut out for us. We’re trying to change an institutional design that’s been in place for a long time,” Peace says.
At the end of the course, Wesch will file an official grade for each student, but their most meaningful assessment is a letter he writes to the entire class, reflecting on each student’s contribution—to one another, to their professor, to their school, to Meadowlark, and to the world beyond. No grade or multiple-choice exam could begin to cover all that these students will learn this semester. He knows they will have spent more hours on this course than probably any other of their college careers, and will have read deeply in new subject areas, researched topics afield from their particular major, and learned skills they never considered before (from statistics to programming).
At lunch, Chuck, one of the students, tells me about how Annie Peace sometimes invites them to weigh in on the problems she faces. He relates a story that starts with the state passing a new regulation requiring a heavy fire door between the residents' apartments and the shared entertainment areas at Meadowlark, where there's a theater, game rooms, a library, and a tavern that resembles the set of Cheers. The problem arose because two men, both in wheelchairs, loved to come down to the bar after games to have a drink and talk sports together. With the new fire doors, they'd be forced to call a night nurse or watchman to let them into the bar. They were insulted. They wanted the obstacle removed. But the new safety regulation wouldn't allow it.
Peace invited the students to share in devising a solution. They read the state regulations carefully. They looked at the pathway from the apartments to the tavern. They talked to the two men involved and learned about their sense of indignity and hurt pride from the new rules as they examined the construction of the men’s wheelchairs. And then they met with the resident engineers and contributed to the design of a system in which the wheelchairs could be fitted with remotes, the heavy fire doors with sensors. The men could have their autonomy, the state could maintain its safety standards, and Meadowlark could keep its accreditation.
“Boss fight,” Chuck smiles. “We’re putting it in our video game—it’s the kind of problem that we want players to confront and solve.” Because I know a little about game design and have sat in on design sprints with professional game developers, he tells me about the interface they are planning. What Chuck is aspiring to is complicated even for a pro. Their game will have to include multiple and malleable avatars with evolving role-playing scripts. Someone might play as a resident or a family member or a prospective resident or a doctor, nurse, administrator—or maybe a state regulator. That regulator might suffer a stroke and end up a resident at Meadowlark. An elderly resident could lose a younger family member to cancer. As a player, you would learn empathy for others but also experience vulnerability for your own avatar. Like life. Unpredictable. And very difficult to script and program.
After lunch, we go to the apartment of Robert, another of the students in "Anthropology of Aging." The rest of the class is there. Robert is one of three students who pulled an all-nighter to draft the latest iteration of the storyboards for their game. The students are chattering about the progress they've made. It's clear some are already experienced with programming and markup languages, while others are struggling to master HTML5 or are working hard on their digital animation skills using a suite of software tools. Despite their varying skill levels, all are helping one another even as they talk about gerontology research and report on some story a resident just told that would be perfect to build into the narrative of the game.
"So here's the thing," Robert says, "no matter what your condition, you still have to make life-and-death decisions. Even if you are losing your short-term memory, you have to understand rules and medicines and insurance forms on top of that. Last night, we argued a lot about whether people should get extra points if their avatar has cognitive limitations, making life decisions without the capacity to weigh them clearly. We thought they should, then wondered if that was fair. What if someone is losing abilities but was a lawyer before they retired? How do you balance those things?"
“We plan to put everything difficult that we’ve seen here into our video game,” Karl, who also worked through the night, says. “We want to put players into situations that require ingenuity and empathy. You need both to win the Game of Life.”
At least two of the students in Wesch's class hope to work at professional video game companies when they graduate. They both have part-time jobs as coders and speak computerese like a second language: HTML5, JavaScript, C++, BrowserQuest, Unreal Engine 4. One of the computer science students wants to be sure that, if you play the game, your own data can be kept secure and private. He's thinking through how they will need to customize the software to ensure this. Others in the class hope to enter health professions, and they want to make sure everything in the game is data-driven, based on actual, sound research. Two students tell me they aren't sure yet what they want to do after graduation, but they are confident that, because of this class, they'll be ready for whatever they decide to pursue.
Professor Wesch, in the corner, has a smile at the edges of his lips. His students huddle over the coffee table, drawing pathways, crossroads, vectors, all talking at once, working out the game play, as he holds back, lets them talk it all through together. It is in his restraint, his respect for his students' autonomy, that one sees the power of his pedagogy. Like Alexander Coward and other great profs, he teaches by inspiring confidence in his students. He has set the conditions for an unparalleled educational experience here at Meadowlark, where these students are learning more than they ever could from a seminar or a lecture class in which he performed the role of the professor in all the conventional, authoritative ways. They will draw on the knowledge and experience they gain in this course for the rest of their lives.
It is ironic—tragic, even—that Wesch's impressive course is offered here in Kansas, where the assault on public education is relentless. State funding per college student is now 21 percent below what it was in 2008. Students at Kansas State pay $1,500 a year more in tuition than their older brothers and sisters who graduated from K-State in 2008. Faculty face new financial burdens and political pressures. In 2016, the governor and legislators aggressively cut corporate taxes and personal income taxes for the wealthy, resulting in a massive budget shortfall. To make up the difference, another $19 million is coming out of the higher education allocation in 2017. This means more cutbacks to services and to faculty. It is positively heroic for a university and its faculty and students to innovate against such odds.
It’s not easy to remake higher education these days. In the cases of Alexander Coward, Hampshire College, and “The Anthropology of Aging,” we see different ways to work around the inherited structures of the university. In each instance, the mission is active, student-centered learning. Whether the obstacle is assessment methods, grading systems, external ranking systems, disciplinary division, the physical space of the classroom, or state-level cutbacks, Coward, Hampshire, and Wesch have moved away from the standardization and rigid professionalism designed for the needs of a society that no longer exists. Not everyone can succeed against the weight of a 150-year tradition. In Coward’s case, the obstacles were simply too great.
Several weeks after I leave Manhattan, Kansas, Mike Wesch sends me a copy of the letter he has written to the class. The students have given him permission to share it with me. It's twelve single-spaced pages and recognizes each student, singling out something unique that he or she contributed. Wesch praises one student for a singular ability to inspire collaboration. Another keeps the group going with his generous sense of humor; another has discovered a talent she didn't know she had: she began creating a visual diary to capture the intense dialogues in class, and everyone learned from these "mind maps." Professor Wesch also offers feedback on areas each student might work on.
“Somehow like any great group, we are more than the sum of our parts,” Wesch writes toward the end of his evaluation, in the section addressed to the class as a whole. “We needed each and every one of you. Working alone, we would have come away with individual ‘assignments’—videos, photo essays, maybe a simple game—most of them forgettable. Together, we created something worth remembering.”
What these students learned at Meadowlark is far removed from the nineteenth-century university. This class won’t help K-State top the rankings in US News & World Report, but it’s not rankings or GPAs or even majors and minors that truly matter in the end. Professor Wesch has provided a platform for learning. It’s been a “boss fight,” as the gamers would say. The students have grown and they have explored, challenged one another, sometimes failed, and then succeeded, memorably, all of them, together.