The use of standardized exams for finals at UT is accepted as normal practice. But do these high-stakes exams, which can make or break a student, serve as proper indicators of a student's knowledge and effort in class? Dr. David Laude, senior vice provost of UT and interim dean of the College of Natural Sciences, gives his insight.
LL: Do you believe standardized tests are an effective way of measuring student preparedness?
Dr. David Laude: My biggest problem with testing is the culture we have created in which the reward structure in classrooms and beyond is increasingly defined by the score you get on a test. This isn't surprising given that college admission and other awards are so heavily weighted toward good test takers while winnowing out students who aren't necessarily interested in or capable of achieving perfection on tests. What becomes of the failed applicant who would rather tinker on a biology project in the garage than study for a biology test?
LL: What impact does testing have on students?
DL: I think it's damaging to our ideals about what learning should be. It's certainly not ideal that freshmen are being hammered with endless exams, papers and labs. I’d rather think about the students graduating from this university who have found that intellectual thing they love passionately — the tests they pass that really matter are when they earn the respect of a faculty member during an amiable conversation about a subject they both love.
LL: In some classes, failing an exam can mean failing the class, even with a passing grade throughout the year. Does having such high-stakes exams serve to help or hurt one's education?
DL: I find the temporal frame for assessment to be a very antiquated notion and long for the day it disappears. Granted, we operate within an academic calendar, but why is it that if a student understands a concept on Wednesday but took the test on Tuesday night, they have failed? I also favor allowing students in my classes to take the final and have it count for everything in determining their grades. So what if they couldn't calculate the de Broglie wavelength three weeks after school started, if by the time the final rolls around they understand the material well enough to earn an A?
LL: Do you draw a distinction between multiple-choice final exams and essay-based exams?
DL: I don't think free-response and open-ended tests are inherently any better than multiple-choice tests in assessing knowledge. One thing is true, though: it's a lot easier to write an open-ended exam than a good multiple-choice question, where the heavy lifting is in the design of the questions and possible answers.
LL: In your opinion, what's the best approach to learning assessment?
DL: The best “money is no object” way to test is independent experiential learning like you see in programs like UTeach or the Freshman Research Initiative. These environments simulate the way adults learn through life experience — in my opinion, there is nothing better.
LL: What’s your take on math professors using fixed test banks like Quest to provide multiple-choice exams to students?
DL: Students bemoan the fact that they made a stupid mistake on a multiple-choice question and didn't get any credit, but I have yet to have someone tell me after the test that they didn't deserve credit for a question they guessed on and got right. I think, in the end, these two imperfect aspects of multiple-choice testing tend to balance out, and the test results correlate well with student knowledge at the time the test was taken. What they say about student knowledge a week later is another issue altogether.