Universities deciding whether to hire, keep, or promote faculty use a mix of criteria, one of which is teaching. Teaching quality, in my experience, is judged mainly by student evaluations, supplemented to some extent by the views of faculty members who have sat in on a class to observe it.
Could we do better? In particular, could law schools--I currently teach at one--do better? Law schools have, from this standpoint, one significant advantage: The state bar exam, which most of their graduates will take, provides an external measurement of how successful their teaching has been. A second advantage is that, in the first year, all law students take pretty much the same courses and, where different sections of a large course, such as Contracts or Property, are taught by different professors, allocation of students is pretty nearly random.
This suggests a possible solution to the problem. Analyze bar passage rates to see if students who took Property from Professor X did, on average, better or worse than those who took it from Professor Y. If there is a significant difference, take that as evidence that one of the professors was a better teacher than the other.
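The comparison described above can be sketched as a simple two-proportion z-test. The numbers below (pass counts and section sizes for Professors X and Y) are made up purely for illustration:

```python
from math import sqrt

def two_proportion_z(passed_a, n_a, passed_b, n_b):
    """z-statistic for the difference between two bar passage rates.

    Under the null hypothesis that both sections have the same true
    pass rate, |z| > 1.96 is significant at the 5% level.
    """
    p_a = passed_a / n_a
    p_b = passed_b / n_b
    # Pooled pass rate across both sections, used for the standard error.
    p_pool = (passed_a + passed_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical sections: 80 of X's 100 students pass, 65 of Y's 100.
z = two_proportion_z(80, 100, 65, 100)
print(round(z, 2))  # → 2.38, significant at the 5% level
```

Note that with realistic section sizes the difference would usually have to be fairly large to reach significance, which is consistent with the point below that the signal here is weak.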
There are some important limitations to this approach. Who taught a particular course in the first year is probably only a small factor in whether, three years later, the student did or didn’t pass the bar. Hence the evidence produced, even if real, is going to be very weak. It could be improved if it were possible to get bar results in a more detailed form--not just overall scores but scores on each question. One could then look for the effect of the property professor on questions that depended mostly on understanding property law, of the contracts professor on questions that depended mostly on understanding contract law.
A further limitation is that learning to pass the bar is not the only objective of law school. Professor Y, whose students do a little worse on the bar, might argue that he is spending less time than Professor X on material relevant to that exam, more time on material that will be important in the student’s future law practice. “Teaching to the test” is not, after all, an unambiguously good thing—although it becomes more defensible when the particular test is one the student has to pass if he is ever going to use what he has learned to practice law in the state he lives in.
How can this approach be generalized beyond the special case of the law school and the bar exam? Consider students who have taken the first course in a subject from a variety of different teachers but have taken a more advanced course together. Their final grades in the latter course will provide some evidence of how good their preparation was, which in turn provides some evidence of how good the first course was.
One problem with this approach is that students may not have been assigned to the first course at random. Perhaps there was some reason why, on average, Professor X started with better students than Professor Y. A second and more subtle problem is that how Professor X's students do in the second course depends in part on which of them take it. Perhaps Professor X presents the material as very difficult, scaring out of the field all but the best students--with the result that, by the time we get to the second class, we are comparing X's three best students with Y's thirty best. To try to control for such problems, it would be worth both including in our analysis other information on the students, such as their SAT scores (LSAT in the law school context), and looking at how many students from each of the initial courses went on to take more advanced courses in the subject.
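One crude way to apply the control suggested above is stratification: compare professors only within bands of entering test scores, so that X's strong students are measured against Y's strong students. A minimal sketch, with an entirely hypothetical record format of (professor, LSAT band, passed bar):

```python
from collections import defaultdict

# Hypothetical student records: (professor, lsat_band, passed_bar).
students = [
    ("X", "high", True), ("X", "high", True), ("X", "low", False),
    ("Y", "high", True), ("Y", "low", False), ("Y", "low", True),
]

def pass_rate_by_band(records):
    """Bar passage rate for each (professor, LSAT band) cell.

    Comparing X to Y within the same band removes the part of the
    difference that is due to starting with stronger students.
    """
    counts = defaultdict(lambda: [0, 0])  # cell -> [passed, total]
    for prof, band, passed in records:
        cell = counts[(prof, band)]
        cell[0] += passed
        cell[1] += 1
    return {cell: passed / total for cell, (passed, total) in counts.items()}
```

A fuller analysis would use a regression with the score as a continuous control, but the stratified comparison makes the logic visible; it also makes the selection problem concrete, since an empty or tiny cell (X's weak students who went on) shows up immediately.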
One problem with all of these approaches is that, if they are known to be in place and to have a substantial effect on hiring and promotion decisions, faculty members can be expected to try to game the system. If bar passage rate is used to measure success--not because it is all that matters but because it is the only relevant external data we have--professors have an incentive to teach to the bar exam, which may or may not be a good thing. If grades in more advanced courses are used, professors have an incentive to focus their teaching on only the better students and to try to encourage their best students into the field and their worst students out of it. Readers interested in an entertaining and intelligent discussion of the problem will find it in the first chapter of George Stigler's The Intellectual and the Marketplace, which describes the efforts of a (fictional) South American university reformer.
10 comments:
I have fantasies of becoming an academic dean one day so I'm very interested in the issue of evaluating teacher performance.
One thing that I've noticed around here (med school and grad school) is that many students don't even bother showing up to class. They simply study from textbooks, review books, syllabi, and class notes. As far as I can tell, these students who essentially "teach themselves" do perfectly fine on the exams.
I was often told (in undergrad) that statistically students who show up to lectures tend to do much better than students who don't--I have a feeling this has a lot to do with self-selection; showing up to lectures correlates with the desire to learn.
In graduate level education, I don't think that's true anymore...but of course that's only based on anecdotal evidence.
Is there a way to effectively measure teacher performance while controlling for students who basically "teach themselves"? I just don't know how we can measure this. Ideas, please.
Gaping flaw in the plan here -- State Bar exams have little to do with what law professors can and should be teaching in law school. This would be apparent to anyone who had taken a bar exam. These exams are really just a tool for the lowest tier of legal service providers to keep prices above what they would be if paralegals were permitted to provide basic legal services.
The idea is good in theory -- but state bar exams as a metric of what should be learned about law are a terrible idea in practice.
This might be an interesting easy thing to do.
But if you are going to look at using data to improve the educational process, then you should develop a full engineering process around it, one with many scales of time, almost fractal in nature. Start with questions like "have the students understood the last 60 seconds of class?" and smoothly zoom out to "how much have they retained six months after the class?"
You can scale out even further to the bar exam and other metrics. But those will be the least useful for correcting the problem.
While it's focused on the opposite end of the educational spectrum from law school, check out:
http://www.zigsite.com/
Maybe you could try paying bonuses to teachers based on the average wages of their students five years after graduation. It's better than a simple measuring device producing binary incentives.
Average wages would need to be controlled for self-selection between academia, public and private practice as well as employment in quasi-unrelated fields.
The best idea I have come up with is to get professors involved in each other's classes somehow. Then they can see first hand how the others are doing. At the high school and lower levels, I would add parents to the mix, because they usually care more than the principals.
What "involvement" means is difficult to say. Some ideas would be to: (a) read each other's message boards; (b) sit in on each other's classes, as you mentioned; or (c) team teach.
Looking at external exams is a good thing to do, too, but mainly as a sanity check as you and others have written. Usually a class aims higher than passing the exams, but you want to make sure the students aren't massively flunking out. Along these lines, it would seem nice to know what student incomes look like....
-Lex Spoon
Why don't law schools offer a bar prep course during the third year?
Arthur b.'s idea about taking a longer time perspective in evaluating teacher performance seems good. However, instead of inspecting graduates' wages, a simple questionnaire about their actual satisfaction with their lives and how they see their former teachers as having affected it seems much more appropriate to me, at least when the school's intention is to produce content persons instead of money-making machines. Also, five years is maybe a bit short-term; let it be ten years instead.
Say,
1. How do you feel in your profession now?
2. Which courses do you think were useful to your current professional life? Which teachers did what to contribute to that?
3. Which courses do you think were harmful to your current career? What was wrong with them?
Even thinking about these questions may be exceptionally useful, not for the school, but for you, here and now.
My ideal proposal for teacher compensation / evaluation at the college level:
Escrow a portion of the tuition paid. The students are then charged with disbursing the escrow in equal amounts annually over a ten-year period. For students who wish to make no commentary on their professor(s), a predefined formula can be used. A student could choose to allocate to a professor, to the department, or just to the college. Incoming students and deans would be able to review what past students thought of a particular professor based on how those students allocated their escrow.
One could be cynical here and state that most presidents and deans may be more interested in raising funds than they are in effective teaching. Using questionnaires to judge "student satisfaction with teaching" produces satisfied students, which produces satisfied alumni, which produces bigger alumni donations.