Teaching and research: horse and carriage, or oil and water ?

Do research and teaching go together ? Do accomplished researchers generally also make (more) effective classroom instructors at the post-secondary level ? Do the research experience, and the first-hand, in-depth knowledge of a subject that one acquires over years, even decades, of investigation (e.g., in the laboratory, or at the computer), have a noticeable effect on one’s ability to convey general, basic concepts to students, especially in a classroom setting ?

Personally, the longer I am in this profession, the more convinced I become that the two should go hand in hand. Why is that ?
Well, the way I would put it is the following: attempting to elucidate what is unknown has the effect of broadening, reinforcing and consolidating one’s comprehension of what is known. And ultimately, one’s ability to teach something directly reflects one’s comprehension of what is being taught. To put it differently, the inability of many professors to explain things clearly is oftentimes attributable to the simple fact that they themselves have not fully understood them. That is why they fumble on the oddball question (the one not answered in the textbook), cannot think of examples or test problems on their own and constantly resort to data banks, are unable to present things in a different way from the “canonical” one, and so on.
I cannot help feeling that a person who has carried out a research project in a specific area has explored aspects of a subject, and understood its connections with other areas, that are simply not illustrated (or emphasized) in textbooks. There is no question that the subject matters that I feel most comfortable explaining are the ones in which I have engaged as a researcher. Furthermore, it seems plausible that being active in research should impart the ability to relate even the most elementary topics to cutting-edge fields of research, in turn conceivably generating enthusiasm and interest among students.

Of course, the above is just my opinion, one possibly shared by a large fraction of my colleagues, but not one founded on actual data (there being, to my knowledge, no actual data upon which to found it).
In any case, just as in any human endeavour, no one should expect any rule to be “hard-and-fast”. We have all had, in college or even in graduate school, brilliant teachers, or at any rate instructors who we felt were very effective, who were not especially accomplished researchers. We have all heard it stated many times that “a great researcher does not always a great teacher make”. And there seems to be no reason why a person with a solid background in some general field of study, having gone through years of formal education, and embodying that elusive blend of above-average intellect, communicative skills, personal charm, charisma and what have you, may not succeed at explaining concepts — perhaps even more so than many others with superior investigative abilities.
But shouldn’t there be at least some connection between the two ? Isn’t the merging of, and interplay between, research and teaching one of the tenets of university education, not just in North America but to some degree virtually everywhere else in the world ?
Answering such a question in a way that may even be regarded as tentatively objective, never mind scientific, seems like a hopeless task, primarily because of the difficulty of arriving at a universally acceptable definition of what makes one a good “researcher” or (even worse) a good “teacher”.

It is all about ’em numbers…
There is no question, however, that these days people are eager to measure, or rather to assign a numerical value to, everything. University professors are evaluated on both their teaching and research effectiveness by means of procedures which are ultimately summed up in a single number. In the case of teaching effectiveness, that number (typically from 1 to 5) expresses the students’ average overall satisfaction with that teacher, and is the main outcome of student evaluation of instruction (SEI).
On the other hand, Hirsch’s h-index, also a number (in this case a non-negative integer), is rapidly becoming a popular tool to assess a researcher’s standing within the community, as captured by the person’s citation record.
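For readers who have not run into the definition: a researcher’s h-index is the largest number h such that h of his or her papers have each been cited at least h times. Below is a minimal sketch of that calculation (in Python), with made-up citation counts purely for illustration; in practice the number is simply read off WebOfScience.

    def h_index(citations):
        """Largest h such that at least h papers have h or more citations each."""
        ranked = sorted(citations, reverse=True)        # most-cited paper first
        h = 0
        for rank, cites in enumerate(ranked, start=1):  # rank = 1, 2, 3, ...
            if cites >= rank:
                h = rank                                # this paper still "counts"
            else:
                break
        return h

    # Illustrative example: five papers cited 25, 8, 5, 3 and 1 times give h = 3.
    print(h_index([25, 8, 5, 3, 1]))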

Let me state at this point what this post is not about: I am not going to discuss here the merit, validity or reliability of either number. Make no mistake, I do have an opinion on that (how could I not), but that is not the point of this post. The fact is, both numbers are used for the above-stated purposes, and while they surely do not and ought not to tell the whole story, they are nonetheless taken seriously as reliable broad indicators. Based on this premise, it therefore appears possible to make a first attempt at providing a quantitative, objective assessment of the relationship between teaching and research, by studying how these two numbers vary together, if at all.
This is what this post is all about: are researchers with a high h-index more likely to score better on SEI ? Is there any correlation between the two ? I have attempted to establish just that by carrying out a statistical analysis of the records of a sample of one hundred university professors. The rest of this post describes the methodology, results and conclusions of this (amateur) study. I am obviously very interested in any similar study conducted by “professionals”.

Methodology
One of the interesting things about these two numbers is that both are, in effect, publicly available. The h-index of a researcher can be obtained through WebOfScience.
Obtaining a professor’s SEI average is, in principle, not as straightforward, as many (most) universities do not publish that information (at least nowhere easily accessible). However, there exists a web site called RateMyProfessor (RMP) where students can freely and anonymously rate their instructors. Even though I am not sure how controlled and careful the collection of such opinions is [0], I think it is safe to expect that, for professors receiving a reasonably large number of ratings, the average ought to approach that of the SEI as formally administered at the person’s institution — that is certainly my personal observation, and I am aware of at least one study that claims to have reached that conclusion.

Sample
So, I went online and, armed with all my patience, collected data for one hundred university physics professors based in the US and Canada. I chose physics as a discipline because it is my own, which made it easier for me to resolve ambiguities in the assignment of the h-index in some cases (see below). I see no obvious reason why physics should be different from any other scholarly discipline in that respect. I selected professors to be included in the sample by picking the university first [1], going through the names of instructors in its physics department as listed on the RMP web site, and considering each and every one who had received at least twenty ratings, on the assumption that that number would be large enough for the average to be fairly robust.
I included in my sample individuals who are currently listed as members of the faculty (assistant, associate or full professors) on the web site of the physics department of their institution, i.e., I did not include lecturers, retired or adjunct faculty. For each one of these individuals I recorded the “overall quality” average (a number between 1 and 5), as well as the h-index, as retrieved from WebOfScience [2]. I stopped as soon as the size of my sample reached one hundred. In the end, it includes names from twenty-three different universities [3].
It seems a reasonably large and unbiased sample to me, but I am no statistician and therefore welcome any criticism.

Results
The sample averages are:
RMP student evaluation “overall quality” measure (RMPOQ): 3.2 with a standard deviation of 0.8.
Again, the RMPOQ measure is a number between 1 and 5, and the fact that its sample average falls so close to 3 seems to support the notion that the sample is large and unbiased. The sample median is 3.15.
h-index: 30 with a standard deviation of 15. Values of the h-index are scattered fairly widely, the lowest being 4, the highest 66. I have also noticed for the first time something that others had mentioned to me before, namely that the h-index takes on different values, on average, in different sub-fields of physics. However, it is not clear to me that this should affect its correlation with teaching. The sample median is 29.6, very close to the average in this case too.

The above scatter plot shows all the data points in the sample, and basically tells the whole story, as far as this particular exercise goes. Let us go through it in detail. Each red dot represents a professor in the sample, with his/her value of RMPOQ (horizontal axis) and h-index (vertical axis). The two lines crossing in the central part of the plot represent the two average values for RMPOQ and h-index. These two lines divide the plot into four quadrants.
If the two variables considered here (RMPOQ and h-index) were somehow “correlated”, then one would expect red dots to gather in one or two of the four quadrants. If, for example, a professor with a higher-than-average h-index were also more likely to have a higher-than-average RMPOQ, then dots would cluster preferentially in the upper right quadrant, leaving the upper left one “emptier”.
The correlation could be even stronger, namely a professor with a lower-than-average h-index could also be more likely to have a lower-than-average RMPOQ, in which case the upper right and bottom left quadrants would feature most of the dots. Alternatively, if the two variables were “anti-correlated”, namely if a higher-than-average value of one meant a likelier lower-than-average value of the other (and/or vice versa), then dots would fall more often in the upper left and/or bottom right quadrants.
On the other hand, if the density of dots in the four quadrants is very nearly the same, one ought to conclude that the two variables are essentially disconnected from one another, i.e., that knowing the value of one gives no clue as to the value of the other.
This, in turn, carries the strong implication that whatever it is that one of the two variables measures has little or no relevance to whatever it is that the other one is believed to assess.

Now, it must be stated clearly that, when it comes to social phenomena, correlations are usually weak, i.e., one is often in the situation in which the density of dots in the four quadrants is only slightly different. The data in this sample, however, seem to indicate an exceedingly weak correlation. If we try and count how many red dots there are in every quadrant, we find very nearly a quarter thereof (i.e., approximately 25) in each one of them.
In other words, RMPOQ and h-index are ostensibly unrelated. Assuming that this is not due to some artifact, i.e., some hidden bias in my methodology, or other errors on my part (which is fine, this is just a blog and I am merely an amateur), one may wish to try and understand why the correlation is so weak [4].
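For the record, here is a minimal sketch (in Python, using numpy) of the quadrant tally described above. The arrays below are filled with randomly generated placeholder values that merely mimic the sample statistics quoted earlier; they are not the actual data, which would be substituted in their place.

    import numpy as np

    rng = np.random.default_rng(0)
    # Placeholder data with roughly the quoted sample statistics
    # (RMPOQ: mean 3.2, sd 0.8; h-index: mean 30, sd 15); substitute the real pairs here.
    rmpoq = rng.normal(3.2, 0.8, size=100).clip(1, 5)
    h = rng.normal(30, 15, size=100).clip(0)

    # Classify each professor relative to the two sample averages
    high_rmpoq = rmpoq > rmpoq.mean()
    high_h = h > h.mean()

    quadrants = {
        "upper right (both high)": int(np.sum(high_rmpoq & high_h)),
        "upper left (low RMPOQ, high h)": int(np.sum(~high_rmpoq & high_h)),
        "lower left (both low)": int(np.sum(~high_rmpoq & ~high_h)),
        "lower right (high RMPOQ, low h)": int(np.sum(high_rmpoq & ~high_h)),
    }
    print(quadrants)   # roughly 25 dots per quadrant signals no appreciable correlation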

Discussion
Obviously, the most direct conclusion is that the two variables are indeed independent, i.e., there is no correlation between one and the other [5]. This is not something to take lightly, however, it seems to me. If indeed the two quantities considered here furnish a cogent, relevant assessment of what they purport to measure, namely teaching and research effectiveness, then the conclusion would be inescapable that a good, experienced researcher is no more likely than a mediocre or inexperienced one to be an effective instructor. It would also mean that, if excellence in both research and teaching were a requirement for promotion and tenure in academia, one ought to expect that, in general, only about a quarter of all professors would pass the test (above-average performance on two independent criteria occurs with probability roughly 0.5 × 0.5 = 0.25)…

Now, I have often heard colleagues and people outside academia alike lament the fact that research is an impediment to quality teaching, as active researchers are not interested in teaching, largely see it as a hindrance to research, try to “get it done and over with”, often doing a sloppy job. Well, the above study does not support such a contention either, in that the h-index and the RMPOQ do not appear to anti-correlate any more significantly than they correlate. In other words, a good (or bad) researcher is just as likely as not to be a good (or bad) teacher. Perhaps the two activities hinge on different, possibly even antithetical qualities, abilities, character traits.

Anyway, if being a good researcher were ultimately established, or simply came to be accepted, as having no real bearing on one’s teaching performance, then, given the central role played by teaching in the mission of any institution of higher education (including research universities), this would constitute a powerful argument for swelling the university ranks of teaching scholars, lecturers, or in any case academics whose main charge is teaching, not research.
Presumably, such an increase would come at the expense of the institutional investment in research, chiefly research personnel (not only faculty but conceivably also postdoctoral associates, graduate students, research support staff, and in general anyone whose job description does not contain a strong teaching component).
One may even imagine separating the research and teaching careers, as some university systems in fact do. The far-reaching consequences that this would have on the research effort of a nation can hardly be overstated.

Conclusion
Are we really ready to concede the above, though ? That teaching and research have nothing to do with one another ? That research ability may perhaps have an impact on graduate education, which has a strong research “training” component, but hardly makes a difference in the classroom ? Is the American university system, one so heavily based on research, in fact almost built on the premise of the marriage between research and teaching, not the most successful in the world by any accepted measure ? Is it short-changing its own students by putting in front of them a class of instructors selected mainly on the basis of research potential, who may or may not have what it takes to deliver in the classroom ?
There is, of course, an alternative interpretation of the above data, namely that either one of the two quantities considered here, or possibly both, do not really measure what they aim (or claim) to measure. But that is for another post.

Notes
[0] For example, it is not at all clear to me how (or even if) the site prevents a single person from submitting several ratings for a given professor, or whether it is possible for someone to make up phoney information and rate an instructor by whom the rater has never actually been taught.

[1] I just picked names of institutions as I thought of them, in no particular order. I mostly tried to stick with research universities, but I also included one that is more oriented toward teaching. For quite a few physics departments (typically the top-rated ones) I could not find a single instructor on the RMP web site with my minimum required number of ratings. My non-scientific observation is that students at large state universities, especially those not prominently ranked, are more eager to rate their professors than students at Ivy League schools.

[2] That is not always straightforward or unambiguous. In some cases, a person’s name is so common that it generates a lot of entries (i.e., published articles), and the output of the “Create citation report” function is not always reliable. Even though criteria such as the field of research or the institution can be used in principle to identify only the entries exclusively attributable to the person in question, in practice sometimes I could not resolve the ambiguity to my satisfaction, and therefore did not include that name in my sample.

[3] I am obviously happy to share the data that I have collected with anyone interested in checking them out. However, I have no experience with research conducted on human subjects, and am thus unfamiliar with any legal restriction on disseminating data such as these, wherein people are referenced by first and last name. I must emphasize, however, that I have not used anything that is not publicly available. RMP data are, of course, visible to anyone with an internet connection, whereas h-index data require a subscription to WebOfScience. I believe, however, that obtaining the latter amounts to little more than a trip to a reasonably well-equipped university library.

[4] The computed sample value of Pearson’s correlation coefficient for the two variables is 0.05.
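For completeness, a sketch of how that coefficient is computed from its textbook definition (the covariance divided by the product of the standard deviations); applied to the actual (RMPOQ, h-index) pairs it returns the value quoted above. The toy arrays at the bottom are there only to make the snippet self-contained.

    import numpy as np

    def pearson_r(x, y):
        """Sample Pearson correlation coefficient of two equal-length sequences."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xd, yd = x - x.mean(), y - y.mean()
        return float((xd * yd).sum() / np.sqrt((xd ** 2).sum() * (yd ** 2).sum()))

    # Sanity check with toy data: perfectly correlated values give r = 1.
    print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))   # -> 1.0
    # np.corrcoef(x, y)[0, 1] gives the same number for the real arrays.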

[5] One might argue that the h-index of a professor in the sample considered here can be lower than the sample average for two reasons: one is a lower than average research effectiveness (as assessed by citations), the other is the younger age of the individual, who may be very talented in research but simply may not have had the time to establish a rich citation record. Thus, the h-index in my sample is not necessarily a measure of research talent and ability, as much as a combination of that and research experience.
Even so, however, one would expect a more experienced researcher to be a more effective teacher, it seems to me.


17 Responses to “Teaching and research: horse and carriage, or oil and water ?”

  1. Steven Says:

    Like you brought up in footnote 5, age seems to be a factor. Assuming that RMP and h-index are good measures, you would still somehow need to normalize the h-index for research experience. People can be naturals in the classroom and get good reviews right away, but having a high h-index requires that you are good at research, and have been doing it long enough to accumulate a significant number of citations. The other option would be to break your sample into groups by approximate career stage.

    • Massimo Says:

      Well, sure, but still, would you not agree that research experience accumulated over the course of many years should have some impact on classroom teaching ?

  2. mareserinitatis Says:

    Were you bored over the holidays? :-)

    In my experience, a really good teacher is one who is able to remember what he or she used to not know and be able to communicate to someone at that level. That is where I think the ‘researchers make poor teachers’ assumption comes from: someone who is too deep into their research may have more difficulty stepping back and examining the subject from the point of view accessible to a novice. I’m not saying it can’t be done, but I think it takes a serious effort to put one’s self in that position. It’s not something that happens naturally. On the other hand, I would think repeated exposure to students would help one hone that skill.

    Regardless, I think the biggest component is personality: someone who really wants to be a good teacher and is excited about his or her subject will naturally be a better teacher than someone who is uninterested.

  3. prodigal academic Says:

    Interesting post. I find that my RMP comments and scores track reasonably well with my student evaluation scores. In both cases, it seems to me that most of the responders are either very happy or very unhappy with the course.

    I would have thought that there was a stronger correlation between research ability and teaching ability (and maybe there is, if we had better numerical measurements for both). I agree with mareserinitatis that teaching ability tracks mostly with interest in doing a good job at teaching. Most people who can get a TT position can give a good talk (or they wouldn’t pass the interview stage). Putting that talent to work in a classroom is somewhat of a motivation thing.

  4. Nathan Says:

    I agree with everything mareserinitatis wrote. It’s mostly about the instructor’s attitude toward teaching.

  5. Schlupp Says:

    Nice work.

    Some remarks:
    1) As you mentioned, you didn’t control for a number of variables, so it might be that a subtle effect in either direction just doesn’t show up in the noise introduced by a mixture of fields, age cohorts and types of universities.

    2) Suppose we live in a simplified world, where tenure decisions are completely rational and – this is the bigger caveat [1] – are the determining factor in deciding how long any given professor stays in academia. Let’s also suppose that ability in teaching and research are completely UNcorrelated. For simplicity, let’s further assume that we can divide people into 50 % “good” and “bad” for each quality. In this case, your sample will tend to contain fewer of the 25% who are bad at both teaching and research, because they’d only get to stay for a few years, but would be kicked out at the tenure decision. Consequently, if teaching and research are uncorrelated, you might expect to measure a slight ANTIcorrelation [2].

    3) I do happen to think that the value of having a research active instructor is not among the things that SEI is particularly good at capturing, I am curious about your post on that matter.

    4) I think the argument for cutting down research activity is not so much getting better teaching, but simply getting more of it.

    [1] I remember your post on how most faculty attrition is NOT due to tenure denial, so the effect may not be noticeable after all.
    [2] If only the best quarter is kept, i.e., those who are good at both, you’d get a correlation. But given that most people do get tenure, the scenario where only the lowest quarter is weeded out seems somewhat more plausible.

    • Massimo Says:

      fields, age cohorts and types of universities

      I can send you the data if you wish. I do not think that any of those are relevant but, it is obviously possible. And because we do not live in a perfect world (way more than 25% get tenure), I doubt if you would ever see the effect you are referring to in 2).

      the value of having a research active instructor is not among the things that SEI is particularly good at capturing

      Oh, I am much more radical on this, I just happen to think that SEI (which are necessary for other reasons), are not a reliable measure of teaching effectiveness, defined as “students learn more with that teacher”.

      not so much getting better teaching, but simply getting more of it.

      Yeah but it’s a logical conclusion. You want more teaching and you need not look for qualified researchers because they won’t necessarily make better instructors — hence, you might as well hire less expensive, part-time lecturers. And it’s already being done, and I think it will be done much more if administrators start peddling the idea that research and teaching do not necessarily go together.

      • Schlupp Says:

        I know that tenure rates are higher than 25 %, this is why I wrote that 75% is more realistic than 25%.

        But I was mistaken there anyway: the argument in 2 is independent of tenure rates.

  6. GMP Says:

    I agree with mareserinitatis and prodigal. I think good teaching (as perceived by students) has to do with charisma, personability, and the interest to put in the time into teaching (providing of course that the instructor has the core competence) rather than research excellence. Experience and the ease in front of the class that comes with it are also extremely valuable, likely more so than technical brilliance or being at the forefront of the field (I assume we are all talking mostly about teaching undergrads).

    I get pretty high teaching evaluations but some people get even higher; those people are way nicer than I am (less grumpy, more warm to students and humans in general) and put in more time into interactive modules and alternative teaching techniques and whatnot. I try to do a solid job in the classroom and outside of it, but there is a level of interactivity and face time with students beyond which I won’t go because there are only so many hours in the day: my priority is research. Teaching at an R1 is a fairly thankless (although sometimes gratifying) activity where above a certain level you have to put in a ton of time to get a tiny bit more out.

    • Massimo Says:

      I agree with mareserinitatis and prodigal. I think good teaching (as perceived by students) has to do with charisma, personability, and the interest to put in the time into teaching (providing of course that the instructor has the core competence) rather than research excellence.

      Well, OK, so do research experience and ability bring nothing to teaching ? Because, I think you will agree with me that “competence” does not imply research experience.

      As for “the interest to put in the time into teaching” — well, I am sorry but I really think that that is nonsense.
      And yes, this hits a nerve with me. Here is the thing: I put a horrendous amount of time into teaching. I like teaching. I spend hours on my lecture plan and notes. I am eager to do well, and it bothers me when students do not seem to appreciate my work. My SEI averages, while not bad, are consistently below those of some people who, by their own admission, put way less time than me into teaching. And I do not think that I have a terrible personality — based on the comments students seem to like me as a person, in general. They also do not find me incompetent (again if we go by comments). OK, evidently some dislike my teaching style, which is fine, but never did a single one of them write a word of appreciation for the effort and the time that I ostensibly put into the course, which I honestly think is way over the norm, at least in terms of assisting students outside class.

      I was having this very conversation with one of my colleagues a few days ago, one of the most popular among students SEI-wise, who shook his head and chuckled when I told him that during this term I offered extended office hours, replied to every single e-mail from students asking questions (412 enrolled in a single course) no later than an hour after receiving it (and typically a few minutes, essentially at any time of the day), that I came to school a few weekends to meet with some who needed extra help. My research time this term has been non-existent. He told me flat out “You are a fool. Do you think students appreciate a professor’s availability ? They don’t !”.
      He is right. It has been consistently my observation too, they could not care less. And it’s easy to see, those who email and ask for help, or come and see me in my office, are a vanishingly small fraction of the population. The vast majority will never talk to me.
      I insist on this point: The notion that research comes at the expense of teaching is an untrue, unfair, pernicious cliche. And, it is not supported by my data either. There is no detectable anti-correlation.
      No, it is not about how much time or interest or enthusiasm one is putting into this.

      • GMP Says:

        Massimo, my experience actually mimics yours quite closely. In my first few years on tenure track I put a huge amount of effort into teaching and was available whenever the students needed me (plenty of extra office hours, answering emails in the middle of the night and weekends). I now really think that many students do indeed feel that a professor who bends over backwards to accommodate them is insecure or, as your colleague put it, “a fool”. My evals actually went up when my availability dropped. I think students do “fall” for a certain level of aloofness; I know people who are masters of balancing exactly how warm they need to be with how unavailable they need to be and the students LOVE them. I have spent a lot of time thinking why these most beloved teachers are that, and I have come to the conclusion that they have the same quality as some of the most charismatic politicians (Obama or Clinton) — not sure what it is, but it is some type of people skill. Most of the stellar teachers in my department are decidedly not the best scientists (i.e. they are not the most stellar of scientists) and vice versa.

        However, I agree with you that the breadth of knowledge in the field will likely inform one’s teaching and make them a better instructor overall, more likely to draw examples from disparate subfields and having a lot of experience to draw from and share excitement. Unfortunately, and quite cynically I am sure, I am not sure that the objective quality of teaching is the most important factor on the students radar/at evaluation time (even though they will likely be grateful for it later).
        But, as a few others have said above, I think to support this hypothesis (if quality => high RMP rating and high h index=> high teaching quality, then high h index => high RMP rating) with data you probably have to look at a slightly senior cohort, say those with 15+ or 20+ years since their PhD (if you are using Web of Science, 20+ years since the first publication).
        [My biggest problem, as I am sure is clear, is that I am skeptical/disillusioned about how strong the correlation between teaching quality and student satisfaction is.]

      • mareserinitatis Says:

        No, it is not about how much time or interest or enthusiasm one is putting into this.

        But time != interest. You can be interested in the students and the material and teaching in general, but that doesn’t mean you should sink a bunch of time into it. I was saying that someone who likes teaching and is able to demonstrate that interest in front of a classroom will naturally fare better than someone for whom teaching is a chore. That’s a given.

        I also said personality has something to do with it. I have observed that some people just seem to ‘click’ with the students, and their way of communicating with students puts them at ease. One can learn through observation and experience how to do this (and like any other skill, it takes time to develop), but some people do it naturally, and I think that gives them a slight advantage. Some of it is also relative. I think the further you get away from the sciences, math, and engineering, the more professors have that natural ability to interact in a way that students like. Students are always going to make comparisons, so the real question is how are you interacting with the students when compared to the people in your area who are very successful in teaching.

        Students are never going to see the time you put into things outside of class, so they will never appreciate that aspect of teaching, and it’s a bit unrealistic to expect them to. What they see in front of the classroom and how you interact with them is going to be 90% of their judgement criteria. So if you’re going to spend time trying to improve students assessment of yourself, a lot of it has to be how you present yourself and the material in the class.

      • Massimo Says:

        Students are never going to see the time you put into things outside of class, so they will never appreciate that aspect of teaching, and it’s a bit unrealistic to expect them to.

        Why ? You mean it makes no difference to them if you reply to their email right away or after two days or not at all ? If you tell them “sure, I can meet you over the weekend to go again over that subject” or you tell them “eh, let’s do that after the midterm” ? Should I stop doing all of that ?
        I can certainly start telling them “sorry I am busy now, come back next week”, believe me, I have plenty of other things to do but… I still think in this case I should just ignore student evaluations and keep doing what is right.

      • mareserinitatis Says:

        Should you stop doing it? It seems to me that you don’t think it’s worthwhile from the perspective of the students and their evaluations. Whether it is ‘right’ or not was not part of the prior discussion. As you said, the students don’t appreciate it so spending time on those sorts of things is not going to benefit all but a few students’ perceptions.

        I guess what I’m saying is that it’s easy to slide downhill very quickly by not doing the things you mention. Going the other direction seems to require a large amount of work for small changes, and I think you get the most bang for your buck by spending that time working on the skills used in front of the largest number of students.

        When I was teaching geology labs, I tried experimenting with how much to present to the classes and how much to interact with them. How much I presented to the class made very little difference. However, whether I sat back and watched them work, letting them come to me, or instead went around asking them if they had questions made a huge difference in evals. Simply because I sat near the front of the room, many students jumped to the conclusion that I didn’t like teaching, and my evals took a huge dive that semester. This is also despite the fact that I spent a lot more time outside of class helping students than in any other semester. All I did was make one change – how I approached students during class time – and that caused a much more dramatic change in evals than anything else I did. The students key into personality a LOT.

      • Massimo Says:

        Should you stop doing it? It seems to me that you don’t think it’s worthwhile from the perspective of the students and their evaluations.

        No, it is the other way around. It’s the evaluations that are not worthwhile.
        I should not and will not stop trying my best to help students, regardless of whether student evaluations reward me for doing it or not. To me, it is just another example of how dubious the feedback offered by students on teaching is, and how oftentimes evaluations should just be ignored.

      • prodigal academic Says:

        I think that some people do the bare minimum on teaching, and that is what I mean on not having the motivation to teach. I agree with you that good research ability should improve teaching, even at the intro level. I don’t think that SEIs measure teaching ability (and didn’t think so even as a student).
        In my experience this year, when I went that extra mile, students don’t appreciate you going that extra mile for them because they EXPECT it. They expect their professors to answer emails at all hours, so they don’t see it as extra effort. The fact that very few professors actually do this doesn’t seem to burst that expectation bubble. Because this is an expectation, they don’t consider it when they write their evaluations.
        Next year, I will continue to answer all emails within 12-24 hours, I will have set office hours, and I will have open office hours on the day of my exam (a big hit BTW), but I will not allow students to make so many appointments, I will not be available after every class, and I will not allow ANY drop-ins for any reason. These things killed me this semester without any appreciable impact on student learning OR my evaluations.
        I think students don’t know what makes a professor good, and they can’t evaluate whether the course was well taught during the course. Evaluations are absolutely a personality thing--wasn’t there a study showing that evaluations done after a few minutes were not significantly different from those done at the end of the semester (can’t find link)? If so, that suggests that students are not evaluating the teaching at all.
        And that doesn’t even go into the studies showing that women, visible minorities, and people with accents of any kind get lower scores on average across the board.

      • Massimo Says:

        Because this is an expectation, they don’t consider it when they write their evaluations.

        I agree, it seems to be that way. Makes no sense, because they thank me profusely when they find me available on the spot or when I reply to their e-mail… maybe these are the ones who do not care about filling out evaluations.

        I will have open office hours on the day of my exam (a big hit BTW)

        Did it too. Two people showed up.

        I will not allow students to make so many appointments, I will not be available after every class, and I will not allow ANY drop-ins for any reason. These things killed me this semester without any appreciable impact on student learning OR my evaluations.

        See, that is where I have a problem. I think I need to do all these things anyway, regardless of evaluations. I don’t want to tailor my teaching to evaluations, especially if there is a pretty good chance that evaluations do not get this part right (and I agree with you, they don’t).

        wasn’t there a study showing that evaluations done after a few minutes were not significantly different from those done at the end of the the semester ?

        Oh there are plenty similar studies. Perhaps the most famous is this one.
