The number of independent citations to their published work that scientists garner over the course of their careers is widely regarded as a cogent indicator of the impact of their research activity. The seemingly reasonable assumption is that impact ought to correlate with overall research effectiveness and with the quality of the work itself (though it is obviously an imperfect criterion).
So much importance is attributed to citations that some go as far as proposing that a scientist’s whole body of work (productivity, impact on his/her field) be assessed through an analysis of that person’s citation record. An example is offered by the controversial h-index, defined as the largest number h such that h of one’s published articles have each been cited at least h times.
Regardless of how much value one attributes to citations, an immediate, obvious issue arises when counting the number of times one’s work has been referenced, namely:
Should citations that individuals make to their own published articles be counted like any other citation, or should they be given less weight, or even excluded from the count altogether?
To my knowledge, there is no consensus on this issue. The ISI Web of Knowledge search engine counts by default all citations that an author has received, but one is given the option of excluding from the count all self-citations (in fact, any citation that can be construed as such [1]).
A popular (if perhaps not prevalent) opinion is that self-citations ought not to be counted, and at first thought the motivation would appear obvious: authors could easily inflate their own citation record by simply citing as many as possible of their own previous articles in any subsequently published piece of work. Indeed, some have argued that the h-index itself should be corrected for self-citations.
To me, this is very reminiscent of a similar debate, having to do with the number of authors of an article. Should a singly-authored paper “count more” than one co-authored with others, for the purpose of assessing a scientist’s portfolio?
While instinctively most of us might think “yes”, in practice any attempt to devise a scheme aimed at assigning different “weights” to co-authored articles almost inevitably ends up being unfair to some, and ultimately does more harm than good. For example, how would one compare a poorly cited, scarcely read singly-authored article to one co-authored by several scientists, which is widely read and cited in a given community? Why penalize collaborative, high-quality work, especially in those fields of science where collaboration is almost a necessity these days, given the scope and complexity of the research to be carried out? And how should the number of authors be taken into account? How many is “too many”? If we are going down that path, then we really need to know who did what…
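To make concrete what such a scheme might look like, here is a minimal sketch of one commonly proposed rule, fractional counting, in which each paper contributes 1/N of a “publication credit” to each of its N authors. The function and the numbers are my own illustration, not a rule any particular evaluator is known to use:

    def fractional_credit(author_counts):
        # Each paper contributes 1/N to each of its N authors,
        # regardless of how widely read or cited the paper is.
        return sum(1.0 / n for n in author_counts)

    # A scarcely read solo paper "counts" four times as much as a
    # widely cited article written with three collaborators:
    print(fractional_credit([1]))  # 1.0
    print(fractional_credit([4]))  # 0.25

Note that the weighting is blind to precisely the distinctions that matter (readership, citations, actual contribution), which is the point being made above.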
I think that most of us are quite comfortable working with raw numbers of (co-)authored articles, with the full understanding that it is merely a starting point. For evaluation or hiring purposes, for example, a more in-depth examination of one’s publication record is obviously in order, and yes, there are cases where one might appropriately raise eyebrows at a publication list featuring only, or predominantly, multiply authored articles, especially in a field where it is possible for a sole investigator to make meaningful, original contributions (e.g., theoretical physics).
The same applies to self-citations, in my opinion.
I say, a citation is a citation is a citation, and there is really no compelling reason to remove self-citations from the count, much less a clear-cut, fair way of doing so.
Just because it is a self-citation does not mean that it is illegitimate, self-serving or fraudulent. Quite the contrary: most citations to one’s own work are no less appropriate or warranted than those to someone else’s work (including those to articles authored by the anonymous referee, who sets them as a condition for publication). What would be the rationale for not counting, or regarding as lesser, citations to one’s seminal work, if a number of projects spun off, were carried out by the same investigator (or his/her students), and articles were published, all citing the original paper [2]? The truth is, prior publications by anyone that are relevant to, and/or constitute part of the foundation of, the research work described in a manuscript should be cited, simple as that. It is only normal that one’s previous work form the basis for further developments, and if it does, why not acknowledge that? There is a difference, I think, between a project that evolves into something greater, and in time generates intriguing, original questions for others to answer (including one’s former students and postdocs), and one that does not.
Of course, in order to increase the number of “hits”, a scientist could be tempted to break down work that really should be published as a single manuscript into many smaller articles, all published separately, each one citing all the others. Look, let us not be naive, I know some who do that; but they are very few, and it is nonsensical to conclude from this that all self-citations fall into such a scheme.
And really, by how much can one boost one’s h-index by means of self-citations? Here too, I am sure that some cases exist of scientists skillfully using self-citations to bring their own index up, but, aside from the fact that in principle we all could do it, and therefore no one is at a comparative advantage, it seems like an awfully hard way to accomplish that goal [3].
There are scientists who run large operations, with many postdocs and/or graduate students working on different but related projects, in turn publishing a lot of articles making reference to one another. I suppose that in those cases self-citations could make a non-trivial difference to their h-index, but this would occur largely as a result of the scientist’s overall productivity. One ought not to forget that productivity, while surely not the most important aspect, is valued by most of us, and therefore it does not seem unreasonable that a measure like the h-index, which aims at being all-encompassing, should reflect it to a degree.
Once again, I think that this entire discussion originates from a fundamental misunderstanding:
The problem is not with indices and how they are defined; it is with what one does with them, i.e., how much reliability one is willing to ascribe to any numerical measure when it comes to evaluating something as complex as one’s career achievements, in science or any other profession.
[1] It is worth clarifying what is meant by this. Say scientists A and B co-author a paper, and say B cites it N times, in as many successive papers of which A is not a co-author. ISI Web of Knowledge will regard these as N self-citations, for both authors A and B.
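In other words, the filter appears to flag a citation as a “self-citation” whenever the citing and cited papers share at least one author, regardless of whose record is being queried. A minimal sketch of that rule, as I understand it from the example above (the function name is mine, not ISI’s):

    def is_flagged_self_citation(citing_authors, cited_authors):
        # Flagged if the citing and cited papers share any author.
        return bool(set(citing_authors) & set(cited_authors))

    # B's solo paper cites the paper co-authored by A and B:
    print(is_flagged_self_citation({"B"}, {"A", "B"}))  # True
    # The test depends only on author overlap, so the citation is
    # excluded from the "without self-citations" count of A as well,
    # even though A did not write the citing paper.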
[2] Especially if doctoral theses were written based upon these “child” projects.
[3] Consider, for instance, a scientist whose h-index is currently 15, wanting to bring it up to 20 by means of self-citations. Let us assume that said scientist has published 15 articles which have been cited 20 times each, and five articles with 10 citations each. Each one of these five articles needs to be cited ten more times in order for the person’s h-index to be boosted by 5. That means writing ten additional articles, many of them presumably on the same subject, for the main (or sole) purpose of citing those five… seriously, how many people do you know who do “science” like that? In my experience, anyone indulging in such behaviour is easily spotted, ends up eliciting suspicion, and for the most part is dismissed and/or subjected to ridicule by peers.
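For the skeptical reader, the arithmetic is easy to check with a minimal sketch of the standard h-index computation (h is the largest number such that h papers have at least h citations each):

    def h_index(citations):
        # h = largest rank such that the paper at that rank (in
        # decreasing order of citations) has at least that many citations.
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, c in enumerate(counts, start=1):
            if c >= rank:
                h = rank
        return h

    before = [20] * 15 + [10] * 5
    print(h_index(before))  # 15

    # Ten new papers, each citing all five 10-citation articles once,
    # add 10 citations to each of those five:
    after = [20] * 15 + [20] * 5
    print(h_index(after))   # 20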