@d95d and others on sqrt(n):

I never really understood this logic. Suppose you have a journal that is supposed to be good, like the AER, and that journal is assigned a value of 1 as a sign of relative quality. Less esteemed journals get a lower ranking value. So let us stick with an AER article with a ranking value of 1.

If the article is single-authored, that author gets all the credit. So she/he gets a 1.

If there are two authors, then an author-level 1/sqrt(n) rule means each author gets credit 1/sqrt(2). OK. What is the value of the paper in the ranking then? It should be 1/sqrt(2) + 1/sqrt(2) = 2/sqrt(2) = sqrt(2). But that is bigger than one. So 'mysteriously' the aggregate worth of the paper is sqrt(2) > 1, even though 1 was the ranking relative to all the other journals. In my understanding this inflates the original ranking of the journal in question and the 'worth' of the piece of research in that outlet.

As an aside: Suppose you have 6 authors. Then the aggregate value of that one piece of research in the AER suddenly becomes 6 * 1/sqrt(6) = sqrt(6) ≈ 2.45 >> 1. And so on as n grows large.
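The arithmetic behind the inflation is easy to check. A minimal sketch (the rule names are my own labels, not anyone's official terminology) comparing aggregate paper worth under the sqrt rule and an equal split:

```python
import math

def per_author_credit(n, rule):
    """Per-author credit for an n-author paper whose journal ranking value is 1."""
    if rule == "equal":
        return 1 / n             # equal split: n * (1/n) = 1, total is preserved
    if rule == "sqrt":
        return 1 / math.sqrt(n)  # sqrt rule: n * (1/sqrt(n)) = sqrt(n), total inflates
    raise ValueError(f"unknown rule: {rule}")

for n in (1, 2, 6):
    total_sqrt = n * per_author_credit(n, "sqrt")
    total_equal = n * per_author_credit(n, "equal")
    print(f"n={n}: sqrt-rule total={total_sqrt:.3f}, equal-split total={total_equal:.3f}")
# n=1: sqrt-rule total=1.000, equal-split total=1.000
# n=2: sqrt-rule total=1.414, equal-split total=1.000
# n=6: sqrt-rule total=2.449, equal-split total=1.000
```

So under the sqrt rule the paper's aggregate worth is exactly sqrt(n), while the equal split always sums back to the journal's ranking value of 1.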

That does not make sense to me. I am fully aware that there may be huge credit-allocation problems within teams, but an equal-sharing scheme seems pretty reasonable to many people. Of course, single authors do not have that problem. Turning this around: single-authored papers get 'deflated' under the sqrt(n) rule relative to coauthored ones.

Thanks for the input. I guess the question is what we want to achieve with the ranking. One view is that we want to estimate what impact the individual scientist has on science.

Since impact is usually measured by citations, but citations accrue only many years in the future, we use journal impact factors (average number of citations per paper) as a proxy. This is basically a prediction of future citations.

Now the challenge is to extract, from this prediction of the impact per article, the individual impact of each author. Depending on your production function (e.g. if you assume the authors' inputs are complements), the sum of the marginal impacts of the individual authors can be larger than the total impact of the article.
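A toy example of that point, with numbers I made up purely for illustration: if authorship is strongly complementary, removing either author destroys most of the output, so each author's marginal impact is large and the marginals sum to more than the total.

```python
# Hypothetical counterfactuals, purely illustrative: a 2-author paper has
# impact 1.0, but either author working alone would only have produced 0.2.
team_impact = 1.0
solo_impact = 0.2  # assumed counterfactual output of one author alone

# Marginal impact of each author = team output minus output without them.
marginal_each = team_impact - solo_impact   # 1.0 - 0.2 = 0.8
sum_of_marginals = 2 * marginal_each        # 1.6

print(sum_of_marginals)               # 1.6
print(sum_of_marginals > team_impact) # True: marginals sum to more than the total
```

This is the sense in which per-author credits summing to more than 1 need not be a mistake; it depends entirely on the assumed production function.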

Ahmadpoor and Jones in PNAS (2019) have a fantastic paper on how to assign credit in teams. They show a) team work has much more ex-post impact than single-authored papers (i.e. team work is efficient and should count for more than an equal share of average impact), and b) the right way to weight coauthor contributions is the harmonic mean.
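For readers unfamiliar with the term: I am not reproducing Ahmadpoor and Jones's exact weighting scheme here, only the generic definition. The harmonic mean of positive shares w_1..w_n is n / (1/w_1 + ... + 1/w_n), and it is always at most the arithmetic mean, so it downweights relative to a plain average. A quick sketch with hypothetical contribution shares:

```python
from statistics import harmonic_mean

# Hypothetical coauthor contribution shares, for illustration only.
shares = [0.5, 0.3, 0.2]

hm = harmonic_mean(shares)          # n / sum(1/w_i)
am = sum(shares) / len(shares)      # plain average, here 1/3

print(f"harmonic mean = {hm:.4f}")  # harmonic mean = 0.2903
print(f"arithmetic mean = {am:.4f}")  # arithmetic mean = 0.3333
print(hm <= am)                     # True: the harmonic mean never exceeds it
```

How exactly that mean enters their credit-assignment formula is best taken from the paper itself.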