Refereeing comes down to two determinations:

1) Are the claims made in the paper correct?

2) Are the claims made in the paper interesting and novel?

The first issue is objective (are the proofs correct? are the empirical or model results correct? are all methods applied appropriately?). The second is completely subjective. Is it "enough" of a step forward from previous work? Is the question it answers "important"? Who is to say?

This is a fundamental issue. As previous posters said, ultimately both issues are to be sorted out by editors. However, referees do play an important part in the second point. Perhaps it's not obvious why something is a big deal if you don't work in the exact literature. Referees should make arguments for why it is or is not.

You're describing a utopian system of refereeing that does not exist in econ.

1) For theory work, nobody reads proofs. In fact, it's shocking how many published theory papers with many citations have glaring issues in their math. Indeed, papers that cite the original work continue to use its flawed proofs to derive results. Why? Because nobody checks. If the original paper could use 1 + 1 = -27 to get published in a top-5, then it's natural to follow its paper-writing strategy and imitate its "proof method". For empirical work, when's the last time anybody bothered to replicate the tables in published papers, even with publicly available data? And let's not even get started with the proprietary / secret data circus show.

2) "Interesting and novel" --- there's always some element of subjective judgment in any science, and I'm fine with that. What I'm not fine with is the differential treatment between a paper written by a big shot and a paper written by a nobody. If a nobody proposes idea X, the referees could dismiss it as "obvious", "uninteresting", or even something as outrageous as "poorly written". Yet if a big shot proposes the exact same idea X, the referees could hail it as "groundbreaking", "innovative", or "very well written and executed".

I am an empiricist, but as a grad student I was asked to review a paper written by a prominent theorist in my field. I felt honored, so I took about 3 days to work through literally everything in the paper. There was a part of the paper where the result jumped from step A to step F. After banging my head against the wall for a solid day, I concluded that the only way to get from A to F was to make a bunch of really strong assumptions that were nowhere to be found in the paper. I assumed I was wrong, but noted in the report that I thought going from A to F required making a certain set of assumptions. The authors simply confirmed in their response that I was right, and the paper was published. That was precisely when I figured out that theory is as much BS as empirical work, if not more.