Why do optimization textbooks, as well as the math appendices of MWG, Jehle & Reny, etc., simply state conditions in terms of continuous differentiability, even when the weaker condition of differentiability is sufficient?
I know that continuous differentiability is easier to check in practice, but why sacrifice the rigor?
Differentiability vs Continuous differentiability
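For concreteness, the standard textbook example of the gap in question (my own addition, not from the thread) is f(x) = x² sin(1/x) with f(0) = 0, which is differentiable everywhere but whose derivative is not continuous at 0:

```python
import math

# f(x) = x^2 sin(1/x), f(0) = 0: differentiable everywhere, but not C1 at 0.

def f(x):
    return x * x * math.sin(1.0 / x) if x != 0 else 0.0

def f_prime(x):
    # For x != 0: f'(x) = 2x sin(1/x) - cos(1/x); f'(0) = 0 by the limit definition,
    # since |f(h)/h| <= |h| -> 0.
    return 2 * x * math.sin(1 / x) - math.cos(1 / x) if x != 0 else 0.0

# Difference quotients at 0 shrink to 0, so f'(0) exists and equals 0:
for h in (1e-2, 1e-4, 1e-6):
    print(h, f(h) / h)

# But f' oscillates between roughly -1 and +1 arbitrarily close to 0
# (the cos(1/x) term), so f' has no limit at 0:
print(f_prime(1 / (2 * math.pi)), f_prime(1 / (3 * math.pi)))
```

So any theorem stated for C1 functions genuinely excludes this f near 0, even though f is differentiable there.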


Why state results using differentiability at all when you can use subgradients?
It's a textbook trying to give intuition to students, not serve as a compendium of the most general results. It's like getting mad at an algebra book that states results about groups when they hold more generally for monoids or something.

that's a terrible example to give. stating the minimal conditions for an optimization routine to work is pretty crucial.
The theorems are much easier to prove under the stronger continuity assumptions, and (generally speaking) economic models are well enough behaved that the stronger assumptions hold. Since we're doing economics and not math, it's usually not worth the time to labor over the finer points of functional analysis.
There are math textbooks which do elaborate on these details. It's difficult to make a good recommendation without knowing your background, and specifically what you're trying to better understand.

OP is confusing "sacrificing rigor" with theorems that have weaker assumptions and hence, necessarily, weaker conclusions. Generally, there is this tradeoff.
In that sense, the example you're dismissing is actually perfectly good, and you're also confused in thinking that "minimal conditions" yield equally strong results.
I agree in general on the value of mathematical rigor. But for the problems economists usually work with, distinguishing between differentiability and continuous differentiability does not add much; and even if during your research you arrive at a point where you have to think about it, that probably means you are able to figure it out yourself and don't have to look to MWG like a grad student.

Many results in economic theory are cleaner to state under a C1 assumption than in the most general case where "the same" result goes through. As a very simple example, think of the amount a firm will produce of a good with a given market price p and a strictly convex cost function c : (0, ∞) → ℝ that satisfies Inada conditions.
In general, there is a meaningful notion of "the place the first-order condition is satisfied": it's the unique q at which p is in the subdifferential of c at q. But if c is C1 (and mere differentiability of c is not enough), I can say this much more cleanly as "the unique q at which c'(q) = p". It depends on the broad topic/level of the textbook, but I'd rather have the latter in most textbooks.
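To make the firm example concrete, here's a minimal Python sketch (the quadratic cost c(q) = q², and hence the analytic answer q* = p/2, is my own illustrative choice, not from the thread):

```python
# Firm's problem with cost c(q) = q^2, so c'(q) = 2q, and the
# first-order condition c'(q) = p has the unique solution q* = p/2.

def c_prime(q):
    return 2.0 * q

def optimal_quantity(p, lo=0.0, hi=1e6, tol=1e-10):
    """Solve c'(q) = p by bisection; this works because c' is continuous
    and strictly increasing (c strictly convex and C1)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if c_prime(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(optimal_quantity(3.0))  # analytic answer: p/2 = 1.5
```

Note the bisection step silently relies on continuity of c': with a merely differentiable c, c' can fail the intermediate value property's nice behavior near a point, which is exactly the cleanliness the C1 statement buys.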

I don't think your example is very good, because the objective functions in real-world optimization problems are often not that smooth. So it is actually very good practice to introduce as much generality as possible in optimization course material. This doesn't apply to abstract algebra, which is already presenting very general structures (like groups and fields). Continuous differentiability is often a rather prohibitive condition in many contexts.
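To illustrate the non-smooth case: a minimal subgradient-descent sketch on f(x) = |x| (my own toy example), which is convex but not differentiable at its minimizer x = 0, where the subdifferential is the whole interval [-1, 1]:

```python
# Subgradient descent on f(x) = |x| with diminishing step sizes 1/k,
# a standard choice for non-differentiable convex objectives.

def subgrad_abs(x):
    # Any g in [-1, 1] is a valid subgradient at 0; we pick 0 there.
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

x = 5.0
for k in range(1, 2001):
    x -= (1.0 / k) * subgrad_abs(x)

print(abs(x))  # approaches the minimizer x = 0
```

No derivative-based condition like f'(x) = 0 is available here, which is why optimization texts that care about such objectives state results in terms of subgradients rather than (continuous) differentiability.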