What implications do these observations have for the assessment of the models described above? A natural response is to stress the role of the models as ideals. What matters, it may be said, is not whether the explanations we are actually able to produce conform in all respects to the models, but rather the normative role of the models — they provide a standard (of full, complete explanation) against which candidate actual explanations can be assessed, and thus suggest how such explanations might be improved. There are several problems with this response. First, at least in some cases, it is far from clear in what sense the models really describe ideals of explanation. For example, it is far from clear that an explanation of the behavior of a market that fully satisfied the requirements of the CM model would be “better” than the explanation provided by standard economic theory, or that the “best” statistical explanation of Jones' delinquency is one that fully satisfies the objective homogeneity requirement (assuming that this would be a quantum mechanical explanation). Second, to the extent that ideal explanations are epistemically unavailable, it is not obvious how they can provide a standard or yardstick against which non-ideal explanations are to be measured. At the very least, we require some additional account of what it means for an explanation to be more or less “close” to an ideal (what the “metric” of closeness is) and of how closeness is to be assessed in contexts in which we don't know in detail what the ideal looks like. For example, if an ideal statistical explanation of Jones' delinquency must satisfy the objective homogeneity requirement, how exactly do we go about comparing candidate non-ideal explanations, all of which fail to satisfy this requirement, with respect to how “close” each comes to satisfying objective homogeneity?