The purpose of this paper is to elaborate the least-squares formalism for averaging any number of correlated data. A procedure is considered, in a rather general formulation, which serves to determine m parameters from n correlated experimental data on possibly different functions of the parameters. If m > n, the very same procedure applies to the adjustment of a given library of m correlated parameters by a set of n relevant correlated data, which might even be correlated with the given parameters. However, casual or indifferent application of the procedure might yield erroneous or even meaningless results. A very simple, and relevant, example is the combination of averages over partially overlapping sets of data. Such a combination yields a result which differs from the true average of the data, i.e. the average over the union of the partial data sets. Further, when the measured quantities are nonlinear functions of the parameters, special care should be taken not to exceed the range of validity of the conventional, i.e. linearized, algorithm. One should also examine the mutual consistency of the given data, for even a few outliers, highly improbable as they are in a limited data set, might severely bias the resulting estimates of the unknown parameters. We present several observations and offer some comments on the applicability of the least-squares combination of correlated data.
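The overlapping-sets pitfall can be illustrated with a minimal numerical sketch (hypothetical values, Python with NumPy; the data and covariances below are illustrative assumptions, not taken from the paper). Three independent unit-variance measurements are averaged once directly, and once by first averaging two overlapping subsets and then combining those partial averages, properly accounting for the correlation induced by the shared datum:

```python
import numpy as np

def gls_average(y, V):
    """Least-squares average of correlated data y with covariance
    matrix V; returns (estimate, variance of the estimate)."""
    Vinv = np.linalg.inv(V)
    one = np.ones(len(y))
    norm = one @ Vinv @ one
    return (one @ Vinv @ y) / norm, 1.0 / norm

# Three hypothetical independent measurements with unit variance.
y = np.array([1.0, 1.2, 1.7])
V = np.eye(3)

true_avg, _ = gls_average(y, V)        # average over the union

# Averages over two overlapping subsets {y1, y2} and {y2, y3}.
a, va = gls_average(y[:2], V[:2, :2])
b, vb = gls_average(y[1:], V[1:, 1:])

# The partial averages share y2 and are therefore correlated:
# cov(a, b) = 1/4 for independent unit-variance data.
Vab = np.array([[va, 0.25], [0.25, vb]])
combined, _ = gls_average(np.array([a, b]), Vab)

print(true_avg)   # (y1 + y2 + y3)/3
print(combined)   # (y1 + 2*y2 + y3)/4: y2 is effectively double-weighted
```

Even with the correlation between the partial averages fully taken into account, the combined estimate weights the shared datum twice and so differs from the average over the union, which is the point made above.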