A clear description of how to calculate Sum of Squares (SS).
Here’s What You Need To Know
Sum of Squares (SS for short) is part of a trio. SS, variance and standard deviation all measure dispersion. When all the scores are the same, each is zero. When there is some variation in scores, all three increase. When there is a lot of variation, such as when everyone has a different score, all three are large. More diversity, more dispersion, higher SS, variance and standard deviation.
Theoretically, SS is the sum of squared deviations from the mean. Conceptually this is fine: the mean is subtracted from each score, each difference is squared, and the squares are added up. In practice, no one calculates SS this way because it is tedious, and the rounding errors get compounded at every step along the way.
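The definitional approach above can be sketched in a few lines of Python (the function name and sample scores here are just illustrations, not from the original):

```python
# Sum of Squares by the definitional formula:
# subtract the mean from each score, square each deviation, add them up.
def sum_of_squares(scores):
    mean = sum(scores) / len(scores)
    return sum((x - mean) ** 2 for x in scores)

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # mean = 5
print(sum_of_squares(scores))        # 32.0
```

With a mean of 5, the deviations are −3, −1, −1, −1, 0, 0, 2, 4; their squares sum to 32.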
The proper way to calculate SS is with a computer. Failing that, use a formula.
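One common hand-calculation shortcut is the raw-score (computational) formula, SS = Σx² − (Σx)²/n, which avoids subtracting the mean from every score. The original doesn't name a specific formula, so this is offered as one standard option; a minimal sketch:

```python
# Raw-score (computational) formula: SS = sum of x^2 minus (sum of x)^2 / n.
# Algebraically identical to the definitional formula, but fewer subtractions.
def ss_raw_score(scores):
    n = len(scores)
    return sum(x ** 2 for x in scores) - sum(scores) ** 2 / n

scores = [2, 4, 4, 4, 5, 5, 7, 9]
print(ss_raw_score(scores))  # 32.0 — matches the definitional result
```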
Variance is the second member of the dispersion trio. It is the average of the squared deviations. The variance of a population is SS divided by the number of scores (N, for short). The variance of a sample is SS divided by n − 1 (that is, the number of scores minus one).
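The population and sample versions differ only in the divisor, as a quick sketch shows (function names and scores are illustrative):

```python
# Variance = SS divided by N (population) or n - 1 (sample).
def variance_population(scores):
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)
    return ss / n          # population: divide by N

def variance_sample(scores):
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)
    return ss / (n - 1)    # sample: divide by n - 1

scores = [2, 4, 4, 4, 5, 5, 7, 9]   # SS = 32, n = 8
print(variance_population(scores))   # 32 / 8 = 4.0
print(variance_sample(scores))       # 32 / 7 ≈ 4.57
```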
The more different the scores of a distribution are, the larger the variance. The more homogenous the scores, the smaller the dispersion. If all the scores are identical, variance equals zero. If you compare two distributions, the one with the smallest variance has the least dispersion.
The nice thing about variance is that the concept is in the name. Variance measures variation from the mean. It is the average amount of squared deviation.
Standard deviation is the third part of the dispersion trio, and it is interpreted the same way. It is the square root of the variance. A large standard deviation (often symbolized as SD or s) means scores differ. A small one means less diversity. Zero means everyone has the same score.
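Since the standard deviation is just the square root of the variance, it falls out of the same calculation; a minimal sketch (the `sample` flag and names are illustrative):

```python
import math

# Standard deviation = square root of the variance.
# sample=True divides SS by n - 1; sample=False divides by N.
def std_dev(scores, sample=True):
    n = len(scores)
    mean = sum(scores) / n
    ss = sum((x - mean) ** 2 for x in scores)
    return math.sqrt(ss / (n - 1) if sample else ss / n)

scores = [2, 4, 4, 4, 5, 5, 7, 9]     # population variance = 4.0
print(std_dev(scores, sample=False))   # 2.0
```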
For more on the topic, check out Statistics.