Standard deviation and variance are statistical measures of dispersion: they describe how far individual data points lie from the mean, or average, of a data set. Knowing how spread out the data are lets you draw more accurate conclusions about a population or sample, because you can identify outliers and extreme values and note how they affect your results.

Calculating Variance

Variance is defined as the mean of the squared differences between the individual data points and the mean. To calculate the variance of a data set, first calculate the mean by adding up all of the data points and dividing the total by the number of points. Then subtract the mean from each data point, square each of those differences, and take the mean of the squared differences.
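
As a rough sketch of those steps in Python, using a small made-up data set purely for illustration (dividing by the number of points gives the population variance; many tools also offer a sample variance that divides by n - 1):

```python
data = [4, 8, 6, 5, 3, 7]                 # hypothetical data points

mean = sum(data) / len(data)              # step 1: add the points and divide by the count

squared_diffs = [(x - mean) ** 2 for x in data]   # step 2: square each difference from the mean

variance = sum(squared_diffs) / len(squared_diffs)  # step 3: take the mean of the squared differences

print(mean)       # 5.5
print(variance)   # 2.9166...
```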

Calculating Standard Deviation

The standard deviation is calculated simply by taking the square root of the variance. Programs such as Microsoft Excel and SPSS calculate both the variance and the standard deviation for you, which is especially useful with large data sets, where working by hand is slow and error-prone.
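
Continuing the sketch above, the standard deviation is just the square root of the variance computed earlier; the standard library's statistics.pstdev is used here only as a cross-check on the same population standard deviation:

```python
import math
import statistics

data = [4, 8, 6, 5, 3, 7]                 # same made-up data set as above

mean = sum(data) / len(data)
variance = sum((x - mean) ** 2 for x in data) / len(data)

std_dev = math.sqrt(variance)             # standard deviation = square root of the variance

print(std_dev)                            # 1.7078...
print(statistics.pstdev(data))            # 1.7078... (matches the by-hand result)
```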

Conceptual Difference Between Standard Deviation and Variance

The variance of a data set measures the mathematical dispersion of the data relative to the mean. However, though this value is theoretically correct, it is difficult to interpret in a real-world sense because it is expressed in squared units: if the data are measured in meters, for example, the variance is in square meters. The standard deviation, as the square root of the variance, gives a value in the same units as the original data, which makes it much easier to work with and easier to interpret in conjunction with the concept of the normal curve.

Standard Deviation, Variance and the Normal Curve

A normal curve is a theoretical graphical representation of a data set in which the values are distributed symmetrically around the mean, with the majority of the values falling fairly close to it. On the normal curve, the range within one standard deviation of the mean in either direction contains about 68 percent of the population being measured. In terms of variance, less variance means a narrower curve, so that 68 percent band sits closer to the mean, while more variance spreads the same share of the data over a wider range.
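
A quick way to see this is with Python's statistics.NormalDist, which models a theoretical normal curve; the standard deviations below are arbitrary examples:

```python
from statistics import NormalDist

# For a normal curve, the share of values within one standard deviation of the
# mean is about 68 percent no matter how large the standard deviation is;
# a smaller standard deviation just makes that band narrower.
for sd in (1, 2, 10):                       # hypothetical standard deviations
    dist = NormalDist(mu=0, sigma=sd)
    share = dist.cdf(sd) - dist.cdf(-sd)    # probability within mean +/- 1 standard deviation
    print(sd, round(share, 4))              # prints 0.6827 in every case
```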