
Split half reliability

Split half reliability expresses how reliable a test's scores are by comparing one half of the test's items with the other half.

A somewhat unusual way to assess reliability is to split the items of the test into two groups. There are several ways to form these groups, but two are most common. The first is to split the items into the first half and the second half of the test. The second is to place the odd-numbered items in one group and the even-numbered items in the other.
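The two splitting methods can be sketched as follows. The item scores below are hypothetical; each row holds one respondent's answers to six items.

```python
# Hypothetical item-score matrix: rows are respondents, columns are items.
scores = [
    [4, 3, 5, 2, 4, 3],
    [2, 2, 3, 1, 2, 2],
    [5, 4, 5, 4, 5, 4],
]

n_items = len(scores[0])

# Method 1: first half of the items vs. second half.
first_half = [row[: n_items // 2] for row in scores]
second_half = [row[n_items // 2 :] for row in scores]

# Method 2: odd-numbered items vs. even-numbered items (1-based numbering).
odd_items = [row[0::2] for row in scores]   # items 1, 3, 5
even_items = [row[1::2] for row in scores]  # items 2, 4, 6

print(first_half[0], second_half[0])  # [4, 3, 5] [2, 4, 3]
print(odd_items[0], even_items[0])    # [4, 5, 4] [3, 2, 3]
```

The odd/even split is often preferred when the test gets harder toward the end, because it spreads item difficulty evenly over both halves.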

Next, the scores on the two halves are compared: do the items in both groups have an equal mean, standard deviation, range, minimum, maximum, skewness, and kurtosis? Each of these comparisons can be checked with a statistical test. If no differences are found, the reliability appears to be adequate.
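A minimal sketch of such a comparison, computing a few of the descriptive statistics named above for each half. The half-test data are hypothetical, and each respondent's half score is taken as the sum of their item scores on that half.

```python
import statistics

def describe(half_scores):
    """Summary statistics of per-respondent totals on one half of the test.

    half_scores: list of per-respondent item-score lists (hypothetical data).
    """
    totals = [sum(items) for items in half_scores]
    return {
        "mean": statistics.mean(totals),
        "sd": statistics.stdev(totals),
        "min": min(totals),
        "max": max(totals),
        "range": max(totals) - min(totals),
    }

# Hypothetical odd/even halves for three respondents.
odd = [[4, 5, 4], [2, 3, 2], [5, 5, 5]]
even = [[3, 2, 3], [2, 1, 2], [4, 4, 4]]

print(describe(odd))
print(describe(even))
```

Skewness and kurtosis are omitted here for brevity; with more than a handful of respondents, a library such as SciPy is the more practical route for those.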

The best indicator of split half reliability is the correlation between respondents' scores on the two halves. When the correlation coefficient is 1, perfect split half reliability has been achieved; the further the coefficient falls below 1, the lower the split half reliability. Because each half contains only half of the items, this correlation is in practice often adjusted upward with the Spearman-Brown formula to estimate the reliability of the full-length test.
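The correlation step can be sketched as follows. The per-respondent half totals are hypothetical; the Pearson coefficient is computed directly from its definition, and the Spearman-Brown adjustment is applied afterward.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equally long lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-respondent totals on each half of the test.
odd_totals = [13, 7, 15, 10, 12]
even_totals = [12, 6, 14, 9, 12]

r = pearson(odd_totals, even_totals)

# Spearman-Brown adjustment: estimated reliability of the full-length test.
full_test = (2 * r) / (1 + r)

print(round(r, 3), round(full_test, 3))
```

With real data the same numbers come from e.g. `scipy.stats.pearsonr`; the hand-rolled version above only serves to make the computation explicit.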

Related topics to Split Half Reliability

  • Bias 
  • Reliability (general / overview) 
  • Test-retest reliability 
  • Test-test reliability 
  • Internal consistency (Cronbach's alpha) 
  • Interobserver reliability
  • Interviewer bias