Well, we have come to the part of the series you have all been waiting for: discussion of statistics. What, you are not as excited about it as I am? Statistics are not your favorite part of the study? I am frankly shocked! But seriously, I think this is the part that most people, both lay public and professionals, find off-putting. But fear not, for we will deconstruct it all in simple terms here. Or obfuscate further, one or the other.

So, let's begin with a motto of mine: If you have good results, you do not need fancy statistics. This goes along with the ideas in math and science that truth and computational beauty go hand in hand. So, if you see something very fancy that you have never heard of, be on guard for less than important results. This, of course, is just a rule of thumb, and, as such, will have exceptions.

The general questions I like to ask about statistics are: (1) Are the analyses appropriate to the study question(s)? And (2) are the analyses optimal for the study question(s)? The first thing to establish is the integrity and completeness of the data. If the authors enrolled 365 subjects but were only able to analyze 200 of them, this is suspicious. So, you should be able to discern how complete the dataset was, and how many analyzable cases there were. A simple litmus test is that if more than 15% of the enrolled cases did not have complete data for analysis or dropped out of the study for other reasons, the study becomes suspect for a selection bias. The greater the proportion of dropouts, the greater the suspicion.
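The 15% litmus test above is easy to sketch in code. This is a minimal illustration using the hypothetical enrollment numbers from the example; the function name is mine, not from any standard library.

```python
def dropout_fraction(enrolled, analyzed):
    """Fraction of enrolled subjects missing from the final analysis."""
    return (enrolled - analyzed) / enrolled

# The hypothetical study from the text: 365 enrolled, 200 analyzed
frac = dropout_fraction(365, 200)
print(round(frac, 2))   # 0.45 -- well above the 15% rule of thumb
print(frac > 0.15)      # True: suspect a selection bias
```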

Once you have established that the set is fairly complete, move on to the actual analyses. Here, first things first: the authors need to describe their study group(s); hence, descriptive statistics. Usually this includes so-called "baseline characteristics", consisting of demographics (age, gender, race), comorbidities (heart failure, lung disease, etc.), and some measure of the primary condition in question (e.g., pneumonia severity index [PSI] in a study of patients with pneumonia). Other relevant characteristics may be reported as well, and this is dependent on the study question. As you can imagine, categorical variables (once again, these are variables that have categories, like gender or death) are expressed as proportions or percentages, while continuous ones (those that are on a continuum, like age) are represented by their measures of central tendency.
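To make the categorical/continuous distinction concrete, here is a tiny sketch with made-up data for five hypothetical subjects (the values are illustrative only, not from any real study):

```python
from statistics import mean

# Hypothetical baseline data for five subjects
ages = [54, 61, 72, 48, 65]            # continuous variable
died = [False, True, False, False, True]  # categorical (binary) variable

# Continuous -> a measure of central tendency
print(mean(ages))                      # 60

# Categorical -> a proportion or percentage
print(100 * sum(died) / len(died))     # 40.0
```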

It is important to understand the latter well. There are three major measures of central tendency: mean, median and mode. The mean is the sum of all individual values of a particular variable divided by the number of values. So, mean age among a group of 10 subjects would be calculated by adding all 10 individual ages and then dividing by 10. The median is the value that occurs in the middle of a distribution. So, if there are 25 subjects with ages ranging from 5 to 65, the median value is the one that occurs in subject number 13 when subjects are arranged in ascending or descending order by age. The mode, a measure used least frequently in clinical studies, signifies, somewhat paradoxically, the value in a distribution that occurs most frequently.
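The three definitions above map directly onto Python's standard library. A quick sketch with ten hypothetical ages:

```python
from statistics import mean, median, mode

# Ten hypothetical subject ages, chosen only to illustrate the definitions
ages = [5, 12, 18, 18, 25, 33, 41, 50, 58, 65]

print(mean(ages))    # sum of all values divided by their number: 32.5
print(median(ages))  # middle of the ordered distribution
                     # (even count, so the average of 25 and 33): 29.0
print(mode(ages))    # the value that occurs most frequently: 18
```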

So, let's focus on the mean and the median. The mean is a good representation of the central value in a normal distribution. Also referred to as a bell curve (yes, because of its shape) or a Gaussian distribution, this type of distribution has roughly equal numbers of points to the left and to the right of the mean value. It looks like this (from wikimedia.org):

For a distribution like the one above it hardly matters which central value is reported, the mean or the median, as they are the same or very similar to one another. Alas, most descriptors of human physiology are not normally distributed, but are more likely to be skewed. Skewed means that there is a tail at one end of the curve or the other (figure from here):

For example, in my world of health economics, many values for such variables as length of stay and costs spread out to the right of the center, similar to the blue curve in the right panel of the above figure. In this type of a distribution the mean and the median values are not the same, and they tell you different things. While the median gives you an idea of the central tendency of the bulk of the distribution, which is tightly clustered at the end opposite the tail, the mean gets pulled in the direction of the tail. For a distribution similar to the one in the right panel, the mean will overestimate the central measure.
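You can see the mean and median diverge with a few lines of code. These hypothetical lengths of stay (in days) are mostly short, with a couple of long stays forming a right tail:

```python
from statistics import mean, median

# Hypothetical lengths of stay: a tightly clustered bulk plus a right tail
los = [2, 2, 3, 3, 3, 4, 4, 5, 14, 30]

print(median(los))  # 3.5 -- tracks the tightly clustered bulk of the data
print(mean(los))    # 7.0 -- pulled toward the long right tail
```

Two outlying stays are enough to double the mean relative to the median, which is why skewed cost and length-of-stay data are usually summarized with medians.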

To round out the discussion of central values, we need to say a few words about scatter around these values. Because they represent a population and not a single individual, measures of central tendency will have some variation around them that is specific to the population. For a mean value, this variation is usually represented by the standard deviation (SD), though sometimes you will see a 95% confidence interval instead; note that the SD describes the spread of the data themselves, while the confidence interval describes the precision of the estimated mean. Variation around the median is usually expressed as the range of values falling into the central one-half of all the values in the distribution, discarding the 25% at each end, or the interquartile range (IQR 25, 75) around the median. These measures of scatter tell you how stable and precise the reported estimates are, and they are important to look for in studies.
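Here is a small sketch of both kinds of scatter, using made-up ages. The quartiles are computed in a deliberately simple way (medians of the lower and upper halves) for illustration; real statistical packages offer several quartile conventions that can differ slightly.

```python
from statistics import mean, median, stdev

# Eight hypothetical ages, sorted for the quartile split below
ages = sorted([22, 25, 28, 31, 34, 40, 47, 55])

# Scatter around the mean: the sample standard deviation
print(round(mean(ages), 1))   # 35.2
print(round(stdev(ages), 1))  # sample SD of the ages

# Scatter around the median: a simple interquartile range,
# taken as the medians of the lower and upper halves
lower, upper = ages[:4], ages[4:]
print(median(lower), median(upper))  # 26.5 43.5 -- the IQR bounds
```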

We'll end this discussion here for the moment. In the next post we will tackle inter-group differences and hypothesis testing.
