# ‘Beyond bar and line graphs’

I came across an interesting paper earlier on data visualisation – published last year in PLoS Biology, Weissgerber et al. set out reasons why bar and line graphs can be misleading when presenting continuous data. It’s well worth a read:

Weissgerber TL, Milic NM, Winham SJ, Garovic VD (2015) Beyond Bar and Line Graphs: Time for a New Data Presentation Paradigm. PLoS Biol 13(4):e1002128. doi: 10.1371/journal.pbio.1002128

The authors also provide some Excel templates in the supplementary information for creating figures that are more useful than bar or line graphs, including scatterplots, paired plots, etc. I have taken their data and created those plots in ggplot2 (using some dplyr and knitr magic to make the data usable), and published that code over at RStudio’s excellent ‘Rpubs’ resource. Check it out here:

http://rpubs.com/tomhouslay/beyond-bar-line-graphs

# Understanding 3-way interactions between continuous and categorical variables, part ii: 2 cons, 1 cat

I posted recently (well… not that recently, now that I remember that time is linear) about how to visualise 3-way interactions between continuous and categorical variables (using 1 continuous and 2 categorical variables), which was a follow-up to my extraordinarily successful post on 3-way interactions between 3 continuous variables (by ‘extraordinarily successful’, I mean some people read it on purpose, and not because they were misdirected by poorly thought-out Google search terms, which is what happens with the majority of my graphic insect-sex posts). I used ‘small multiples’, and also predicted model fits while holding particular variables at distinct values.

ANYWAY…

I just had a comment on that recent post, and it got me thinking about combining these approaches. There are a number of ways we can go about this, so I’ll run through a couple of examples. First, however, we need to make up some fake data! I don’t know anything about bone / muscle stuff (let’s not delve too far into what a PhD in biology really means), so I’ve taken the liberty of just making up some crap that I thought might vaguely make sense. You can see here that I’ve also pretended we have a weirdly complete and non-overlapping set of data, with one observation of bone for every combination of muscle (continuous predictor), age (continuous covariate), and group (categorical covariate). The libraries you’ll need for this script are {dplyr}, {broom}, and {ggplot2}.

```
#### Create fake data ####

library(dplyr)
library(broom)
library(ggplot2)

bone_dat <- data.frame(expand.grid(muscle = seq(50, 99),
                                   age = seq(18, 65),
                                   groupA = c(0, 1)))

## Set up our coefficients to make the fake bone data
coef_int <- 250
coef_muscle <- 4.5
coef_age <- -1.3
coef_groupA <- -150
coef_muscle_age <- -0.07
coef_groupA_age <- -0.05
coef_groupA_muscle <- 0.3
coef_groupA_age_muscle <- 0.093

bone_dat <- bone_dat %>%
  mutate(bone = coef_int +
           (muscle * coef_muscle) +
           (age * coef_age) +
           (groupA * coef_groupA) +
           (muscle * age * coef_muscle_age) +
           (groupA * age * coef_groupA_age) +
           (groupA * muscle * coef_groupA_muscle) +
           (groupA * muscle * age * coef_groupA_age_muscle))

## Histogram of the (still deterministic) bone values, by group
ggplot(bone_dat,
       aes(x = bone)) +
  geom_histogram(color = 'black',
                 fill = 'white') +
  theme_classic() +
  facet_grid(. ~ groupA)

## Add some noise (setting a seed - any number will do - makes this reproducible)
set.seed(42)
noise <- rnorm(nrow(bone_dat), 0, 20)
bone_dat$bone <- bone_dat$bone + noise

#### Analyse ####

mod_bone <- lm(bone ~ muscle * age * groupA,
               data = bone_dat)

plot(mod_bone)
summary(mod_bone)
```

While I’ve added some noise to the fake data, it should be no surprise that our analysis shows some extremely strong effects of interactions… (!)

```
Call:
lm(formula = bone ~ muscle * age * groupA, data = bone_dat)

Residuals:
    Min      1Q  Median      3Q     Max
-71.824 -13.632   0.114  13.760  70.821

Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        2.382e+02  6.730e+00  35.402  < 2e-16 ***
muscle             4.636e+00  8.868e-02  52.272  < 2e-16 ***
age               -9.350e-01  1.538e-01  -6.079 1.31e-09 ***
groupA            -1.417e+02  9.517e+00 -14.888  < 2e-16 ***
muscle:age        -7.444e-02  2.027e-03 -36.722  < 2e-16 ***
muscle:groupA      2.213e-01  1.254e-01   1.765   0.0777 .
age:groupA        -3.594e-01  2.175e-01  -1.652   0.0985 .
muscle:age:groupA  9.632e-02  2.867e-03  33.599  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 19.85 on 4792 degrees of freedom
Multiple R-squared:  0.9728,	Adjusted R-squared:  0.9728
F-statistic: 2.451e+04 on 7 and 4792 DF,  p-value: < 2.2e-16
```

(EDIT: Note that this post is only about how to visualise the results of your analysis; it assumes that you have already done the initial data exploration and analysis steps yourself, and are satisfied that you have the correct final model… I may write a post on this at a later date, but for now I’d recommend Zuur et al.’s 2010 paper, ‘A protocol for data exploration to avoid common statistical problems’. Or you could come on the stats course that Luc Bussière and I run.)

Checking the residuals etc. indicates (to nobody’s surprise) that everything is looking pretty glorious from our analysis. But how do we actually interpret these interactions?

We shall definitely have to use small multiples, because otherwise we shall quickly become overwhelmed. One method is a ‘heatmap’-style approach; this lets us plot in the style of a 3D surface, where our predictors / covariates are on the axes, and different colour regions within the parameter space represent higher or lower values. If this sounds like gibberish, it’s really quite simple to grasp once you see the plot. Here, higher values of bone are in lighter shades of blue, while lower values of bone are in darker shades. Moving horizontally, vertically, or diagonally through combinations of muscle and age shows you how bone changes; moreover, you can see how the relationships differ between groups (i.e., the distinct facets).

To make this plot, I used one of my favourite new packages, {broom}, in conjunction with the ever-glorious {ggplot2}. The code is amazingly simple, using broom’s ‘augment’ function to get predicted values from our linear regression model:

```
mod_bone %>%
  augment() %>%
  ggplot(., aes(x = muscle,
                y = age,
                fill = .fitted)) +
  geom_tile() +
  facet_grid(. ~ groupA) +
  theme_classic()
```

But note one aspect of broom: augment just adds predicted values (and other cool stuff, like standard errors around the predictions) to your original data frame. That means that if you didn’t have such a complete data set, you would be missing predicted values wherever those combinations of variables were absent from your data frame. For example, if we sampled 50% of the fake data, modelled it in the same way, and plotted it, we would end up with gaps all over the heatmap: not quite so pretty. There are ways around this (e.g., using ‘predict’ over the full grid of combinations), but let’s move on to some different ways of visualising the data – not least because I still feel like it’s a little hard to get a handle on what’s really going on with these interactions.
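(Before we move on: here’s a minimal sketch of both the ‘missing tiles’ problem and the predict-based fix, assuming the bone_dat and mod_bone objects from above; bone_samp, mod_samp, and full_grid are just names I’ve made up for this illustration.)

```
## Model a 50% sample of the fake data; augment() can then only return
## fitted values for the combinations that survived the sampling,
## leaving holes in the heatmap
bone_samp <- bone_dat %>% sample_frac(0.5)
mod_samp <- lm(bone ~ muscle * age * groupA, data = bone_samp)

mod_samp %>%
  augment() %>%
  ggplot(., aes(x = muscle, y = age, fill = .fitted)) +
  geom_tile() +
  facet_grid(. ~ groupA) +
  theme_classic()

## The fix: predict over the complete grid of combinations instead
full_grid <- expand.grid(muscle = seq(50, 99),
                         age = seq(18, 65),
                         groupA = c(0, 1))
full_grid$.fitted <- predict(mod_samp, newdata = full_grid)

ggplot(full_grid, aes(x = muscle, y = age, fill = .fitted)) +
  geom_tile() +
  facet_grid(. ~ groupA) +
  theme_classic()
```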

A trick that we’ve seen before for looking at interactions between continuous variables is to look at only high/low values of one, across the whole range of another: in this case, we would show how bone changes with muscle in younger and older people separately. We could then use small multiples to view these relationships in distinct panels for each group (ethnic groups, in the example provided by the commenter above).

Here, I create a fake data set to use for predictions, where I have the full range of muscle (50:99), the full range of groups (0,1), and then age is at 1 standard deviation above or below the mean. The ‘expand.grid’ function simply creates every combination of these values for us! I use ‘predict’ to create predicted values from our linear model, and then add an additional variable to tell us whether the row is for a ‘young’ or ‘old’ person (this is really just for the sake of the legend):

```
#### Plot high/low values of age covariate ####

bone_pred <- data.frame(expand.grid(muscle = seq(50, 99),
                                    age = c(mean(bone_dat$age) +
                                              sd(bone_dat$age),
                                            mean(bone_dat$age) -
                                              sd(bone_dat$age)),
                                    groupA = c(0, 1)))

bone_pred <- cbind(bone_pred,
                   predict(mod_bone,
                           newdata = bone_pred,
                           interval = "confidence"))

bone_pred <- bone_pred %>%
  mutate(ageGroup = ifelse(age > mean(bone_dat$age), "Old", "Young"))

ggplot(bone_pred,
       aes(x = muscle,
           y = fit)) +
  geom_line(aes(colour = ageGroup)) +
  # geom_point(data = bone_dat,
  #            aes(x = muscle,
  #                y = bone)) +
  facet_grid(. ~ groupA) +
  theme_classic()
```

This gives us the following figure. Here, we can see quite clearly how the relationship between muscle and bone depends on age, and that this dependency differs across groups. Cool! This is, of course, likely to be more extreme than you would find in your real data, but let’s not worry about subtlety here…

You’ll also note that I’ve commented out some lines in the specification of the plot. These show you how you would plot your raw data points onto this figure if you wanted to, but it doesn’t make a whole lot of sense here (as it would include all ages), and also our fake data set is so dense that it just obscures meaning. Good to have in your back pocket though!
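If you did want to overlay raw data without drowning the figure, one option is to plot only the observations close to the predicted ages, with some transparency. A quick sketch, assuming the objects above (the ±2-year window, the alpha value, and the bone_raw name are arbitrary choices of mine):

```
## Overlay only the raw points near the 'young'/'old' ages, made translucent
bone_raw <- bone_dat %>%
  filter(abs(age - (mean(age) - sd(age))) < 2 |
           abs(age - (mean(age) + sd(age))) < 2) %>%
  mutate(ageGroup = ifelse(age > mean(age), "Old", "Young"))

ggplot(bone_pred,
       aes(x = muscle,
           y = fit,
           colour = ageGroup)) +
  geom_point(data = bone_raw,
             aes(y = bone),
             alpha = 0.2) +
  geom_line() +
  facet_grid(. ~ groupA) +
  theme_classic()
```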

Finally, what if we were more concerned with comparing the bone:muscle relationship of different groups against each other, and doing this at distinct ages? We could just switch things around, with each group a line on a single panel, with separate panels for ages. Just to make it interesting, let’s have three age groups this time: young (mean – 1SD), average (mean), old (mean + 1SD):

```
#### Groups on a single plot, with facets for different age values ####

avAge <- round(mean(bone_dat$age))
sdAge <- round(sd(bone_dat$age))
youngAge <- avAge - sdAge
oldAge <- avAge + sdAge

bone_pred2 <- data.frame(expand.grid(muscle = seq(50, 99),
                                     age = c(youngAge,
                                             avAge,
                                             oldAge),
                                     groupA = c(0, 1)))

bone_pred2 <- cbind(bone_pred2,
                    predict(mod_bone,
                            newdata = bone_pred2,
                            interval = "confidence"))

ggplot(bone_pred2,
       aes(x = muscle,
           y = fit,
           colour = factor(groupA))) +
  geom_line() +
  facet_grid(. ~ age) +
  theme_classic()
```


The code above gives us the figure below. Interestingly, I think this is the most insightful version yet: bone increases with muscle, and does so at a higher rate for those in group A (i.e., groupA == 1). The positive relationship between bone and muscle diminishes at older ages, but this is only really evident in non-A individuals.

Taking a look at our table of coefficients again, this makes sense:

```
Coefficients:
                    Estimate Std. Error t value Pr(>|t|)
(Intercept)        2.382e+02  6.730e+00  35.402  < 2e-16 ***
muscle             4.636e+00  8.868e-02  52.272  < 2e-16 ***
age               -9.350e-01  1.538e-01  -6.079 1.31e-09 ***
groupA            -1.417e+02  9.517e+00 -14.888  < 2e-16 ***
muscle:age        -7.444e-02  2.027e-03 -36.722  < 2e-16 ***
muscle:groupA      2.213e-01  1.254e-01   1.765   0.0777 .
age:groupA        -3.594e-01  2.175e-01  -1.652   0.0985 .
muscle:age:groupA  9.632e-02  2.867e-03  33.599  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
```

There is a positive muscle × age × group A interaction which, in group A individuals, overrides the negative muscle × age interaction. The main effect of muscle is to increase bone mass (positive slope), while the main effect of age is to decrease it (in this particular visualisation, you can see this because there is essentially an age-related intercept that decreases along the panels).
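To put numbers on that ‘overriding’: we can compute the slope of bone on muscle at a given age for each group directly from the coefficient table. A quick sketch using rounded values from the output above (the function name and the example age are mine):

```
## Slope of bone on muscle at a given age, assembled from the coefficients
slope_muscle <- function(age, groupA) {
  4.636 +                        # muscle
    (-0.07444 * age) +           # muscle:age
    groupA * (0.2213 +           # muscle:groupA
                0.09632 * age)   # muscle:age:groupA
}

slope_muscle(age = 60, groupA = 0)  # ~0.17: muscle barely matters for older non-A individuals
slope_muscle(age = 60, groupA = 1)  # ~6.17: still a strong positive slope in group A
```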

These are just a few of the potential solutions, but I hope they also serve to show how taking the time to explore your options can really help you figure out what’s going on in your analysis. Of course, you shouldn’t really believe these patterns if you can’t see them in your data in the first place!

Unfortunately, I can’t help our poor reader with her decision to use Stata, but these things happen…

Note: if you like this sort of thing, why not sign up for the ‘Advancing in statistical modelling using R’ workshop that I teach with Luc Bussière? Not only will you learn lots of cool stuff about regression (from straightforward linear models up to GLMMs), you’ll also learn tricks for manipulating and tidying data, plotting, and visualising your model fits! Also, it’s held on the bonny banks of Loch Lomond. It is delightful.

—-

Want to know more about understanding and visualising interactions in multiple linear regression? Check out my previous posts:

Understanding three-way interactions between continuous variables

Using small multiples to visualise three-way interactions between 1 continuous and 2 categorical variables

# Understanding 3-way interactions between continuous and categorical variables: small multiples

It can be pretty tricky to interpret the results of statistical analysis sometimes, particularly when just gazing at a table of regression coefficients that includes multiple interactions. I wrote a post recently on visualising these interactions when the variables are continuous; in that instance, I computed the effect of X on Y when the moderator variables (Z, W) were held constant at high and low values (which I set as +/- 1 standard deviation from the mean). This gave 4 different slopes to visualise: low Z & low W, low Z & high W, high Z & low W, and high Z & high W. It was simple enough to plot these on a single figure and see what effect the interaction of Z and W had on the relationship between Y and X.

I had a comment underneath from someone with a similar problem, but where the interacting variables consisted of 1 continuous and 2 categorical variables (rather than 3 continuous variables). Given that we have distinct levels for the moderator variables Z and W, our job is made a little easier, as the analysis provides coefficients for every main effect and interaction. However, each of the categorical moderator variables here has 3 levels, giving 9 different combinations of Y ~ X. While we could take the same approach as last time (creating predicted slopes and plotting them on a single figure), that wouldn’t produce a very intuitive figure. Instead, let’s use ggplot2’s excellent ‘facets’ functionality to produce multiple small figures within a larger one.

This approach is termed ‘small multiples’, a term popularised by statistician and data visualisation guru Edward Tufte. I’ll hand it over to him to describe the thinking behind it:

“Illustrations of postage-stamp size are indexed by category or a label, sequenced over time like the frames of a movie, or ordered by a quantitative variable not used in the single image itself. Information slices are positioned within the eyespan, so that viewers make comparisons at a glance — uninterrupted visual reasoning. Constancy of design puts the emphasis on changes in data, not changes in data frames.”

– Edward Tufte, ‘Envisioning Information’

Each small figure should have the same measures and scale; once you’ve understood the basis of the first figure, you can then move across the others and see how it responds to changes in a third variable (and then a fourth…). The technique is particularly useful for multivariate data because you can compare and contrast changes between the main relationship of interest (Y ~ X) as the values of other variables (Z, W) change. Sounds complex, but it’s really very simple and intuitive, as you’ll see below!

Ok, so the commenter on my previous post included the regression coefficients from her mixed-model analysis, which was specified as lmer(Y ~ X*Z*W + (1|PPX), data = Matrix). W and Z each have 3 levels: low, medium, and high, but these are to be treated as categorical rather than continuous – i.e., we get coefficients for each level. We’ll also disregard the random effects here, as we’re interested in plotting only the fixed effects.

I don’t have the raw data, so I’m simply going to plot the predicted slopes over values of X from -100 to 100; I’ll first make this sequence, and enter all my coefficients as variables:

```
x <- seq(-100, 100)

int_global <- -0.0293
X_coef <- 0.0007
WHigh <- -0.0357
WMedium <- 0.0092
ZHigh <- -0.0491
ZMedium <- -0.0314
X_WHigh <- 0.0007
X_WMedium <- 0.0002
X_ZHigh <- 0.0009
X_ZMedium <- 0.0007
WHigh_ZHigh <- 0.0004
WMedium_ZHigh <- -0.0021
WHigh_ZMedium <- -0.0955
WMedium_ZMedium <- 0.0143
X_WHigh_ZHigh <- -0.0002
X_WMedium_ZHigh <- -0.0004
X_WHigh_ZMedium <- 0.0013
X_WMedium_ZMedium <- -0.0004
```

The reference levels are Low Z and Low W; you can see that we only have coefficients for Medium/High values of these variables, as they are offsets from the reference slope. The underscore in my variable names denotes an interaction; when X is involved, it’s an effect on the slope of Y on X, otherwise it affects the intercept.

Let’s go ahead and predict Y values for each value of X when Z and W are both Low (i.e., using the global intercept and the coefficient for the effect of X on Y):

```
y.WL_ZL <- int_global + (x * X_coef)

df.WL_ZL <- data.frame(x = x,
                       W = "W:Low",
                       Z = "Z:Low",
                       y = y.WL_ZL)
```

Now, let’s change a single variable, and plot Y on X when Z is at ‘Medium’ level (still holding W at ‘Low’):

```
# Change Z

y.WL_ZM <- int_global + ZMedium + (x * X_coef) + (x * X_ZMedium)

df.WL_ZM <- data.frame(x = x,
                       W = "W:Low",
                       Z = "Z:Medium",
                       y = y.WL_ZM)
```

Remember, because the coefficients of Z/W are offsets, we add these effects on top of the reference levels. You’ll notice that I specified the level of Z and W as ‘Z:Low’ and ‘W:Low’; while putting the variable name in the value is redundant, you’ll see later why I’ve done this.

We can go ahead and make mini data frames for each of the different interactions:

```
y.WL_ZH <- int_global + ZHigh + (x * X_coef) + (x * X_ZHigh)

df.WL_ZH <- data.frame(x = x,
                       W = "W:Low",
                       Z = "Z:High",
                       y = y.WL_ZH)

# Change W

y.WM_ZL <- int_global + WMedium + (x * X_coef) + (x * X_WMedium)

df.WM_ZL <- data.frame(x = x,
                       W = "W:Medium",
                       Z = "Z:Low",
                       y = y.WM_ZL)

y.WH_ZL <- int_global + WHigh + (x * X_coef) + (x * X_WHigh)

df.WH_ZL <- data.frame(x = x,
                       W = "W:High",
                       Z = "Z:Low",
                       y = y.WH_ZL)

# Change both

y.WM_ZM <- int_global + WMedium + ZMedium + WMedium_ZMedium +
  (x * X_coef) +
  (x * X_ZMedium) +
  (x * X_WMedium) +
  (x * X_WMedium_ZMedium)

df.WM_ZM <- data.frame(x = x,
                       W = "W:Medium",
                       Z = "Z:Medium",
                       y = y.WM_ZM)

y.WM_ZH <- int_global + WMedium + ZHigh + WMedium_ZHigh +
  (x * X_coef) +
  (x * X_ZHigh) +
  (x * X_WMedium) +
  (x * X_WMedium_ZHigh)

df.WM_ZH <- data.frame(x = x,
                       W = "W:Medium",
                       Z = "Z:High",
                       y = y.WM_ZH)

y.WH_ZM <- int_global + WHigh + ZMedium + WHigh_ZMedium +
  (x * X_coef) +
  (x * X_ZMedium) +
  (x * X_WHigh) +
  (x * X_WHigh_ZMedium)

df.WH_ZM <- data.frame(x = x,
                       W = "W:High",
                       Z = "Z:Medium",
                       y = y.WH_ZM)

y.WH_ZH <- int_global + WHigh + ZHigh + WHigh_ZHigh +
  (x * X_coef) +
  (x * X_ZHigh) +
  (x * X_WHigh) +
  (x * X_WHigh_ZHigh)

df.WH_ZH <- data.frame(x = x,
                       W = "W:High",
                       Z = "Z:High",
                       y = y.WH_ZH)
```

Ok, so we now have a set of mini data frames giving predicted values of Y on X for each combination of Z and W. Not the most elegant solution, but fine for our purposes! Let’s go ahead and concatenate these data frames into one large one, and throw away the individual mini frames:

```
# Concatenate data frames
df.XWZ <- rbind(df.WL_ZL,
                df.WM_ZL,
                df.WH_ZL,
                df.WL_ZM,
                df.WL_ZH,
                df.WM_ZM,
                df.WM_ZH,
                df.WH_ZM,
                df.WH_ZH)

# Remove individual frames
rm(df.WL_ZL,
   df.WM_ZL,
   df.WH_ZL,
   df.WL_ZM,
   df.WL_ZH,
   df.WM_ZM,
   df.WM_ZH,
   df.WH_ZM,
   df.WH_ZH)
```
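As an aside: all that copying and pasting is an easy place for bugs to creep in (I speak from experience). Here’s a more compact sketch that builds the same data frame in one pass; it assumes the coefficient objects defined earlier, and the lookup vectors and matrices (int_W, int_WZ, slp_W, slp_WZ, etc.) are my own invention:

```
## Build all 9 W x Z combinations at once using lookup tables of offsets
lev <- c("Low", "Medium", "High")

# Intercept offsets (the reference level 'Low' contributes 0)
int_W <- c(Low = 0, Medium = WMedium, High = WHigh)
int_Z <- c(Low = 0, Medium = ZMedium, High = ZHigh)
int_WZ <- matrix(c(0, 0,               0,
                   0, WMedium_ZMedium, WHigh_ZMedium,
                   0, WMedium_ZHigh,   WHigh_ZHigh),
                 nrow = 3, dimnames = list(lev, lev))  # rows = W, cols = Z

# Slope offsets (effects on the slope of Y on X)
slp_W <- c(Low = 0, Medium = X_WMedium, High = X_WHigh)
slp_Z <- c(Low = 0, Medium = X_ZMedium, High = X_ZHigh)
slp_WZ <- matrix(c(0, 0,                 0,
                   0, X_WMedium_ZMedium, X_WHigh_ZMedium,
                   0, X_WMedium_ZHigh,   X_WHigh_ZHigh),
                 nrow = 3, dimnames = list(lev, lev))

df.XWZ2 <- expand.grid(x = x, W = lev, Z = lev, stringsAsFactors = FALSE)
df.XWZ2$y <- with(df.XWZ2,
                  int_global + int_W[W] + int_Z[Z] + int_WZ[cbind(W, Z)] +
                    x * (X_coef + slp_W[W] + slp_Z[Z] + slp_WZ[cbind(W, Z)]))
```

(If you use this version for the plot below, you’d still want to set the factor levels of W and Z so the panels appear in Low/Medium/High order rather than alphabetically.)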

Now all that’s left to do is to plot the predicted regression slopes from the analysis!

```
# Plot
library(ggplot2)

ggplot(df.XWZ, aes(x, y)) +
  geom_line() +
  facet_grid(W ~ Z, as.table = FALSE) +
  theme_bw()
```

First, I load the ggplot2 library. The main call to ggplot specifies our data frame (df.XWZ) and the variables to be plotted (x, y). Then, ‘geom_line()’ indicates that I want the observations to be connected by lines. Next, ‘facet_grid’ asks for this figure to be laid out in a grid of panels; ‘W ~ Z’ specifies that these panels are to be laid out in rows for values of W and columns for values of Z. Setting ‘as.table’ to FALSE simply means that the highest level of W is at the top rather than the bottom (as would be the case if it were laid out in table format; as a figure, I find it more intuitive to have the highest level at the top). Finally, ‘theme_bw()’ just gives a nice plain theme that I prefer to ggplot2’s default settings.

There we have it! A small multiples plot to show the results of a multiple regression analysis, with each panel showing the relationship between our response variable (Y) and continuous predictor (X) for a unique combination of our 3-level moderators (Z, W).

You’ll notice here that the facet labels have things like ‘Z:Low’ etc in them, which I coded into the data frames. This is purely because ggplot2 doesn’t automatically label the outer axes (i.e., as Z, W), and I find this easier than remembering which variable I’ve defined as rows/columns…
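As an alternative to baking the variable names into the values themselves, ggplot2’s label_both labeller adds them to the facet strips for you. A quick sketch, assuming a data frame whose W and Z columns hold plain ‘Low’/‘Medium’/‘High’ values (like df.XWZ2 in the aside above):

```
# Let ggplot2 prepend the variable names to the facet labels
ggplot(df.XWZ2, aes(x, y)) +
  geom_line() +
  facet_grid(W ~ Z, as.table = FALSE, labeller = label_both) +
  theme_bw()
```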

Hopefully this is clear – any questions, please leave a comment. Thanks again to Rebekah for her question on my previous post, and for letting me use her analysis as an example here.

—-

Want to know more about understanding and visualising interactions in multiple linear regression? Check out my related posts:

Understanding three-way interactions between continuous variables

Three-way interactions between 2 continuous and 1 categorical variable

—-