Divergence analysis: Part 2

Part 2/2 on how to do a divergence analysis in R.

In the previous post on divergence analyses I covered a relatively straightforward way to figure out at which time points two responses diverge. We simply ran a bunch of t-tests, figured out at which time points the responses were significantly different, and corrected for multiple comparisons. I noted that a downside of this method is the loss of power that comes with the correction. To address this limitation, we now turn to an alternative analysis: a bootstrapped cluster-based permutation analysis.

Again, we use the really cool eyetrackingR package for inspiration. They cover this analysis in depth on their website (see here). There’s also a nice paper by Maris and Oostenveld (2007) that I recommend. I will describe how to do this using tidyverse functions, rather than the eyetrackingR package, because it helps me better understand what I’m doing.

Before we begin, let’s repeat the setup from part 1.


library(tidyverse)
library(broom)

word_recognition <- read_csv("../../static/data/word_recognition.csv", 
  col_types = list(Sex = "c"))

theme_set(theme_light())

word_recognition <- word_recognition %>%
  rename(
    time_from_trial_onset = TimeFromTrialOnset,
    participant = ParticipantName,
    trial = Trial,
    track_loss = TrackLoss,
    animate = Animate
  )

response_window <- filter(word_recognition, time_from_trial_onset >= 15500 & 
    time_from_trial_onset < 21000)

track_loss <- response_window %>%
  group_by(participant, trial) %>%
  summarize(
    n = n(), 
    missing = sum(track_loss)
    ) %>%
  mutate(missing_by_trial = missing / n)

response_time <- response_window %>%
  left_join(track_loss, by = c("participant", "trial")) %>%
  filter(missing_by_trial <= .25) %>%
  mutate(target = if_else(str_detect(trial, "(Spoon|Bottle)"), "Inanimate", 
    "Animate")) %>%
  mutate(
    time_bin = floor(time_from_trial_onset / 100),
    hit = if_else(AOI == "Animate", 1, 0)
  ) %>%
  group_by(participant, target, time_bin) %>%
  summarize(
    samples_in_AOI = sum(hit, na.rm = TRUE),
    samples_total = n() - sum(track_loss, na.rm = TRUE),
    proportion = samples_in_AOI / samples_total
  )

For this analysis we also need to repeat the multiple t-tests we performed in part 1, with a small but important change. In part 1, following the vignette on the eyetrackingR website, we performed independent t-tests rather than paired t-tests. Since the data is actually from a within-subjects design, we should conduct paired t-tests. This creates an extra complexity because we have missing data. We need to check, for each participant and each time bin, whether the ‘proportion’ response on the ‘Animate’ or ‘Inanimate’ trials is missing. Taking the minimum of the two proportions returns NA whenever either one is missing, so we can flag those time bins and exclude them with a filter, like this:


response_time_subset <- response_time %>%
  group_by(participant, time_bin) %>%
  # min() returns NA if the proportion is missing for either condition
  mutate(missing = min(proportion)) %>%
  filter(!is.na(missing)) %>%
  ungroup()
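
A quick sanity check we can run here (optional; this is not part of the original pipeline): after the filter, every remaining participant/time bin combination should contain exactly two rows, one per condition, which is what the paired t-tests below rely on.


response_time_subset %>%
  count(participant, time_bin) %>%
  filter(n != 2)  # should return zero rows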

Now we can conduct the multiple paired t-tests.


tb_analysis <- response_time_subset %>%
  split(.$time_bin) %>%  # one data frame per time bin
  map(~ t.test(proportion ~ target, data = ., paired = TRUE)) %>%
  map_df(tidy, .id = "time_bin") %>%  # one row of tidy results per bin
  as_tibble()

To determine the clusters, we need to set a threshold for what counts as significant. We could simply use the p-value, but we’ll stick to the eyetrackingR vignette, which uses a threshold based on the t-statistic. Specifically, the threshold is the critical t-value returned by the qt() function, given the desired (two-tailed) alpha level and the appropriate degrees of freedom (the number of participants minus one).


# critical two-tailed t-value at alpha = .05, with 27 participants
threshold_t <- qt(p = 1 - .05 / 2, df = 27 - 1)
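
To avoid hardcoding the number of participants, we could instead derive it from the data; n_participants is just an illustrative name:


# same threshold, with the degrees of freedom derived from the data
n_participants <- n_distinct(response_time_subset$participant)
threshold_t <- qt(p = 1 - .05 / 2, df = n_participants - 1)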

Having set the threshold, we can determine the clusters.


tb_analysis <- mutate(tb_analysis,
  # flag time bins whose t-statistic exceeds the threshold
  pass = if_else(abs(statistic) >= threshold_t, 1, 0),
  # mark the first time bin of each run of consecutive passing bins...
  cluster = if_else(pass == 1 & lag(pass, default = 0) == 0, 1, 0),
  # ...then number the runs; bins outside a cluster get NA
  cluster = if_else(pass == 1, cumsum(cluster), NA_real_),
  # the .id column from map_df() is character, so convert it back
  time_bin = as.numeric(time_bin))

clusters <- tb_analysis %>%
  group_by(cluster) %>%
  summarize(
    start = min(time_bin),
    end = max(time_bin),
    sum_statistics = sum(statistic)  # the cluster's summed t-statistics
  ) %>%
  filter(!is.na(cluster))  # drop the group of non-cluster time bins
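
The pass/lag()/cumsum() combination above is the densest part of the pipeline, so here is a minimal sketch of what it does to a toy vector (pass and starts are names made up for this example):


pass <- c(0, 1, 1, 0, 1, 0, 0, 1, 1, 1)
starts <- if_else(pass == 1 & lag(pass, default = 0) == 0, 1, 0)
if_else(pass == 1, cumsum(starts), NA_real_)
#> [1] NA  1  1 NA  2 NA NA  3  3  3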

The main question is whether these clusters are significant. This is where the next part comes in.

The idea is that we’ll shuffle the condition information (‘target’) randomly for each participant. This destroys any effect the condition may have, effectively producing a null effect. We then repeat the paired t-tests and again determine the clusters. We calculate the sum of the statistics for each cluster, keep the largest cluster, and save it in a data frame called clusters_simulation. We repeat this a number of times, resulting in a distribution of cluster statistics: our null distribution. Finally, we compare our observed clusters (from the non-shuffled data) to this null distribution. If an observed cluster is more extreme than almost all of the simulated clusters, we reject the null hypothesis and consider the cluster significant.

The code below demonstrates how this is done. It looks like quite a bit of code, but the majority is copied from the previous code. Some notable bits are the for loop, the mutate() call in which we shuffle the condition variable, and the code that determines the largest cluster.


clusters_simulation <- tibble()  # will collect the null distribution

for (i in 1:100) {
  clusters_simulation <- response_time_subset %>%
    group_by(participant) %>%
    # randomly keep or flip each participant's condition labels
    mutate(target_shuffled = factor(target, 
      labels = sample(c("Animate", "Inanimate")))) %>%
    split(.$time_bin) %>%
    map(~ t.test(proportion ~ target_shuffled, data = ., paired = TRUE)) %>%
    map_df(tidy, .id = "time_bin") %>%
    mutate(
      pass = if_else(abs(statistic) >= threshold_t, 1, 0),
      cluster = if_else(pass == 1 & lag(pass, default = 0) == 0, 1, 0),
      cluster = if_else(pass == 1, cumsum(cluster), NA_real_)
    ) %>%
    group_by(cluster) %>%
    summarize(
      sum_statistics = sum(statistic)
    ) %>%
    mutate(
      # iterations without any clusters contribute a 0 to the distribution
      sum_statistics = if_else(is.na(cluster), 0, sum_statistics),
      sum_statistics_abs = abs(sum_statistics)
    ) %>%
    summarize(
      # keep this iteration's largest cluster in absolute terms
      max_sum = max(sum_statistics_abs)
    ) %>%
    bind_rows(clusters_simulation)
}

As you can see, we loop over the code 100 times, which gives us a null distribution of 100 cluster statistics. That is not enough for a reliable analysis (a thousand or more iterations is common), but it’s enough to illustrate the method.

Note that we shuffle the ‘target’ variable into a new variable called ‘target_shuffled’ by randomly re-labelling it using factor(). This means that we do not fully randomize the data: we only randomize, for each participant, whether the ‘Animate’ and ‘Inanimate’ labels stay the same or are flipped.
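
To see what this relabelling does, here is a toy example. Run it a few times: the labels are either all kept or all flipped, never a mix, because sample() permutes the two labels rather than the elements.


toy <- c("Animate", "Inanimate", "Animate", "Inanimate")
factor(toy, labels = sample(c("Animate", "Inanimate")))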

Finally, we calculate the sum of the t-values for each cluster that we find. If no clusters are found, we set the sum to 0. We then take the cluster with the largest absolute sum of t-values and store it in the clusters_simulation data frame. The result is a distribution of the largest clusters, which we can compare to our observed clusters from the non-shuffled data.
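
One way to inspect this comparison is a histogram of the null distribution with the observed cluster sums marked on it; here is a minimal ggplot2 sketch (the number of bins is an arbitrary choice):


# null distribution of the largest cluster sums, with the observed
# cluster sums from the non-shuffled data as dashed lines
ggplot(clusters_simulation, aes(x = max_sum)) +
  geom_histogram(bins = 20) +
  geom_vline(data = clusters, aes(xintercept = sum_statistics),
    linetype = "dashed")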

To determine the significance, we simply count how many of the simulated clusters are more extreme than each observed cluster and divide that count by the total number of simulated clusters.


clusters <- clusters %>%
  rowwise() %>%
  mutate(
    # count simulated clusters more extreme than the observed cluster,
    # in either direction
    n = sum(clusters_simulation$max_sum > sum_statistics | 
        clusters_simulation$max_sum < -sum_statistics),
    p = n / nrow(clusters_simulation)
  )

clusters

# A tibble: 2 x 6
  cluster start   end sum_statistics     n     p
    <dbl> <dbl> <dbl>          <dbl> <int> <dbl>
1       1   161   192          132.      0     0
2       2   194   207           42.3     0     0

Both of our clusters appear to be significant! None of the 100 simulated clusters was more extreme than either observed cluster, so both p-values come out as 0 (which, with only 100 simulations, really means p < .01).

The nice thing about this method is that you can replace the t-test with any other kind of analysis you want to run, so it’s quite flexible. It’s also non-parametric and it controls for multiple comparisons. Now, the code to run the analysis might seem a bit convoluted, but it’s relatively straightforward to copy it and write some custom functions to streamline it, as sketched below.
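
For example, the entire null-distribution step could be wrapped into a function. This is just a sketch under the assumptions of this post’s data (simulate_null() and its arguments are names I made up here), collecting the iterations with map_df() instead of growing a data frame inside a for loop:


simulate_null <- function(data, threshold, n_sims = 1000) {
  map_df(seq_len(n_sims), function(i) {
    data %>%
      group_by(participant) %>%
      # randomly keep or flip each participant's condition labels
      mutate(target_shuffled = factor(target,
        labels = sample(c("Animate", "Inanimate")))) %>%
      split(.$time_bin) %>%
      map(~ t.test(proportion ~ target_shuffled, data = ., paired = TRUE)) %>%
      map_df(tidy, .id = "time_bin") %>%
      mutate(
        pass = if_else(abs(statistic) >= threshold, 1, 0),
        cluster = if_else(pass == 1 & lag(pass, default = 0) == 0, 1, 0),
        cluster = if_else(pass == 1, cumsum(cluster), NA_real_)
      ) %>%
      group_by(cluster) %>%
      summarize(sum_statistics = sum(statistic)) %>%
      mutate(sum_statistics = if_else(is.na(cluster), 0, sum_statistics)) %>%
      summarize(max_sum = max(abs(sum_statistics)))
  })
}

# usage:
# clusters_simulation <- simulate_null(response_time_subset, threshold_t)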

So that’s the bootstrapped cluster-based permutation analysis (a delightful mouthful) to determine divergences.

References

Maris, E., & Oostenveld, R. (2007). Nonparametric statistical testing of EEG- and MEG-data. Journal of Neuroscience Methods, 164(1), 177–190.