Initialize repository
51
01_intro/exercises.md
Normal file
@ -0,0 +1,51 @@
|
||||
Exercise: Binomial test power simulation
|
||||
================
|
||||
|
||||
## Birth rates
|
||||
|
||||
Kanazawa (2007) claims that beautiful parents have more daughters
|
||||
|
||||
- Plan a study and calculate the sample size necessary to detect a
  deviation from the global 106:100 male-female sex ratio with about
  80% power (a minimal simulation sketch follows the background notes
  below)
|
||||
- Wanted: Subject-matter knowledge
|
||||
- What would be a minimum relevant deviation (effect)?
|
||||
- Considering the literature on birth rates, what would be a realistic
|
||||
deviation?
|
||||
- Some background
|
||||
- <https://en.wikipedia.org/wiki/Human_sex_ratio>
|
||||
- Literature cited there (e.g., Davis, Gottlieb, and Stampnitzky 1998;
|
||||
Mathews and Hamilton 2005)
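
A minimal simulation sketch for this exercise; the alternative ratio and
the candidate sample size `n` are placeholders to be replaced by the
subject-matter choices above, and the null ratio 106:100 corresponds to
a male proportion of 106/206:

``` r
p0 <- 106 / 206   # null hypothesis: 106 males per 100 females
p1 <- 105 / 205   # assumed minimal relevant deviation (placeholder)
n  <- 50000       # candidate number of births (placeholder)

pval <- replicate(2000, {
  males <- rbinom(1, size = n, prob = p1)    # simulate births under p1
  binom.test(males, n = n, p = p0)$p.value   # exact binomial test of p0
})
mean(pval < 0.05)  # simulated power; adjust n until it reaches about 0.80
```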
|
||||
|
||||
### References
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-DavisGottlieb98" class="csl-entry">
|
||||
|
||||
Davis, D. L., M. B. Gottlieb, and J. R. Stampnitzky. 1998. “Reduced
|
||||
Ratio of Male to Female Births in Several Industrial Countries: A
|
||||
Sentinel Health Indicator?” *Journal of the American Medical
|
||||
Association* 279 (13): 1018–23.
|
||||
<https://doi.org/10.1001/jama.279.13.1018>.
|
||||
|
||||
</div>
|
||||
|
||||
<div id="ref-Kanazawa07" class="csl-entry">
|
||||
|
||||
Kanazawa, S. 2007. “Beautiful Parents Have More Daughters: A Further
|
||||
Implication of the Generalized Trivers–Willard Hypothesis (gTWH).”
|
||||
*Journal of Theoretical Biology* 244 (1): 133–40.
|
||||
<https://doi.org/10.1016/j.jtbi.2006.07.017>.
|
||||
|
||||
</div>
|
||||
|
||||
<div id="ref-MathewsHamilton05" class="csl-entry">
|
||||
|
||||
Mathews, T. J., and B. E. Hamilton. 2005. “Trend Analysis of the Sex
|
||||
Ratio at Birth in the United States.” *National Vital Statistics
|
||||
Reports* 53 (20): 1–20.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
BIN
01_intro/powersim-intro.pdf
Normal file
59
02_ttest/exercises-ttest1.md
Normal file
@ -0,0 +1,59 @@
|
||||
Exercise: Power simulation and power curves for t-test
|
||||
================
|
||||
|
||||
## Temporal value asymmetry
|
||||
|
||||
“participants … were asked to imagine that they had agreed to spend 5 hr
|
||||
entering data into a computer and to indicate how much money it would be
|
||||
fair for them to receive. Some participants imagined that they had
|
||||
completed the work 1 month previously, and others imagined that they
|
||||
would complete the work 1 month in the future . . . Participants
|
||||
believed that they should receive 101% more money for work they would do
|
||||
1 month later ($M = \$125.04$) than for identical work that they had
|
||||
done 1 month previously ($M = \$62.20$), $t(119) = 2.22$, $p = .03$,
|
||||
$d = 0.41$” (Caruso, Gilbert, and Wilson 2008, 797)
|
||||
|
||||
### Plan a direct replication
|
||||
|
||||
1. What is a plausible standard deviation? Hint: $d = (M_1 - M_2)/SD$
|
||||
2. What is an interesting minimal effect size (in \$, Euro, or min)?
|
||||
3. Simulate responses for 120 participants in both the *past* and the
|
||||
*future* condition, assuming normal distributions with the same
|
||||
variance. Use the standard deviation and the minimal effect size
|
||||
from 1. and 2.
|
||||
4. Parameter recovery: Repeat the simulation from 3. 2000 times to
|
||||
re-estimate the parameters ($\mu_1, \mu_2, \sigma$) from the
|
||||
simulated responses. Visualize the recovered parameters in box
|
||||
plots.
|
||||
Hint: $SE = 2/\sqrt{n} \cdot SD$, where $n$ is the total sample
|
||||
size.
|
||||
|
||||
``` r
|
||||
t <- t.test(x, y, mu = 0, var.equal = TRUE)
|
||||
c(t$estimate, sd.pool = sqrt(n) / 2 * t$stderr)
|
||||
```
|
||||
|
||||
5. Power simulation: Increase the total sample size to find out the `n`
|
||||
necessary for 80% power for the t-test.
|
||||
6. Power curves:
|
||||
- Write an `R` function that takes sample size `n`, minimal effect
|
||||
`d`, standard deviation `sd`, and number of replications `nrep` as
|
||||
arguments. It should return the simulated power.
|
||||
- Use this function to simulate the power for each combination of 4
|
||||
different standard deviations and 4 sample sizes.
|
||||
- Visualize these power curves in a single plot.
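
A minimal sketch for steps 5 and 6, assuming equal group sizes; the
baseline mean, the minimal effect `d` (in dollars), and the grids of
standard deviations and sample sizes are placeholder values:

``` r
pwrFun <- function(n, d = 30, sd = 150, nrep = 1000) {
  pval <- replicate(nrep, {
    past   <- rnorm(n/2, mean = 60,     sd = sd)  # past condition
    future <- rnorm(n/2, mean = 60 + d, sd = sd)  # future condition
    t.test(future, past, var.equal = TRUE)$p.value
  })
  mean(pval < 0.05)   # simulated power
}

cond <- expand.grid(n = c(120, 240, 480, 960), sd = c(100, 150, 200, 250))
cond$pwr <- mapply(pwrFun, n = cond$n, sd = cond$sd)
lattice::xyplot(pwr ~ n, cond, groups = sd, type = c("g", "b"),
                auto.key = list(corner = c(0, 1)))
```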
|
||||
|
||||
### References
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-CarusoGilbert08" class="csl-entry">
|
||||
|
||||
Caruso, E. M., D. T. Gilbert, and T. D. Wilson. 2008. “A Wrinkle in
|
||||
Time: Asymmetric Valuation of Past and Future Events.” *Psychological
|
||||
Science* 19 (8): 796–801.
|
||||
<https://doi.org/10.1111/j.1467-9280.2008.02159.x>.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
31
02_ttest/exercises-ttest2.md
Normal file
@ -0,0 +1,31 @@
|
||||
Exercise: Power simulation and power curves for t-test
|
||||
================
|
||||
|
||||
## Directed reading activities
|
||||
|
||||
Plan a replication of the “directed reading activities” study on
|
||||
<https://jasp-stats.github.io/jasp-data-library/myChapters/chapter_2.html>
|
||||
|
||||
1. Which parameter value represents the minimum relevant effect that
|
||||
the test should be able to detect? How large should this effect be?
|
||||
Briefly say what it means in the context of the study.
|
||||
|
||||
2. What would be a plausible value for the standard deviation?
|
||||
|
||||
3. Simulate and draw power curves that depend on the effect size and
|
||||
$n$. Use at least five different effect sizes and at least four
|
||||
different sample sizes. The minimum relevant effect size should be
|
||||
among the simulation conditions.
|
||||
|
||||
4. From inspecting the power curves, what $n$ in each group is required
|
||||
to detect the minimum relevant effect with at least 80% power?
|
||||
|
||||
5. Create a renderable R script or an R Markdown file that includes
|
||||
|
||||
- a header with title, author, date
|
||||
- at least one section head line
|
||||
- the homework questions and your answers
|
||||
- the simulation code and output
|
||||
- the plot of the power curves.
|
||||
|
||||
Render the R or Rmd file to HTML.
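
For step 5, the course scripts use "spin"-style R files with a roxygen
header; here is a minimal sketch (the file name `homework.R`, title, and
date are placeholders):

``` r
## Contents of homework.R -- header, a section, text, and code
#' ---
#' title: "Directed reading activities: Power curves"
#' author: "Your Name"
#' date: "2026-01-22"
#' ---

#' ## Answers
#' 1. The minimum relevant effect is ... because ...

# simulation code, output, and the power-curve plot go here

## Render to HTML from the console (works for .R and .Rmd files):
# rmarkdown::render("homework.R", output_format = "html_document")
```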
|
||||
90
02_ttest/powersim-ttest.R
Normal file
@ -0,0 +1,90 @@
|
||||
#' ---
|
||||
#' title: "Independent t-test: Power simulation and power curves"
|
||||
#' author: ""
|
||||
#' date: "Last modified: 2026-01-09"
|
||||
#' ---
|
||||
|
||||
|
||||
#' ## Application context
|
||||
|
||||
#'
|
||||
#' Listening experiment
|
||||
#'
|
||||
#' - Task of each participant is to repeatedly adjust the frequency of a
|
||||
#' comparison tone to sound equal in pitch to a 1000-Hz standard tone
|
||||
#' - Mean adjustment estimates the point of subjective equality $\mu$
|
||||
#' - Two participants will take part in the experiment providing adjustments
|
||||
#' $X$ and $Y$
|
||||
#' - Goal is to detect a difference between their points of subjective
|
||||
#' equality $\mu_x$ and $\mu_y$ of 4 Hz
|
||||
#'
|
||||
|
||||
#' ## Model
|
||||
|
||||
#'
|
||||
#' Assumptions
|
||||
#'
|
||||
#' - $X_1, \ldots, X_n \sim N(\mu_x, \sigma_x^2)$ i.i.d.
|
||||
#' - $Y_1, \ldots, Y_m \sim N(\mu_y, \sigma_y^2)$ i.i.d.
|
||||
#' - both samples independent
|
||||
#' - $\sigma_x^2 = \sigma_y^2$ but unknown
|
||||
#'
|
||||
#' Hypothesis
|
||||
#'
|
||||
#' - H$_0\colon~ \mu_x - \mu_y = \delta = 0$
|
||||
#'
|
||||
|
||||
#' ## Power simulation
|
||||
#+ cache = TRUE
|
||||
|
||||
n <- 110; m <- 110
|
||||
pval <- replicate(2000, {
|
||||
x <- rnorm(n, mean = 1000 + 4, sd = 10) # Participant 1 responses
|
||||
y <- rnorm(m, mean = 1000, sd = 10) # Participant 2 responses
|
||||
t.test(x, y, mu = 0, var.equal = TRUE)$p.value
|
||||
})
|
||||
mean(pval < 0.05)
|
||||
|
||||
#' ## Power curves
|
||||
#'
|
||||
#' Turn into a function of n and effect size
|
||||
|
||||
pwrFun <- function(n = 30, d = 4, sd = 10, nrep = 50) {
|
||||
m <- n  # equal group sizes
|
||||
pval <- replicate(nrep, {
|
||||
x <- rnorm(n, mean = 1000 + d, sd = sd)
|
||||
y <- rnorm(m, mean = 1000, sd = sd)
|
||||
t.test(x, y, mu = 0, var.equal = TRUE)$p.value
|
||||
})
|
||||
mean(pval < 0.05)
|
||||
}
|
||||
|
||||
#'
|
||||
#' Set up conditions and call power function
|
||||
#+ cache = TRUE
|
||||
|
||||
cond <- expand.grid(d = 0:5,
|
||||
n = c(50, 75, 100, 125))
|
||||
cond$pwr <- mapply(pwrFun, n = cond$n, d = cond$d, MoreArgs = list(nrep = 500))
|
||||
|
||||
## Plot results
|
||||
lattice::xyplot(pwr ~ d, cond, groups = n, type = c("g", "b"),
|
||||
auto.key = list(corner = c(0, 1)))
|
||||
|
||||
#' ## Parameter recovery
|
||||
|
||||
n <- 100
|
||||
|
||||
out <- replicate(2000, {
|
||||
x <- rnorm(n, mean = 1000 + 4, sd = 10)
|
||||
y <- rnorm(n, mean = 1000, sd = 10)
|
||||
t <- t.test(x, y, mu = 0, var.equal = TRUE)
|
||||
c(
|
||||
effect = -as.numeric(diff(t$estimate)),
|
||||
sd.pool = t$stderr * sqrt(2*n) / 2
|
||||
)
|
||||
})
|
||||
|
||||
rowMeans(out)
|
||||
boxplot(t(out))
|
||||
|
||||
99
02_ttest/powersim-ttest.md
Normal file
@ -0,0 +1,99 @@
|
||||
Independent t-test: Power simulation and power curves
|
||||
================
|
||||
Last modified: 2026-01-09
|
||||
|
||||
## Application context
|
||||
|
||||
Listening experiment
|
||||
|
||||
- Task of each participant is to repeatedly adjust the frequency of a
|
||||
comparison tone to sound equal in pitch to a 1000-Hz standard tone
|
||||
- Mean adjustment estimates the point of subjective equality $\mu$
|
||||
- Two participants will take part in the experiment providing
|
||||
adjustments $X$ and $Y$
|
||||
- Goal is to detect a difference between their points of subjective
|
||||
equality $\mu_x$ and $\mu_y$ of 4 Hz
|
||||
|
||||
## Model
|
||||
|
||||
Assumptions
|
||||
|
||||
- $X_1, \ldots, X_n \sim N(\mu_x, \sigma_x^2)$ i.i.d.
|
||||
- $Y_1, \ldots, Y_m \sim N(\mu_y, \sigma_y^2)$ i.i.d.
|
||||
- both samples independent
|
||||
- $\sigma_x^2 = \sigma_y^2$ but unknown
|
||||
|
||||
Hypothesis
|
||||
|
||||
- H$_0\colon~ \mu_x - \mu_y = \delta = 0$
|
||||
|
||||
## Power simulation
|
||||
|
||||
``` r
|
||||
n <- 110; m <- 110
|
||||
pval <- replicate(2000, {
|
||||
x <- rnorm(n, mean = 1000 + 4, sd = 10) # Participant 1 responses
|
||||
y <- rnorm(m, mean = 1000, sd = 10) # Participant 2 responses
|
||||
t.test(x, y, mu = 0, var.equal = TRUE)$p.value
|
||||
})
|
||||
mean(pval < 0.05)
|
||||
```
|
||||
|
||||
## [1] 0.831
|
||||
|
||||
## Power curves
|
||||
|
||||
Turn into a function of n and effect size
|
||||
|
||||
``` r
|
||||
pwrFun <- function(n = 30, d = 4, sd = 10, nrep = 50) {
|
||||
m <- n  # equal group sizes
|
||||
pval <- replicate(nrep, {
|
||||
x <- rnorm(n, mean = 1000 + d, sd = sd)
|
||||
y <- rnorm(m, mean = 1000, sd = sd)
|
||||
t.test(x, y, mu = 0, var.equal = TRUE)$p.value
|
||||
})
|
||||
mean(pval < 0.05)
|
||||
}
|
||||
```
|
||||
|
||||
Set up conditions and call power function
|
||||
|
||||
``` r
|
||||
cond <- expand.grid(d = 0:5,
|
||||
n = c(50, 75, 100, 125))
|
||||
cond$pwr <- mapply(pwrFun, n = cond$n, d = cond$d, MoreArgs = list(nrep = 500))
|
||||
|
||||
## Plot results
|
||||
lattice::xyplot(pwr ~ d, cond, groups = n, type = c("g", "b"),
|
||||
auto.key = list(corner = c(0, 1)))
|
||||
```
|
||||
|
||||
*(Figure: simulated power against effect size `d`, grouped by sample size `n`.)*
|
||||
|
||||
## Parameter recovery
|
||||
|
||||
``` r
|
||||
n <- 100
|
||||
|
||||
out <- replicate(2000, {
|
||||
x <- rnorm(n, mean = 1000 + 4, sd = 10)
|
||||
y <- rnorm(n, mean = 1000, sd = 10)
|
||||
t <- t.test(x, y, mu = 0, var.equal = TRUE)
|
||||
c(
|
||||
effect = -as.numeric(diff(t$estimate)),
|
||||
sd.pool = t$stderr * sqrt(2*n) / 2
|
||||
)
|
||||
})
|
||||
|
||||
rowMeans(out)
|
||||
```
|
||||
|
||||
## effect sd.pool
|
||||
## 3.978400 9.993142
|
||||
|
||||
``` r
|
||||
boxplot(t(out))
|
||||
```
|
||||
|
||||
*(Figure: box plots of the recovered effect and pooled standard deviation.)*
|
||||
BIN
02_ttest/powersim-ttest_files/figure-gfm/unnamed-chunk-3-1.png
Normal file
|
After Width: | Height: | Size: 7.8 KiB |
BIN
02_ttest/powersim-ttest_files/figure-gfm/unnamed-chunk-4-1.png
Normal file
|
After Width: | Height: | Size: 3.3 KiB |
BIN
02_ttest/powersim-ttest_files/figure-gfm/unnamed-chunk-5-1.png
Normal file
|
After Width: | Height: | Size: 4.0 KiB |
BIN
02_ttest/powersim-ttest_files/figure-gfm/unnamed-chunk-5-2.png
Normal file
|
After Width: | Height: | Size: 3.9 KiB |
62
03_anova/exercises-anova.md
Normal file
@ -0,0 +1,62 @@
|
||||
Exercise: Testing the interaction in two-by-two ANOVA
|
||||
================
|
||||
|
||||
## Anchoring and adjustment
|
||||
|
||||
Items and anchor values (Jacowitz and Kahneman 1995)
|
||||
|
||||
- How tall is the largest coast redwood in the world? \[20, 168 m\]
- How many member states belong to the United Nations? \[14, 127 members\]
- What is the maximum speed of a house cat in km/h? \[11, 48 km/h\]
|
||||
|
||||
### Research question
|
||||
|
||||
- Does time pressure (respond within 7s) increase the anchor effect?
|
||||
|
||||
### Suggest a minimum relevant effect
|
||||
|
||||
- Go to <https://apps.mathpsy.uni-tuebingen.de/fw/pars2eta/>
|
||||
- Fix the parameters of the ANOVA model
|
||||
|
||||
### Some background
|
||||
|
||||
- Open anchoring quest (Röseler et al. 2022), <https://osf.io/ygnvb/>
|
||||
|
||||
### Plan the study
|
||||
|
||||
- Pick one of the three items
|
||||
- Parameter recovery
|
||||
- Make a data frame for the two-by-two design
|
||||
- With the parameter values determined before, simulate responses
|
||||
- Re-estimate the parameters
|
||||
- Power simulation
|
||||
- Calculate the sample size necessary to detect the time-pressure
|
||||
effect
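
A minimal sketch of the power-simulation step, assuming a
between-subjects two-by-two design (anchor: low/high, time pressure:
no/yes); all parameter values below are placeholders for the values
fixed with the app above:

``` r
n   <- 200                                   # total sample size (placeholder)
dat <- expand.grid(anchor   = factor(c("low", "high")),
                   pressure = factor(c("no", "yes")))
dat <- dat[rep(1:4, each = n/4), ]

X     <- model.matrix(~ anchor * pressure, dat)
beta  <- c(40, 60, 0, 20)   # mu, anchor, pressure, interaction (placeholders)
means <- X %*% beta
sigma <- 50                 # residual standard deviation (placeholder)

pval <- replicate(2000, {
  y <- means + rnorm(n, sd = sigma)
  m <- aov(y ~ anchor * pressure, dat)
  summary(m)[[1]]$"Pr(>F)"[3]                # test of the interaction
})
mean(pval < 0.05)
```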
|
||||
|
||||
### Bonus task: Verify the plausibility of your model
|
||||
|
||||
- Download the raw data from the open anchoring quest project
|
||||
- Estimate $\sigma$ and compare it to your value
|
||||
|
||||
### References
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-JacowitzKahneman95" class="csl-entry">
|
||||
|
||||
Jacowitz, K. E., and D. Kahneman. 1995. “Measures of Anchoring in
|
||||
Estimation Tasks.” *Personality and Social Psychology Bulletin* 21 (11):
|
||||
1161–66. <https://doi.org/10.1177/01461672952111004>.
|
||||
|
||||
</div>
|
||||
|
||||
<div id="ref-RoeselerWeber22" class="csl-entry">
|
||||
|
||||
Röseler, L., L. Weber, K. A. C. Helgerth, E. Stich, M. Günther, P.
|
||||
Tegethoff, F. S. Wagner, et al. 2022. “OpAQ: Open Anchoring Quest,
|
||||
Version 1.1.50.97.” <https://doi.org/10.17605/OSF.IO/YGNVB>.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
93
03_anova/powersim-anova.R
Normal file
@ -0,0 +1,93 @@
|
||||
#' ---
|
||||
#' title: "Two-by-two ANOVA: Power simulation of the interaction test"
|
||||
#' author: ""
|
||||
#' date: "Last modified: 2026-01-09"
|
||||
#' ---
|
||||
|
||||
#' ## Application context
|
||||
|
||||
#'
|
||||
#' Effect of fertilizers
|
||||
#'
|
||||
#' In an experiment, two fertilizers (A and B, each either low or high dose)
#' will be combined and the yield of peas (Y) in kg will be observed. The goal
#' is to detect an increase of the Fertilizer-A effect by an additional 12 kg
#' when combined with a high dose of Fertilizer B (interaction effect).
|
||||
#'
|
||||
|
||||
#+ echo = FALSE
|
||||
dat <- data.frame(
|
||||
A = rep(1:2, each = 2),
|
||||
B = rep(1:2, times = 2),
|
||||
y = c(30, 30 + 5, 30 + 30, 30 + 30 + 5 + 12)
|
||||
)
|
||||
par(mai = c(.6, .6, .1, .1), mgp = c(2, .7, 0))
|
||||
plot(y ~ A, dat, type = "n", xlim = c(0.8, 2.2), ylim = c(20, 80),
|
||||
xlab = "Fertilizer A", ylab = "Yield (kg)", xaxt = "n")
|
||||
lines(y ~ A, dat[dat$B == 1, ], col = "darkblue")
|
||||
lines(y ~ A, dat[dat$B == 2, ], col = "darkblue")
|
||||
lines(1:2, c(30 + 5, 30 + 30 + 5), lty = 2, col = "darkblue")
|
||||
axis(1, 1:2, c("low", "high"))
|
||||
arrows(2, 30 + 30 + 6, 2, 30 + 30 + 5 + 11, code = 3, length = 0.1,
|
||||
col = "darkgray")
|
||||
text(c(1, 2, 1, 2, 2.07), c(27, 55, 40, 80, 65 + 6),
|
||||
c(expression(mu), expression(mu + alpha[2]),
|
||||
expression(mu + beta[2]),
|
||||
expression(mu + alpha[2] + beta[2] + (alpha * beta)[22]),
|
||||
"12 kg")
|
||||
)
|
||||
text(1.5, 27 + 30/2, "Fertilizer B: low", srt = 21, col = "darkgray")
|
||||
text(1.5, 38 + (30 + 12)/2, "Fertilizer B: high", srt = 32, col = "darkgray")
|
||||
|
||||
#' ## Model
|
||||
|
||||
#'
|
||||
#' Assumptions
|
||||
#'
|
||||
#' - $Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} +
|
||||
#' \varepsilon_{ijk}$
|
||||
#' - $\varepsilon_{ijk} \sim N(0, \sigma^2) \text{ i.i.d.}$
|
||||
#' - $i = 1, \dots, I$; $j = 1, \dots, J$; $k = 1, \dots, K$
|
||||
#' - $\alpha_1 = \beta_1 := 0$
|
||||
#'
|
||||
#' Hypothesis
|
||||
#'
|
||||
#' - H$_0^{AB}\colon~ (\alpha\beta)_{ij} = 0 \text{ for all } i,j$
|
||||
#'
|
||||
|
||||
#' ## Setup
|
||||
|
||||
set.seed(1704)
|
||||
n <- 96
|
||||
dat <- data.frame(
|
||||
A = factor(rep(1:2, each = n/2), labels = c("low", "high")),
|
||||
B = factor(rep(rep(1:2, each = n/4), 2), labels = c("low", "high"))
|
||||
)
|
||||
X <- model.matrix(~ A*B, dat)
|
||||
unique(X)
|
||||
beta <- c(mu = 30, a2 = 30, b2 = 5, ab22 = 12)
|
||||
means <- X %*% beta
|
||||
|
||||
lattice::xyplot(I(means + rnorm(n, sd = 10)) ~ A, dat, groups = B,
|
||||
type = c("g", "p", "a"), auto.key = TRUE, ylab = "Yield (kg)")
|
||||
|
||||
#' ## Parameter recovery
|
||||
#+ cache = TRUE
|
||||
|
||||
out <- replicate(2000, {
|
||||
y <- means + rnorm(n, sd = 10) # y = mu + a + b + ab + e
|
||||
m <- aov(y ~ A * B, dat)
|
||||
c(coef(m), sigma = sigma(m))
|
||||
})
|
||||
boxplot(t(out))
|
||||
|
||||
#' ## Power simulation
|
||||
#+ cache = TRUE
|
||||
|
||||
pval <- replicate(2000, {
|
||||
y <- means + rnorm(n, sd = 10)
|
||||
m <- aov(y ~ A*B, dat)
|
||||
summary(m)[[1]]$"Pr(>F)"[3] # test of interaction
|
||||
})
|
||||
mean(pval < 0.05)
|
||||
|
||||
82
03_anova/powersim-anova.md
Normal file
@ -0,0 +1,82 @@
|
||||
Two-by-two ANOVA: Power simulation of the interaction test
|
||||
================
|
||||
Last modified: 2026-01-09
|
||||
|
||||
## Application context
|
||||
|
||||
Effect of fertilizers
|
||||
|
||||
In an experiment, two fertilizers (A and B, each either low or high
dose) will be combined and the yield of peas (Y) in kg will be observed.
The goal is to detect an increase of the Fertilizer-A effect by an
additional 12 kg when combined with a high dose of Fertilizer B
(interaction effect).
|
||||
|
||||
*(Figure: expected cell means, yield in kg against Fertilizer A, with separate lines for Fertilizer B and the 12-kg interaction effect marked.)*
|
||||
|
||||
## Model
|
||||
|
||||
Assumptions
|
||||
|
||||
- $Y_{ijk} = \mu + \alpha_i + \beta_j + (\alpha\beta)_{ij} + \varepsilon_{ijk}$
|
||||
- $\varepsilon_{ijk} \sim N(0, \sigma^2) \text{ i.i.d.}$
|
||||
- $i = 1, \dots, I$; $j = 1, \dots, J$; $k = 1, \dots, K$
|
||||
- $\alpha_1 = \beta_1 := 0$
|
||||
|
||||
Hypothesis
|
||||
|
||||
- H$_0^{AB}\colon~ (\alpha\beta)_{ij} = 0 \text{ for all } i,j$
|
||||
|
||||
## Setup
|
||||
|
||||
``` r
|
||||
set.seed(1704)
|
||||
n <- 96
|
||||
dat <- data.frame(
|
||||
A = factor(rep(1:2, each = n/2), labels = c("low", "high")),
|
||||
B = factor(rep(rep(1:2, each = n/4), 2), labels = c("low", "high"))
|
||||
)
|
||||
X <- model.matrix(~ A*B, dat)
|
||||
unique(X)
|
||||
```
|
||||
|
||||
## (Intercept) Ahigh Bhigh Ahigh:Bhigh
|
||||
## 1 1 0 0 0
|
||||
## 25 1 0 1 0
|
||||
## 49 1 1 0 0
|
||||
## 73 1 1 1 1
|
||||
|
||||
``` r
|
||||
beta <- c(mu = 30, a2 = 30, b2 = 5, ab22 = 12)
|
||||
means <- X %*% beta
|
||||
|
||||
lattice::xyplot(I(means + rnorm(n, sd = 10)) ~ A, dat, groups = B,
|
||||
type = c("g", "p", "a"), auto.key = TRUE, ylab = "Yield (kg)")
|
||||
```
|
||||
|
||||
*(Figure: simulated yields by Fertilizer A, grouped by Fertilizer B.)*
|
||||
|
||||
## Parameter recovery
|
||||
|
||||
``` r
|
||||
out <- replicate(2000, {
|
||||
y <- means + rnorm(n, sd = 10) # y = mu + a + b + ab + e
|
||||
m <- aov(y ~ A * B, dat)
|
||||
c(coef(m), sigma = sigma(m))
|
||||
})
|
||||
boxplot(t(out))
|
||||
```
|
||||
|
||||
*(Figure: box plots of the recovered ANOVA parameters.)*
|
||||
|
||||
## Power simulation
|
||||
|
||||
``` r
|
||||
pval <- replicate(2000, {
|
||||
y <- means + rnorm(n, sd = 10)
|
||||
m <- aov(y ~ A*B, dat)
|
||||
summary(m)[[1]]$"Pr(>F)"[3] # test of interaction
|
||||
})
|
||||
mean(pval < 0.05)
|
||||
```
|
||||
|
||||
## [1] 0.817
|
||||
BIN
03_anova/powersim-anova_files/figure-gfm/unnamed-chunk-1-1.png
Normal file
|
After Width: | Height: | Size: 8.4 KiB |
BIN
03_anova/powersim-anova_files/figure-gfm/unnamed-chunk-2-1.png
Normal file
|
After Width: | Height: | Size: 5.5 KiB |
BIN
03_anova/powersim-anova_files/figure-gfm/unnamed-chunk-3-1.png
Normal file
|
After Width: | Height: | Size: 5.1 KiB |
33
04_ancova/exercises-ancova1.md
Normal file
@ -0,0 +1,33 @@
|
||||
Exercise: Analysis and power simulation for baseline/follow-up
|
||||
measurements
|
||||
================
|
||||
|
||||
## Shoulder pain and acupuncture
|
||||
|
||||
1. Reanalyze the original data
|
||||
- Re-estimate the ANCOVA model for the Kleinhenz et al. (1999)
  [data](../data/kleinhenz.txt) (a reanalysis sketch follows below)
|
||||
2. Run a power simulation for a replication study
|
||||
1. Draw plausible pre-CMS values
|
||||
2. Specify the minimum relevant average treatment effect (ATE)
|
||||
3. Set the remaining parameters to plausible values
|
||||
4. What is the sample size required for the test to detect the
|
||||
effect with 80% power?
|
||||
5. How robust is the power simulation when you repeat it with a new
|
||||
set of pre-CMS values? Try it!
|
||||
6. Recover the parameters of the ANCOVA model
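
A minimal sketch for step 1, assuming the usual ANCOVA parameterization
`post ~ pre + grp` with the columns of
[kleinhenz.txt](../data/kleinhenz.txt):

``` r
dat <- read.table("../data/kleinhenz.txt", header = TRUE)
dat$grp <- factor(dat$grp, levels = c("plac", "acu"))

m <- lm(post ~ pre + grp, data = dat)   # baseline-adjusted treatment effect
summary(m)
confint(m)["grpacu", ]                  # 95% CI for the ATE (acu vs. placebo)
```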
|
||||
|
||||
### References
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-KleinhenzStreitberger99" class="csl-entry">
|
||||
|
||||
Kleinhenz, J., K. Streitberger, J. Windeler, A. Güßbacher, G. Mavridis,
|
||||
and E. Martin. 1999. “Randomised Clinical Trial Comparing the Effects of
|
||||
Acupuncture and a Newly Designed Placebo Needle in Rotator Cuff
|
||||
Tendinitis.” *Pain* 83 (2): 235–41.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
45
04_ancova/exercises-ancova2.md
Normal file
@ -0,0 +1,45 @@
|
||||
Exercise: Analysis and power simulation for baseline/follow-up
|
||||
measurements
|
||||
================
|
||||
|
||||
## MASS anorexia data
|
||||
|
||||
1. Analyze the original data:
|
||||
- In R, see ?MASS::anorexia
|
||||
- Data preparation
|
||||
|
||||
``` r
|
||||
data(anorexia, package = "MASS")
|
||||
dat <-
|
||||
subset(anorexia, Treat != "Cont") |> # exclude control group
|
||||
droplevels() # drop empty factor levels
|
||||
lbs2kg <- 0.4535924
|
||||
dat$Prewt <- lbs2kg * dat$Prewt # to kg
|
||||
dat$Postwt <- lbs2kg * dat$Postwt
|
||||
lattice::xyplot(Postwt ~ Prewt, dat, groups = Treat,
|
||||
type = c("g", "r", "p"), auto.key = TRUE)
|
||||
```
|
||||
|
||||
- Estimate the average treatment effect (ATE) for FT relative to CBT (a sketch follows at the end of this exercise).
|
||||
- What is the 95% CI for the ATE?
|
||||
- What are the pre- and post-weight means for the two groups?
|
||||
- What are the baseline-adjusted means for the two groups?
|
||||
|
||||
2. Run a power simulation for a replication study:
|
||||
- Draw plausible pre-weights.
|
||||
- Specify the minimum relevant effect.
|
||||
- Set the remaining parameters to plausible values.
|
||||
- What is the sample size required for the test to detect the effect
|
||||
with 80% power?
|
||||
- How robust is the power simulation when you repeat it with a new
|
||||
set of pre-weights? Try it!
|
||||
- Recover the parameters of the ANCOVA model.
|
||||
3. Create a renderable R script or an R Markdown file that includes
|
||||
- a header with title, author, date
|
||||
- at least one section head line
|
||||
- the homework questions and your answers
|
||||
- the R code, output, and plots (if any)
|
||||
|
||||
Render the R or Rmd file to HTML.
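
A minimal sketch for the ATE and its CI in step 1, assuming the ANCOVA
parameterization `Postwt ~ Prewt + Treat` (weights are left in lbs here;
convert as in the preparation code above if preferred):

``` r
data(anorexia, package = "MASS")
dat <- droplevels(subset(anorexia, Treat != "Cont"))   # CBT vs. FT only

m <- lm(Postwt ~ Prewt + Treat, data = dat)            # ANCOVA
summary(m)
confint(m)["TreatFT", ]          # 95% CI for the ATE (FT relative to CBT)

## Baseline-adjusted means at the overall mean pre-weight
predict(m, newdata = data.frame(Prewt = mean(dat$Prewt),
                                Treat = levels(dat$Treat)))
```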
|
||||
|
||||
### Reference
|
||||
BIN
04_ancova/intro-ancova.pdf
Normal file
50
05_mixed1/exercises-pwrlmm.md
Normal file
@ -0,0 +1,50 @@
|
||||
Exercise: Power simulation for longitudinal data
|
||||
================
|
||||
|
||||
## Risperidone vs. haloperidol and schizophrenia
|
||||
|
||||
``` r
|
||||
dat <- read.table("../data/moeller.csv", header = TRUE, sep = ",")
|
||||
dat$id <- factor(dat$id)
|
||||
dat$treat <- factor(dat$treat, levels = c("risp", "halo"))
|
||||
|
||||
lattice::xyplot(pans ~ week, data = dat, groups = treat, type = c("g", "p", "a"), auto.key = TRUE)
|
||||
```
|
||||
|
||||
1) Analyze the original data from [moeller.csv](../data/moeller.csv):
|
||||
- `pans`: Positive and Negative Symptom Scale for schizophrenia
|
||||
|
||||
- `treat`: medication group
|
||||
|
||||
- `risp`: atypical neuroleptic risperidone
|
||||
- `halo`: conventional neuroleptic haloperidol
|
||||
|
||||
- What is the sample size in each treatment group?
|
||||
|
||||
- Estimate the by-group random-slope model (a sketch follows at the end of this exercise).
|
||||
|
||||
- What are the estimates for the fixed effects and variance
|
||||
components?
|
||||
|
||||
- Interpret the interaction effect.
|
||||
|
||||
- Test the interaction effect.
|
||||
2) Run a power simulation for a replication study:
|
||||
- Set up a data frame containing the study design and sample size.
|
||||
|
||||
- Specify the minimum relevant effect.
|
||||
|
||||
- Set the fixed effects and variance components to plausible values.
|
||||
|
||||
- How many participants are required for the test of the interaction
|
||||
to detect the specified effect with a power of 80%?
|
||||
|
||||
- Recover the parameters of the by-group random-slope model for one
|
||||
simulated data set.
|
||||
3) Create a renderable R script or an R Markdown file that includes
|
||||
- a header with title, author, date
|
||||
- at least one section head line
|
||||
- the questions from above and your answers
|
||||
- the R code, output, and plots (if any)
|
||||
|
||||
Render the R or Rmd file to HTML.
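
A minimal sketch for step 1, assuming the by-group random-slope model
`pans ~ week * treat + (week | id)` and testing the interaction with a
likelihood-ratio test, as in the worked longitudinal example:

``` r
library(lme4)

dat <- read.table("../data/moeller.csv", header = TRUE, sep = ",")
dat$id    <- factor(dat$id)
dat$treat <- factor(dat$treat, levels = c("risp", "halo"))

m0 <- lmer(pans ~ week + treat + (week | id), data = dat, REML = FALSE)
m1 <- lmer(pans ~ week * treat + (week | id), data = dat, REML = FALSE)
anova(m0, m1)             # likelihood-ratio test of the week:treat interaction

fixef(m1)                 # fixed effects
VarCorr(m1)               # variance components
```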
|
||||
BIN
05_mixed1/intro-lmm.pdf
Normal file
185
05_mixed1/powersim-lmm.R
Normal file
@ -0,0 +1,185 @@
|
||||
#' ---
|
||||
#' title: "Power for mixed-effects models"
|
||||
#' author: ""
|
||||
#' date: "Last modified: 2026-01-09"
|
||||
#' bibliography: ../lit.bib
|
||||
#' ---
|
||||
|
||||
library(lattice)
|
||||
library(lme4)
|
||||
|
||||
#' # Reanalysis
|
||||
|
||||
#' ## Application context: Depression and type of diagnosis
|
||||
#'
|
||||
#' - @ReisbyGram77 studied the effect of imipramine on 66 inpatients
|
||||
#' treated for depression
|
||||
#' - Depression was measured with the Hamilton depression rating scale
|
||||
#' - Patients were classified as endogenously or non-endogenously depressed
|
||||
#' - Depression was measured weekly for 6 time points
|
||||
#'
|
||||
#' Data: [reisby.txt](../data/reisby.txt)
|
||||
|
||||
dat <- read.table("../data/reisby.txt", header = TRUE)
|
||||
dat$id <- factor(dat$id)
|
||||
dat$diag <- factor(dat$diag, levels = c("nonen", "endog"))
|
||||
dat <- na.omit(dat) # drop missing values
|
||||
head(dat, n = 13)
|
||||
|
||||
xyplot(hamd ~ week | id, data = dat, type=c("g", "r", "p"),
|
||||
pch = 16, layout = c(11, 6), ylab = "HDRS score", xlab = "Time (week)")
|
||||
|
||||
|
||||
#' ## Random-intercept model
|
||||
#'
|
||||
#' $$
|
||||
#' \begin{aligned}
|
||||
#' Y_{ij} &= \beta_0 + \beta_1 \, \mathtt{week}_{ij}
|
||||
#' + \upsilon_{0i}
|
||||
#' + \varepsilon_{ij} \\
|
||||
#' \upsilon_{0i} &\sim N(0, \sigma^2_{\upsilon_0}) \text{ i.i.d.} \\
|
||||
#' \mathbf{\varepsilon}_i &\sim N(0, \, \sigma^2) \text{ i.i.d.} \\
|
||||
#' i &= 1, \ldots, I, \quad j = 1, \ldots n_i
|
||||
#' \end{aligned}
|
||||
#' $$
|
||||
|
||||
m1 <- lmer(hamd ~ week + (1 | id), data = dat, REML = FALSE)
|
||||
summary(m1)
|
||||
|
||||
#' ## Random-slope model
|
||||
#'
|
||||
#' $$
|
||||
#' \begin{aligned}
|
||||
#' Y_{ij} &= \beta_0 + \beta_1 \, \mathtt{week}_{ij}
|
||||
#' + \upsilon_{0i} + \upsilon_{1i}\, \mathtt{week}_{ij}
|
||||
#' + \varepsilon_{ij} \\
|
||||
#' \begin{pmatrix} \upsilon_{0i}\\ \upsilon_{1i} \end{pmatrix} &\sim
|
||||
#' N \left(\begin{pmatrix} 0\\ 0 \end{pmatrix}, \,
|
||||
#' \mathbf{\Sigma}_\upsilon =
|
||||
#' \begin{pmatrix}
|
||||
#' \sigma^2_{\upsilon_0} & \sigma_{\upsilon_0 \upsilon_1} \\
|
||||
#' \sigma_{\upsilon_0 \upsilon_1} & \sigma^2_{\upsilon_1} \\
|
||||
#' \end{pmatrix} \right)
|
||||
#' \text{ i.i.d.} \\
|
||||
#' \mathbf{\varepsilon}_i &\sim N(\mathbf{0}, \, \sigma^2 \mathbf{I}_{n_i})
|
||||
#' \text{ i.i.d.} \\
|
||||
#' i &= 1, \ldots, I, \quad j = 1, \ldots n_i
|
||||
#' \end{aligned}
|
||||
#' $$
|
||||
|
||||
m2 <- lmer(hamd ~ week + (week | id), data = dat, REML = FALSE)
|
||||
summary(m2)
|
||||
|
||||
#' ## Partial pooling
|
||||
|
||||
#+ fig.height = 10, fig.width = 6.5, fig.align = "center"
|
||||
indiv <- unlist(
|
||||
sapply(unique(dat$id),
|
||||
function(i) predict(lm(hamd ~ week, dat[dat$id == i, ])))
|
||||
)
|
||||
|
||||
xyplot(hamd + predict(m2, re.form = ~ 0) + predict(m2) + indiv ~ week | id,
|
||||
data = dat, type = c("p", "l", "l", "l"), pch = 16, grid = TRUE,
|
||||
distribute.type = TRUE, layout = c(11, 6), ylab = "HDRS score",
|
||||
xlab = "Time (week)",
|
||||
# customize colors
|
||||
col = c("#434F4F", "#3CB4DC", "#FF6900", "#78004B"),
|
||||
# add legend
|
||||
key = list(space = "top", columns = 3,
|
||||
text = list(c("Population", "Mixed model", "Within-subject")),
|
||||
lines = list(col = c("#3CB4DC", "#FF6900", "#78004B")))
|
||||
)
|
||||
|
||||
#' ## By-group random-slope model
|
||||
|
||||
m3 <- lmer(hamd ~ week + diag + (week | id), data = dat, REML = FALSE)
|
||||
m4 <- lmer(hamd ~ week * diag + (week | id), data = dat, REML = FALSE)
|
||||
anova(m3, m4)
|
||||
|
||||
#' ## Means and predicted HDRS score by group
|
||||
|
||||
dat2 <- aggregate(hamd ~ week + diag, dat, mean)
|
||||
dat2$m4 <- predict(m4, newdata = dat2, re.form = ~ 0)
|
||||
|
||||
plot(m4 ~ week, dat2[dat2$diag == "endog", ], type = "l",
|
||||
ylim=c(0, 28), xlab="Week", ylab = "HDRS score")
|
||||
lines(m4 ~ week, dat2[dat2$diag == "nonen", ], lty = 2)
|
||||
points(hamd ~ week, dat2[dat2$diag == "endog", ], pch = 16)
|
||||
points(hamd ~ week, dat2[dat2$diag == "nonen", ], pch = 21, bg = "white")
|
||||
legend("topright", c("Endogenous", "Non endogenous"),
|
||||
lty = 1:2, pch = c(16, 21), pt.bg = "white", bty = "n")
|
||||
|
||||
#' ## Gory details
|
||||
|
||||
fixef(m4)
|
||||
getME(m4, "theta")
|
||||
t(chol(VarCorr(m4)$id))[lower.tri(diag(2), diag = TRUE)] / sigma(m4)
|
||||
|
||||
#' # Power simulation
|
||||
|
||||
#' ## Setup
|
||||
|
||||
## Study design and sample sizes
|
||||
n_week <- 6
|
||||
n_subj <- 80
|
||||
n <- n_week * n_subj
|
||||
|
||||
dat <- data.frame(
|
||||
id = factor(rep(seq_len(n_subj), each = n_week)),
|
||||
week = rep(0:(n_week - 1), times = n_subj),
|
||||
treat = factor(rep(0:1, each = n/2), labels = c("ctr", "trt"))
|
||||
)
|
||||
|
||||
## Fixed effects and variance components
|
||||
beta <- c(22, -2, 0, -1)
|
||||
Su <- matrix(c(12, -1.5, -1.5, 2), nrow = 2)
|
||||
se <- 3.5
|
||||
|
||||
#' ## Power
|
||||
|
||||
#+ cache = TRUE, warning = FALSE
|
||||
pval <- replicate(200, {
|
||||
|
||||
# Data generation
|
||||
means <- model.matrix( ~ week * treat, dat) %*% beta
|
||||
ranu <- MASS::mvrnorm(n_subj, mu = c(0, 0), Sigma = Su)
|
||||
e <- rnorm(n_subj * n_week, mean = 0, sd = se)
|
||||
|
||||
y <- means + ranu[dat$id, 1] + ranu[dat$id, 2] * dat$week + e
|
||||
|
||||
# Fitting model to test H0
|
||||
m0 <- lmer(y ~ week + treat + (1 + week | id), data = dat, REML = FALSE)
|
||||
m1 <- lmer(y ~ week * treat + (1 + week | id), data = dat, REML = FALSE)
|
||||
anova(m0, m1)["m1", "Pr(>Chisq)"]
|
||||
}
|
||||
)
|
||||
|
||||
mean(pval < 0.05)
|
||||
|
||||
#' ## Parameter recovery
|
||||
|
||||
#+ cache = TRUE, warning = FALSE
|
||||
par <- replicate(200, {
|
||||
|
||||
means <- model.matrix( ~ week * treat, dat) %*% beta
|
||||
ranu <- MASS::mvrnorm(n_subj, mu = c(0, 0), Sigma = Su)
|
||||
e <- rnorm(n_subj * n_week, mean = 0, sd = se)
|
||||
|
||||
y <- means + ranu[dat$id, 1] + ranu[dat$id, 2] * dat$week + e
|
||||
|
||||
m1 <- lmer(y ~ week * treat + (1 + week | id), data = dat, REML = FALSE)
|
||||
list(fixef = fixef(m1), theta = getME(m1, "theta"), sigma = sigma(m1))
|
||||
}, simplify = FALSE
|
||||
)
|
||||
|
||||
rowMeans(sapply(par, function(x) x$fixef))
|
||||
rowMeans(sapply(par, function(x) x$theta))
|
||||
mean(sapply(par, function(x) x$sigma))
|
||||
|
||||
beta
|
||||
Lt <- chol(Su)
|
||||
t(Lt)[lower.tri(Lt, diag = TRUE)] / se
|
||||
se
|
||||
|
||||
#' ### References
|
||||
|
||||
340
05_mixed1/powersim-lmm.md
Normal file
@ -0,0 +1,340 @@
|
||||
Power for mixed-effects models
|
||||
================
|
||||
Last modified: 2026-01-09
|
||||
|
||||
``` r
|
||||
library(lattice)
|
||||
library(lme4)
|
||||
```
|
||||
|
||||
# Reanalysis
|
||||
|
||||
## Application context: Depression and type of diagnosis
|
||||
|
||||
- Reisby et al. (1977) studied the effect of imipramine on 66 inpatients
|
||||
treated for depression
|
||||
- Depression was measured with the Hamilton depression rating scale
|
||||
- Patients were classified as endogenously or non-endogenously depressed
|
||||
- Depression was measured weekly for 6 time points
|
||||
|
||||
Data: [reisby.txt](../data/reisby.txt)
|
||||
|
||||
``` r
|
||||
dat <- read.table("../data/reisby.txt", header = TRUE)
|
||||
dat$id <- factor(dat$id)
|
||||
dat$diag <- factor(dat$diag, levels = c("nonen", "endog"))
|
||||
dat <- na.omit(dat) # drop missing values
|
||||
head(dat, n = 13)
|
||||
```
|
||||
|
||||
## id hamd week diag endweek
|
||||
## 1 101 26 0 nonen 0
|
||||
## 2 101 22 1 nonen 0
|
||||
## 3 101 18 2 nonen 0
|
||||
## 4 101 7 3 nonen 0
|
||||
## 5 101 4 4 nonen 0
|
||||
## 6 101 3 5 nonen 0
|
||||
## 7 103 33 0 nonen 0
|
||||
## 8 103 24 1 nonen 0
|
||||
## 9 103 15 2 nonen 0
|
||||
## 10 103 24 3 nonen 0
|
||||
## 11 103 15 4 nonen 0
|
||||
## 12 103 13 5 nonen 0
|
||||
## 13 104 29 0 endog 0
|
||||
|
||||
``` r
|
||||
xyplot(hamd ~ week | id, data = dat, type=c("g", "r", "p"),
|
||||
pch = 16, layout = c(11, 6), ylab = "HDRS score", xlab = "Time (week)")
|
||||
```
|
||||
|
||||
*(Figure: HDRS scores over weeks with within-subject regression lines, one panel per patient.)*
|
||||
|
||||
## Random-intercept model
|
||||
|
||||
$$
|
||||
\begin{aligned}
|
||||
Y_{ij} &= \beta_0 + \beta_1 \, \mathtt{week}_{ij}
|
||||
+ \upsilon_{0i}
|
||||
+ \varepsilon_{ij} \\
|
||||
\upsilon_{0i} &\sim N(0, \sigma^2_{\upsilon_0}) \text{ i.i.d.} \\
|
||||
\mathbf{\varepsilon}_i &\sim N(0, \, \sigma^2) \text{ i.i.d.} \\
|
||||
i &= 1, \ldots, I, \quad j = 1, \ldots n_i
|
||||
\end{aligned}
|
||||
$$
|
||||
|
||||
``` r
|
||||
m1 <- lmer(hamd ~ week + (1 | id), data = dat, REML = FALSE)
|
||||
summary(m1)
|
||||
```
|
||||
|
||||
## Linear mixed model fit by maximum likelihood ['lmerMod']
|
||||
## Formula: hamd ~ week + (1 | id)
|
||||
## Data: dat
|
||||
##
|
||||
## AIC BIC logLik -2*log(L) df.resid
|
||||
## 2293.2 2308.9 -1142.6 2285.2 371
|
||||
##
|
||||
## Scaled residuals:
|
||||
## Min 1Q Median 3Q Max
|
||||
## -3.1739 -0.5876 -0.0342 0.5465 3.5297
|
||||
##
|
||||
## Random effects:
|
||||
## Groups Name Variance Std.Dev.
|
||||
## id (Intercept) 16.16 4.019
|
||||
## Residual 19.04 4.363
|
||||
## Number of obs: 375, groups: id, 66
|
||||
##
|
||||
## Fixed effects:
|
||||
## Estimate Std. Error t value
|
||||
## (Intercept) 23.5518 0.6385 36.88
|
||||
## week -2.3757 0.1350 -17.60
|
||||
##
|
||||
## Correlation of Fixed Effects:
|
||||
## (Intr)
|
||||
## week -0.524
|
||||
|
||||
## Random-slope model
|
||||
|
||||
$$
|
||||
\begin{aligned}
|
||||
Y_{ij} &= \beta_0 + \beta_1 \, \mathtt{week}_{ij}
|
||||
+ \upsilon_{0i} + \upsilon_{1i}\, \mathtt{week}_{ij}
|
||||
+ \varepsilon_{ij} \\
|
||||
\begin{pmatrix} \upsilon_{0i}\\ \upsilon_{1i} \end{pmatrix} &\sim
|
||||
N \left(\begin{pmatrix} 0\\ 0 \end{pmatrix}, \,
|
||||
\mathbf{\Sigma}_\upsilon =
|
||||
\begin{pmatrix}
|
||||
\sigma^2_{\upsilon_0} & \sigma_{\upsilon_0 \upsilon_1} \\
|
||||
\sigma_{\upsilon_0 \upsilon_1} & \sigma^2_{\upsilon_1} \\
|
||||
\end{pmatrix} \right)
|
||||
\text{ i.i.d.} \\
|
||||
\mathbf{\varepsilon}_i &\sim N(\mathbf{0}, \, \sigma^2 \mathbf{I}_{n_i})
|
||||
\text{ i.i.d.} \\
|
||||
i &= 1, \ldots, I, \quad j = 1, \ldots n_i
|
||||
\end{aligned}
|
||||
$$
|
||||
|
||||
``` r
|
||||
m2 <- lmer(hamd ~ week + (week | id), data = dat, REML = FALSE)
|
||||
summary(m2)
|
||||
```
|
||||
|
||||
## Linear mixed model fit by maximum likelihood ['lmerMod']
|
||||
## Formula: hamd ~ week + (week | id)
|
||||
## Data: dat
|
||||
##
|
||||
## AIC BIC logLik -2*log(L) df.resid
|
||||
## 2231.0 2254.6 -1109.5 2219.0 369
|
||||
##
|
||||
## Scaled residuals:
|
||||
## Min 1Q Median 3Q Max
|
||||
## -2.7460 -0.5016 0.0332 0.5177 3.6834
|
||||
##
|
||||
## Random effects:
|
||||
## Groups Name Variance Std.Dev. Corr
|
||||
## id (Intercept) 12.631 3.554
|
||||
## week 2.079 1.442 -0.28
|
||||
## Residual 12.216 3.495
|
||||
## Number of obs: 375, groups: id, 66
|
||||
##
|
||||
## Fixed effects:
|
||||
## Estimate Std. Error t value
|
||||
## (Intercept) 23.5769 0.5456 43.22
|
||||
## week -2.3771 0.2087 -11.39
|
||||
##
|
||||
## Correlation of Fixed Effects:
|
||||
## (Intr)
|
||||
## week -0.449
|
||||
|
||||
## Partial pooling
|
||||
|
||||
``` r
|
||||
indiv <- unlist(
|
||||
sapply(unique(dat$id),
|
||||
function(i) predict(lm(hamd ~ week, dat[dat$id == i, ])))
|
||||
)
|
||||
|
||||
xyplot(hamd + predict(m2, re.form = ~ 0) + predict(m2) + indiv ~ week | id,
|
||||
data = dat, type = c("p", "l", "l", "l"), pch = 16, grid = TRUE,
|
||||
distribute.type = TRUE, layout = c(11, 6), ylab = "HDRS score",
|
||||
xlab = "Time (week)",
|
||||
# customize colors
|
||||
col = c("#434F4F", "#3CB4DC", "#FF6900", "#78004B"),
|
||||
# add legend
|
||||
key = list(space = "top", columns = 3,
|
||||
text = list(c("Population", "Mixed model", "Within-subject")),
|
||||
lines = list(col = c("#3CB4DC", "#FF6900", "#78004B")))
|
||||
)
|
||||
```
|
||||
|
||||
*(Figure: observed HDRS scores with population, mixed-model, and within-subject predictions, one panel per patient.)*
|
||||
|
||||
## By-group random-slope model
|
||||
|
||||
``` r
|
||||
m3 <- lmer(hamd ~ week + diag + (week | id), data = dat, REML = FALSE)
|
||||
m4 <- lmer(hamd ~ week * diag + (week | id), data = dat, REML = FALSE)
|
||||
anova(m3, m4)
|
||||
```
|
||||
|
||||
## Data: dat
|
||||
## Models:
|
||||
## m3: hamd ~ week + diag + (week | id)
|
||||
## m4: hamd ~ week * diag + (week | id)
|
||||
## npar AIC BIC logLik -2*log(L) Chisq Df Pr(>Chisq)
|
||||
## m3 7 2228.9 2256.4 -1107.5 2214.9
|
||||
## m4 8 2230.9 2262.3 -1107.5 2214.9 0.0042 1 0.9486
|
||||
|
||||
## Means and predicted HDRS score by group
|
||||
|
||||
``` r
|
||||
dat2 <- aggregate(hamd ~ week + diag, dat, mean)
|
||||
dat2$m4 <- predict(m4, newdata = dat2, re.form = ~ 0)
|
||||
|
||||
plot(m4 ~ week, dat2[dat2$diag == "endog", ], type = "l",
|
||||
ylim=c(0, 28), xlab="Week", ylab = "HDRS score")
|
||||
lines(m4 ~ week, dat2[dat2$diag == "nonen", ], lty = 2)
|
||||
points(hamd ~ week, dat2[dat2$diag == "endog", ], pch = 16)
|
||||
points(hamd ~ week, dat2[dat2$diag == "nonen", ], pch = 21, bg = "white")
|
||||
legend("topright", c("Endogenous", "Non endogenous"),
|
||||
lty = 1:2, pch = c(16, 21), pt.bg = "white", bty = "n")
|
||||
```
|
||||
|
||||
*(Figure: observed group means and model-predicted HDRS scores by week for endogenous and non-endogenous patients.)*
|
||||
|
||||
## Gory details
|
||||
|
||||
``` r
|
||||
fixef(m4)
|
||||
```
|
||||
|
||||
## (Intercept) week diagendog week:diagendog
|
||||
## 22.47626332 -2.36568746 1.98802087 -0.02705576
|
||||
|
||||
``` r
|
||||
getME(m4, "theta")
|
||||
```
|
||||
|
||||
## id.(Intercept) id.week.(Intercept) id.week
|
||||
## 0.9760823 -0.1175194 0.3951992
|
||||
|
||||
``` r
|
||||
t(chol(VarCorr(m4)$id))[lower.tri(diag(2), diag = TRUE)] / sigma(m4)
|
||||
```
|
||||
|
||||
## [1] 0.9760823 -0.1175194 0.3951992
|
||||
|
||||
# Power simulation
|
||||
|
||||
## Setup
|
||||
|
||||
``` r
|
||||
## Study design and sample sizes
|
||||
n_week <- 6
|
||||
n_subj <- 80
|
||||
n <- n_week * n_subj
|
||||
|
||||
dat <- data.frame(
|
||||
id = factor(rep(seq_len(n_subj), each = n_week)),
|
||||
week = rep(0:(n_week - 1), times = n_subj),
|
||||
treat = factor(rep(0:1, each = n/2), labels = c("ctr", "trt"))
|
||||
)
|
||||
|
||||
## Fixed effects and variance components
|
||||
beta <- c(22, -2, 0, -1)
|
||||
Su <- matrix(c(12, -1.5, -1.5, 2), nrow = 2)
|
||||
se <- 3.5
|
||||
```
|
||||
|
||||
## Power
|
||||
|
||||
``` r
|
||||
pval <- replicate(200, {
|
||||
|
||||
# Data generation
|
||||
means <- model.matrix( ~ week * treat, dat) %*% beta
|
||||
ranu <- MASS::mvrnorm(n_subj, mu = c(0, 0), Sigma = Su)
|
||||
e <- rnorm(n_subj * n_week, mean = 0, sd = se)
|
||||
|
||||
y <- means + ranu[dat$id, 1] + ranu[dat$id, 2] * dat$week + e
|
||||
|
||||
# Fitting model to test H0
|
||||
m0 <- lmer(y ~ week + treat + (1 + week | id), data = dat, REML = FALSE)
|
||||
m1 <- lmer(y ~ week * treat + (1 + week | id), data = dat, REML = FALSE)
|
||||
anova(m0, m1)["m1", "Pr(>Chisq)"]
|
||||
}
|
||||
)
|
||||
|
||||
mean(pval < 0.05)
|
||||
```
|
||||
|
||||
## [1] 0.805
|
||||
|
||||
## Parameter recovery
|
||||
|
||||
``` r
|
||||
par <- replicate(200, {
|
||||
|
||||
means <- model.matrix( ~ week * treat, dat) %*% beta
|
||||
ranu <- MASS::mvrnorm(n_subj, mu = c(0, 0), Sigma = Su)
|
||||
e <- rnorm(n_subj * n_week, mean = 0, sd = se)
|
||||
|
||||
y <- means + ranu[dat$id, 1] + ranu[dat$id, 2] * dat$week + e
|
||||
|
||||
m1 <- lmer(y ~ week * treat + (1 + week | id), data = dat, REML = FALSE)
|
||||
list(fixef = fixef(m1), theta = getME(m1, "theta"), sigma = sigma(m1))
|
||||
}, simplify = FALSE
|
||||
)
|
||||
|
||||
rowMeans(sapply(par, function(x) x$fixef))
|
||||
```
|
||||
|
||||
## (Intercept) week treattrt week:treattrt
|
||||
## 22.00069056 -1.98276116 0.03001846 -1.03298775
|
||||
|
||||
``` r
|
||||
rowMeans(sapply(par, function(x) x$theta))
|
||||
```
|
||||
|
||||
## id.(Intercept) id.week.(Intercept) id.week
|
||||
## 0.9767601 -0.1180799 0.3763657
|
||||
|
||||
``` r
|
||||
mean(sapply(par, function(x) x$sigma))
|
||||
```
|
||||
|
||||
## [1] 3.479607
|
||||
|
||||
``` r
|
||||
beta
|
||||
```
|
||||
|
||||
## [1] 22 -2 0 -1
|
||||
|
||||
``` r
|
||||
Lt <- chol(Su)
|
||||
t(Lt)[lower.tri(Lt, diag = TRUE)] / se
|
||||
```
|
||||
|
||||
## [1] 0.9897433 -0.1237179 0.3846546
|
||||
|
||||
``` r
|
||||
se
|
||||
```
|
||||
|
||||
## [1] 3.5
|
||||
|
||||
### References
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-ReisbyGram77" class="csl-entry">
|
||||
|
||||
Reisby, N., L. F. Gram, P. Bech, A. Nagy, G. O. Petersen, J. Ortmann, I.
|
||||
Ibsen, et al. 1977. “Imipramine: Clinical Effects and Pharmacokinetic
|
||||
Variability.” *Psychopharmacology* 54: 263–72.
|
||||
<https://doi.org/10.1007/BF00426574>.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
BIN
05_mixed1/powersim-lmm_files/figure-gfm/unnamed-chunk-2-1.png
Normal file
|
After Width: | Height: | Size: 16 KiB |
BIN
05_mixed1/powersim-lmm_files/figure-gfm/unnamed-chunk-5-1.png
Normal file
|
After Width: | Height: | Size: 30 KiB |
BIN
05_mixed1/powersim-lmm_files/figure-gfm/unnamed-chunk-7-1.png
Normal file
|
After Width: | Height: | Size: 4.6 KiB |
81
06_mixed2/exercises-datsimlmm.md
Normal file
@ -0,0 +1,81 @@
|
||||
Exercises: Data simulation for crossed random-effects models
|
||||
================
|
||||
|
||||
- Change the data simulation by Baayen, Davidson, and Bates (2008) for
|
||||
$N = 30$ subjects instead of only 3
|
||||
- You can use the following script and adjust it accordingly (a sketch
  of one possible adjustment follows the script)
- You can choose whether you want to use model matrices or create the
  vectors “manually”
|
||||
|
||||
``` r
|
||||
library(lattice)
|
||||
library(lme4)
|
||||
|
||||
#--------------- (1) Create data frame ----------------------------------------
|
||||
datsim <- expand.grid(subject = factor(c("s1" , "s2" , "s3" )),
|
||||
item = factor(c("w1" , "w2" , "w3" )),
|
||||
soa = factor(c("long" , "short" ))) |>
|
||||
sort_by(~ subject)
|
||||
|
||||
#--------------- (2) Define parameters ----------------------------------------
|
||||
beta0 <- 522.22
|
||||
beta1 <- -19
|
||||
|
||||
sw <- 21
|
||||
sy0 <- 24
|
||||
sy1 <- 7
|
||||
ry <- -0.7
|
||||
se <- 9
|
||||
|
||||
#--------------- (3) Create vectors and simulate data -------------------------
|
||||
# Fixed effects
|
||||
b0 <- rep(beta0, 18)
|
||||
b1 <- rep(rep(c(0, beta1), each = 3), 3)
|
||||
|
||||
# Draw random effects
|
||||
w <- rep(rnorm(3, mean = 0, sd = sw), 6)
|
||||
e <- rnorm(18, mean = 0, sd = se)
|
||||
|
||||
# Bivariate normal distribution
|
||||
sig <- matrix(c(sy0^2, ry * sy0 * sy1, ry * sy0 * sy1, sy1^2), 2, 2)
|
||||
y01 <- MASS::mvrnorm(3, mu = c(0, 0), Sigma = sig)
|
||||
y0 <- rep(y01[,1], each = 6)
|
||||
y1 <- rep(c(0, y01[1,2],
|
||||
0, y01[2,2],
|
||||
0, y01[3,2]), each = 3)
|
||||
|
||||
datsim$rt <- b0 + b1 + w + y0 + y1 + e
|
||||
|
||||
#--------------- (4) Simulate data using model matrices -----------------------
|
||||
X <- model.matrix( ~ soa, datsim)
|
||||
Z <- model.matrix( ~ 0 + item + subject + subject:soa, datsim,
|
||||
contrasts.arg = list(subject = contrasts(datsim$subject,
|
||||
contrasts = FALSE)))
|
||||
|
||||
# Fixed effects
|
||||
beta <- c(beta0, beta1)
|
||||
# Random effects
|
||||
u <- c(w = unique(w),
|
||||
y0 = y01[,1],
|
||||
y1 = y01[,2])
|
||||
|
||||
datsim$rt2 <- X %*% beta + Z %*% u + e
|
||||
|
||||
#--------------- (5) Visualize simulated data ---------------------------------
|
||||
xyplot(rt ~ soa | subject, datsim, group = item, type = "b", layout = c(3, 1))
|
||||
```
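
One possible adjustment, sketched with the model-matrix approach for
$N = 30$ subjects and the same three items; the parameter values are
those defined in the script above:

``` r
library(lattice)

n_subj <- 30
n_item <- 3

datsim <- expand.grid(subject = factor(paste0("s", seq_len(n_subj))),
                      item    = factor(paste0("w", seq_len(n_item))),
                      soa     = factor(c("long", "short"))) |>
  sort_by(~ subject)

# Parameters as in the script above
beta0 <- 522.22; beta1 <- -19
sw <- 21; sy0 <- 24; sy1 <- 7; ry <- -0.7; se <- 9
sig <- matrix(c(sy0^2, ry * sy0 * sy1, ry * sy0 * sy1, sy1^2), 2, 2)

# Model matrices
X <- model.matrix(~ soa, datsim)
Z <- model.matrix(~ 0 + item + subject + subject:soa, datsim,
                  contrasts.arg = list(subject = contrasts(datsim$subject,
                                                           contrasts = FALSE)))

# Fixed and random effects
beta <- c(beta0, beta1)
w    <- rnorm(n_item, mean = 0, sd = sw)                  # item effects
y01  <- MASS::mvrnorm(n_subj, mu = c(0, 0), Sigma = sig)  # subject intercepts, slopes
u    <- c(w, y01[, 1], y01[, 2])                          # order matches Z columns
e    <- rnorm(nrow(datsim), mean = 0, sd = se)

datsim$rt <- X %*% beta + Z %*% u + e

xyplot(rt ~ soa | subject, datsim, group = item, type = "b", layout = c(6, 5))
```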
|
||||
|
||||
### Reference
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-Baayen08" class="csl-entry">
|
||||
|
||||
Baayen, R. H., D. J. Davidson, and D. M. Bates. 2008. “Mixed-Effects
|
||||
Modeling with Crossed Random Effects for Subjects and Items.” *Journal
|
||||
of Memory and Language* 59 (4): 390–412.
|
||||
<https://doi.org/10.1016/j.jml.2007.12.005>.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
67
06_mixed2/exercises.md
Normal file
@ -0,0 +1,67 @@
|
||||
Exercise: Power simulation for LMM
|
||||
================
|
||||
|
||||
## Physical healing as a function of perceived time
|
||||
|
||||
Aungle and Langer (2023) investigate how perceived time influences
|
||||
physical healing
|
||||
|
||||
- They used cupping to induce bruises on 33 subjects, then took a
  picture, waited for 28 min, and took another picture
- Subjective time was manipulated to feel like 14, 28, or 56 min
- The pre and post pictures were presented to 25 raters, who rated the
  amount of healing on a 10-point scale with 0 = not at all healed,
  5 = somewhat healed, 10 = completely healed
- Subjects participated in all three conditions over a two-week period
|
||||
|
||||
Data: [healing.RData](../data/healing.RData)
|
||||
|
||||
``` r
|
||||
load("../data/healing.RData")
|
||||
|
||||
str(dat)
|
||||
|
||||
# Subject ID
|
||||
dat$Subject <- factor(dat$Subject)
|
||||
# Rater ID
|
||||
dat$ResponseId <- factor(dat$ResponseId)
|
||||
```
|
||||
|
||||
1. Visualize the data.
|
||||
|
||||
- Aggregate the data over Raters and plot the data for each subject
|
||||
using `lattice::xyplot()`
|
||||
- Aggregate the data over Subjects and plot one panel for each rater
|
||||
- How would you choose the random effects for a model testing
  healing over the three conditions?
|
||||
|
||||
2. Fit the model you think fits the experimental design best (a sketch follows at the end of this exercise)
|
||||
|
||||
3. Test the effects of condition
|
||||
|
||||
4. Run a power simulation for a replication study:
|
||||
|
||||
- Set up a data frame containing the study design and sample size.
|
||||
|
||||
- Specify the minimum relevant effects.
|
||||
|
||||
- Set the fixed effects and variance components to plausible values.
|
||||
|
||||
- How many participants are required to detect the specified effect
|
||||
with a power of 80%?
|
||||
|
||||
- Recover the parameters of the model for one simulated data set.
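
A minimal sketch for steps 2 and 3, assuming crossed random intercepts
for subjects and raters; the rating and condition column names
(`Healing`, `Condition`) are assumptions, so check them against
`str(dat)` first:

``` r
library(lme4)

load("../data/healing.RData")
dat$Subject    <- factor(dat$Subject)
dat$ResponseId <- factor(dat$ResponseId)

## Column names Healing and Condition are assumed; adjust after str(dat)
m0 <- lmer(Healing ~ 1         + (1 | Subject) + (1 | ResponseId),
           data = dat, REML = FALSE)
m1 <- lmer(Healing ~ Condition + (1 | Subject) + (1 | ResponseId),
           data = dat, REML = FALSE)
anova(m0, m1)   # likelihood-ratio test of the condition effects
```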
|
||||
|
||||
### Reference
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-Aungle23" class="csl-entry">
|
||||
|
||||
Aungle, P., and E. Langer. 2023. “Physical Healing as a Function of
|
||||
Perceived Time.” *Scientific Reports* 13 (1): 22432.
|
||||
<https://doi.org/10.1038/s41598-023-50009-3>.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
BIN
06_mixed2/intro-datsimlmm.pdf
Normal file
51
README.md
Normal file
@ -0,0 +1,51 @@
|
||||
Power simulations
|
||||
================
|
||||
Nora Wickelmaier
|
||||
January 21–22, 2026
|
||||
|
||||
## Schedule
|
||||
|
||||
| Day | Time | Topic | Exercises |
|
||||
|:----|:------------|:-------------------------------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------------------------------------------|
|
||||
| Wed | 9:00–10:30 | [Simulation-based power analysis](01_intro/powersim-intro.pdf) | [Binomial test power simulation](01_intro/exercises.md) |
|
||||
| | 10:45–12:15 | [Power (curves) for t-tests](02_ttest/powersim-ttest.md) | [Temporal value asymmetry](02_ttest/exercises-ttest1.md), [Directed reading activities](02_ttest/exercises-ttest2.md) |
|
||||
| | 13:00–14:00 | [Power for ANOVAs](03_anova/powersim-anova.md) | [Testing the interaction in two-by-two ANOVA](03_anova/exercises-anova.md) |
|
||||
| Thu | 9:00–10:30 | [Power for baseline/follow-up measurements](04_ancova/intro-ancova.pdf) | [Shoulder pain and acupuncture](04_ancova/exercises-ancova1.md), [MASS anorexia data](04_ancova/exercises-ancova2.md) |
|
||||
| | 10:45–12:15 | [Introduction to LMMs](05_mixed1/intro-lmm.pdf), [Power for longitudinal data analysis](05_mixed1/powersim-lmm.md) | [Risperidone vs. haloperidol and schizophrenia](05_mixed1/exercises-pwrlmm.md) |
|
||||
| | 13:00–14:00 | [Data simulation for crossed random-effects models](06_mixed2/intro-datsimlmm.pdf) | [Data simulation for crossed random-effects models](06_mixed2/exercises-datsimlmm.md), [Physical healing](06_mixed2/exercises.md) |
|
||||
|
||||
## Content
|
||||
|
||||
- Power and the significance filter
|
||||
- Simulation-based power analysis with R
|
||||
- Drawing power curves
|
||||
- Power for t-tests, ANOVA, ANCOVA, and mixed-effects models
|
||||
|
||||
## Prerequisites
|
||||
|
||||
- Introductory knowledge of statistics
|
||||
- Basic knowledge of R
|
||||
|
||||
## Software
|
||||
|
||||
Participants will need to have installed:
|
||||
|
||||
- A current R version (<https://CRAN.R-project.org/>)
|
||||
- An IDE for R (like RStudio or VSCode) or a text editor with syntax
|
||||
highlighting (like Vim or Notepad++)
|
||||
- Additional R package:
|
||||
- lme4: <https://CRAN.R-project.org/package=lme4>
|
||||
|
||||
## Literature
|
||||
|
||||
<div id="refs" class="references csl-bib-body hanging-indent">
|
||||
|
||||
<div id="ref-Wickelmaier22" class="csl-entry">
|
||||
|
||||
Wickelmaier, F. 2022. “Simulating the Power of Statistical Tests: A
|
||||
Collection of R Examples.” *ArXiv*.
|
||||
<https://doi.org/10.48550/arXiv.2110.09836>.
|
||||
|
||||
</div>
|
||||
|
||||
</div>
|
||||
BIN
data/healing.RData
Normal file
57
data/kleinhenz.txt
Normal file
@ -0,0 +1,57 @@
|
||||
# Reference: Kleinhenz et al., Pain 83 (1999) 235-241
|
||||
# Source: Vickers, A. J., & Altman, D. G., BMJ 323 (2001) 1123-1124
|
||||
|
||||
pre post grp
|
||||
35 35 plac
|
||||
54 37 plac
|
||||
35 40 plac
|
||||
30 42 plac
|
||||
44 45 plac
|
||||
49 47 plac
|
||||
38 51 plac
|
||||
52 52 plac
|
||||
52 52 plac
|
||||
39 53 plac
|
||||
44 53 plac
|
||||
53 53 plac
|
||||
73 53 plac
|
||||
48 54 plac
|
||||
58 54 plac
|
||||
55 57 plac
|
||||
73 74 plac
|
||||
65 76 plac
|
||||
78 78 plac
|
||||
80 80 plac
|
||||
59 81 plac
|
||||
52 81 plac
|
||||
46 83 plac
|
||||
57 85 plac
|
||||
44 85 plac
|
||||
63 86 plac
|
||||
80 95 plac
|
||||
43 74 acu
|
||||
75 99 acu
|
||||
66 88 acu
|
||||
65 85 acu
|
||||
74 100 acu
|
||||
58 58 acu
|
||||
62 77 acu
|
||||
64 100 acu
|
||||
59 61 acu
|
||||
45 83 acu
|
||||
70 78 acu
|
||||
66 100 acu
|
||||
59 100 acu
|
||||
68 94 acu
|
||||
73 60 acu
|
||||
60 92 acu
|
||||
41 41 acu
|
||||
53 53 acu
|
||||
62 80 acu
|
||||
74 83 acu
|
||||
54 57 acu
|
||||
40 67 acu
|
||||
32 82 acu
|
||||
78 78 acu
|
||||
69 100 acu
|
||||
|
||||
2010
data/moeller.csv
Normal file
399
data/reisby.txt
Normal file
@ -0,0 +1,399 @@
|
||||
## Data from Reisby et al., Psychopharmacology 54 (1977) 263-272
|
||||
id hamd week diag endweek
|
||||
101 26 0 nonen 0
|
||||
101 22 1 nonen 0
|
||||
101 18 2 nonen 0
|
||||
101 7 3 nonen 0
|
||||
101 4 4 nonen 0
|
||||
101 3 5 nonen 0
|
||||
103 33 0 nonen 0
|
||||
103 24 1 nonen 0
|
||||
103 15 2 nonen 0
|
||||
103 24 3 nonen 0
|
||||
103 15 4 nonen 0
|
||||
103 13 5 nonen 0
|
||||
104 29 0 endog 0
|
||||
104 22 1 endog 1
|
||||
104 18 2 endog 2
|
||||
104 13 3 endog 3
|
||||
104 19 4 endog 4
|
||||
104 0 5 endog 5
|
||||
105 22 0 nonen 0
|
||||
105 12 1 nonen 0
|
||||
105 16 2 nonen 0
|
||||
105 16 3 nonen 0
|
||||
105 13 4 nonen 0
|
||||
105 9 5 nonen 0
|
||||
106 21 0 endog 0
|
||||
106 25 1 endog 1
|
||||
106 23 2 endog 2
|
||||
106 18 3 endog 3
|
||||
106 20 4 endog 4
|
||||
106 NA 5 endog 5
|
||||
107 21 0 endog 0
|
||||
107 21 1 endog 1
|
||||
107 16 2 endog 2
|
||||
107 19 3 endog 3
|
||||
107 NA 4 endog 4
|
||||
107 6 5 endog 5
|
||||
108 21 0 endog 0
|
||||
108 22 1 endog 1
|
||||
108 11 2 endog 2
|
||||
108 9 3 endog 3
|
||||
108 9 4 endog 4
|
||||
108 7 5 endog 5
|
||||
113 21 0 nonen 0
|
||||
113 23 1 nonen 0
|
||||
113 19 2 nonen 0
|
||||
113 23 3 nonen 0
|
||||
113 23 4 nonen 0
|
||||
113 NA 5 nonen 0
|
||||
114 NA 0 nonen 0
|
||||
114 17 1 nonen 0
|
||||
114 11 2 nonen 0
|
||||
114 13 3 nonen 0
|
||||
114 7 4 nonen 0
|
||||
114 7 5 nonen 0
|
||||
115 NA 0 endog 0
|
||||
115 16 1 endog 1
|
||||
115 16 2 endog 2
|
||||
115 16 3 endog 3
|
||||
115 16 4 endog 4
|
||||
115 11 5 endog 5
|
||||
117 19 0 endog 0
|
||||
117 16 1 endog 1
|
||||
117 13 2 endog 2
|
||||
117 12 3 endog 3
|
||||
117 7 4 endog 4
|
||||
117 6 5 endog 5
|
||||
118 NA 0 endog 0
|
||||
118 26 1 endog 1
|
||||
118 18 2 endog 2
|
||||
118 18 3 endog 3
|
||||
118 14 4 endog 4
|
||||
118 11 5 endog 5
|
||||
120 20 0 nonen 0
|
||||
120 19 1 nonen 0
|
||||
120 17 2 nonen 0
|
||||
120 18 3 nonen 0
|
||||
120 16 4 nonen 0
|
||||
120 17 5 nonen 0
|
||||
121 20 0 nonen 0
|
||||
121 22 1 nonen 0
|
||||
121 19 2 nonen 0
|
||||
121 19 3 nonen 0
|
||||
121 12 4 nonen 0
|
||||
121 14 5 nonen 0
|
||||
123 15 0 nonen 0
|
||||
123 15 1 nonen 0
|
||||
123 15 2 nonen 0
|
||||
123 13 3 nonen 0
|
||||
123 5 4 nonen 0
|
||||
123 5 5 nonen 0
|
||||
501 29 0 endog 0
|
||||
501 30 1 endog 1
|
||||
501 26 2 endog 2
|
||||
501 22 3 endog 3
|
||||
501 19 4 endog 4
|
||||
501 24 5 endog 5
|
||||
502 21 0 endog 0
|
||||
502 22 1 endog 1
|
||||
502 13 2 endog 2
|
||||
502 11 3 endog 3
|
||||
502 2 4 endog 4
|
||||
502 1 5 endog 5
|
||||
504 19 0 nonen 0
|
||||
504 17 1 nonen 0
|
||||
504 15 2 nonen 0
|
||||
504 16 3 nonen 0
|
||||
504 12 4 nonen 0
|
||||
504 12 5 nonen 0
|
||||
505 21 0 nonen 0
|
||||
505 11 1 nonen 0
|
||||
505 18 2 nonen 0
|
||||
505 0 3 nonen 0
|
||||
505 0 4 nonen 0
|
||||
505 4 5 nonen 0
|
||||
507 27 0 endog 0
|
||||
507 26 1 endog 1
|
||||
507 26 2 endog 2
|
||||
507 25 3 endog 3
|
||||
507 24 4 endog 4
|
||||
507 19 5 endog 5
|
||||
603 28 0 nonen 0
|
||||
603 22 1 nonen 0
|
||||
603 18 2 nonen 0
|
||||
603 20 3 nonen 0
|
||||
603 11 4 nonen 0
|
||||
603 13 5 nonen 0
|
||||
604 27 0 nonen 0
|
||||
604 27 1 nonen 0
|
||||
604 13 2 nonen 0
|
||||
604 5 3 nonen 0
|
||||
604 7 4 nonen 0
|
||||
604 NA 5 nonen 0
|
||||
606 19 0 endog 0
|
||||
606 33 1 endog 1
|
||||
606 12 2 endog 2
|
||||
606 12 3 endog 3
|
||||
606 3 4 endog 4
|
||||
606 1 5 endog 5
|
||||
607 30 0 endog 0
|
||||
607 39 1 endog 1
|
||||
607 30 2 endog 2
|
||||
607 27 3 endog 3
|
||||
607 20 4 endog 4
|
||||
607 4 5 endog 5
|
||||
608 24 0 nonen 0
|
||||
608 19 1 nonen 0
|
||||
608 14 2 nonen 0
|
||||
608 12 3 nonen 0
|
||||
608 3 4 nonen 0
|
||||
608 4 5 nonen 0
|
||||
609 NA 0 endog 1
|
||||
609 25 1 endog 1
|
||||
609 22 2 endog 2
|
||||
609 14 3 endog 3
|
||||
609 15 4 endog 4
|
||||
609 2 5 endog 5
|
||||
610 34 0 endog 0
|
||||
610 NA 1 endog 1
|
||||
610 33 2 endog 2
|
||||
610 23 3 endog 3
|
||||
610 NA 4 endog 4
|
||||
610 11 5 endog 5
|
||||
302 18 0 endog 0
|
||||
302 22 1 endog 1
|
||||
302 16 2 endog 2
|
||||
302 8 3 endog 3
|
||||
302 9 4 endog 4
|
||||
302 12 5 endog 5
|
||||
303 21 0 nonen 0
|
||||
303 21 1 nonen 0
|
||||
303 13 2 nonen 0
|
||||
303 14 3 nonen 0
|
||||
303 10 4 nonen 0
|
||||
303 5 5 nonen 0
|
||||
304 21 0 endog 0
|
||||
304 27 1 endog 1
|
||||
304 29 2 endog 2
|
||||
304 NA 3 endog 3
|
||||
304 12 4 endog 4
|
||||
304 24 5 endog 5
|
||||
305 19 0 nonen 0
|
||||
305 17 1 nonen 0
|
||||
305 15 2 nonen 0
|
||||
305 11 3 nonen 0
|
||||
305 5 4 nonen 0
|
||||
305 1 5 nonen 0
|
||||
308 22 0 nonen 0
|
||||
308 21 1 nonen 0
|
||||
308 18 2 nonen 0
|
||||
308 17 3 nonen 0
|
||||
308 12 4 nonen 0
|
||||
308 11 5 nonen 0
|
||||
309 22 0 nonen 0
|
||||
309 22 1 nonen 0
|
||||
309 16 2 nonen 0
|
||||
309 19 3 nonen 0
|
||||
309 20 4 nonen 0
|
||||
309 11 5 nonen 0
|
||||
310 24 0 endog 0
|
||||
310 19 1 endog 1
|
||||
310 11 2 endog 2
|
||||
310 7 3 endog 3
|
||||
310 6 4 endog 4
|
||||
310 NA 5 endog 5
|
||||
311 20 0 endog 0
|
||||
311 16 1 endog 1
|
||||
311 21 2 endog 2
|
||||
311 17 3 endog 3
|
||||
311 NA 4 endog 4
|
||||
311 15 5 endog 5
|
||||
312 17 0 endog 0
|
||||
312 NA 1 endog 1
|
||||
312 18 2 endog 2
|
||||
312 17 3 endog 3
|
||||
312 17 4 endog 4
|
||||
312 6 5 endog 5
|
||||
313 21 0 nonen 0
|
||||
313 19 1 nonen 0
|
||||
313 10 2 nonen 0
|
||||
313 11 3 nonen 0
|
||||
313 11 4 nonen 0
|
||||
313 8 5 nonen 0
|
||||
315 27 0 endog 0
|
||||
315 21 1 endog 1
|
||||
315 17 2 endog 2
|
||||
315 13 3 endog 3
|
||||
315 5 4 endog 4
|
||||
315 NA 5 endog 5
|
||||
316 32 0 endog 0
|
||||
316 26 1 endog 1
|
||||
316 23 2 endog 2
|
||||
316 26 3 endog 3
|
||||
316 23 4 endog 4
|
||||
316 24 5 endog 5
|
||||
318 17 0 endog 0
|
||||
318 18 1 endog 1
|
||||
318 19 2 endog 2
|
||||
318 21 3 endog 3
|
||||
318 17 4 endog 4
|
||||
318 11 5 endog 5
|
||||
319 24 0 endog 0
|
||||
319 18 1 endog 1
|
||||
319 10 2 endog 2
|
||||
319 14 3 endog 3
|
||||
319 13 4 endog 4
|
||||
319 12 5 endog 5
|
||||
322 28 0 endog 0
|
||||
322 21 1 endog 1
|
||||
322 25 2 endog 2
|
||||
322 32 3 endog 3
|
||||
322 34 4 endog 4
|
||||
322 NA 5 endog 5
|
||||
327 17 0 nonen 0
|
||||
327 18 1 nonen 0
|
||||
327 15 2 nonen 0
|
||||
327 8 3 nonen 0
|
||||
327 19 4 nonen 0
|
||||
327 17 5 nonen 0
|
||||
328 22 0 nonen 0
|
||||
328 24 1 nonen 0
|
||||
328 28 2 nonen 0
|
||||
328 26 3 nonen 0
|
||||
328 28 4 nonen 0
|
||||
328 29 5 nonen 0
|
||||
331 19 0 nonen 0
|
||||
331 21 1 nonen 0
|
||||
331 18 2 nonen 0
|
||||
331 16 3 nonen 0
|
||||
331 14 4 nonen 0
|
||||
331 10 5 nonen 0
|
||||
333 23 0 nonen 0
|
||||
333 20 1 nonen 0
|
||||
333 21 2 nonen 0
|
||||
333 20 3 nonen 0
|
||||
333 24 4 nonen 0
|
||||
333 14 5 nonen 0
|
||||
334 31 0 nonen 0
|
||||
334 25 1 nonen 0
|
||||
334 NA 2 nonen 0
|
||||
334 7 3 nonen 0
|
||||
334 8 4 nonen 0
|
||||
334 11 5 nonen 0
|
||||
335 21 0 nonen 0
|
||||
335 21 1 nonen 0
|
||||
335 18 2 nonen 0
|
||||
335 15 3 nonen 0
|
||||
335 12 4 nonen 0
|
||||
335 10 5 nonen 0
|
||||
337 27 0 nonen 0
|
||||
337 22 1 nonen 0
|
||||
337 23 2 nonen 0
|
||||
337 21 3 nonen 0
|
||||
337 12 4 nonen 0
|
||||
337 13 5 nonen 0
|
||||
338 22 0 nonen 0
|
||||
338 20 1 nonen 0
|
||||
338 22 2 nonen 0
|
||||
338 23 3 nonen 0
|
||||
338 19 4 nonen 0
|
||||
338 18 5 nonen 0
|
||||
339 27 0 endog 0
|
||||
339 NA 1 endog 1
|
||||
339 14 2 endog 2
|
||||
339 12 3 endog 3
|
||||
339 11 4 endog 4
|
||||
339 12 5 endog 5
|
||||
344 NA 0 endog 0
|
||||
344 21 1 endog 1
|
||||
344 12 2 endog 2
|
||||
344 13 3 endog 3
|
||||
344 13 4 endog 4
|
||||
344 18 5 endog 5
|
||||
345 29 0 nonen 0
|
||||
345 27 1 nonen 0
|
||||
345 27 2 nonen 0
|
||||
345 22 3 nonen 0
|
||||
345 22 4 nonen 0
|
||||
345 23 5 nonen 0
|
||||
346 25 0 endog 0
|
||||
346 24 1 endog 1
|
||||
346 19 2 endog 2
|
||||
346 23 3 endog 3
|
||||
346 14 4 endog 4
|
||||
346 21 5 endog 5
|
||||
347 18 0 endog 0
|
||||
347 15 1 endog 1
|
||||
347 14 2 endog 2
|
||||
347 10 3 endog 3
|
||||
347 8 4 endog 4
|
||||
347 NA 5 endog 5
|
||||
348 24 0 nonen 0
|
||||
348 21 1 nonen 0
|
||||
348 12 2 nonen 0
|
||||
348 13 3 nonen 0
|
||||
348 12 4 nonen 0
|
||||
348 5 5 nonen 0
|
||||
349 17 0 endog 0
|
||||
349 19 1 endog 1
|
||||
349 15 2 endog 2
|
||||
349 12 3 endog 3
|
||||
349 9 4 endog 4
|
||||
349 13 5 endog 5
|
||||
350 22 0 nonen 0
|
||||
350 25 1 nonen 0
|
||||
350 12 2 nonen 0
|
||||
350 16 3 nonen 0
|
||||
350 10 4 nonen 0
|
||||
350 16 5 nonen 0
|
||||
351 30 0 endog 0
|
||||
351 27 1 endog 1
|
||||
351 23 2 endog 2
|
||||
351 20 3 endog 3
|
||||
351 12 4 endog 4
|
||||
351 11 5 endog 5
|
||||
352 21 0 endog 0
|
||||
352 19 1 endog 1
|
||||
352 18 2 endog 2
|
||||
352 15 3 endog 3
|
||||
352 18 4 endog 4
|
||||
352 19 5 endog 5
|
||||
353 27 0 endog 0
|
||||
353 21 1 endog 1
|
||||
353 24 2 endog 2
|
||||
353 22 3 endog 3
|
||||
353 16 4 endog 4
|
||||
353 11 5 endog 5
|
||||
354 28 0 endog 0
|
||||
354 27 1 endog 1
|
||||
354 27 2 endog 2
|
||||
354 26 3 endog 3
|
||||
354 23 4 endog 4
|
||||
354 NA 5 endog 5
|
||||
355 22 0 endog 0
|
||||
355 26 1 endog 1
|
||||
355 20 2 endog 2
|
||||
355 13 3 endog 3
|
||||
355 10 4 endog 4
|
||||
355 7 5 endog 5
|
||||
357 27 0 endog 0
|
||||
357 22 1 endog 1
|
||||
357 24 2 endog 2
|
||||
357 25 3 endog 3
|
||||
357 19 4 endog 4
|
||||
357 19 5 endog 5
|
||||
360 21 0 endog 0
|
||||
360 28 1 endog 1
|
||||
360 27 2 endog 2
|
||||
360 29 3 endog 3
|
||||
360 28 4 endog 4
|
||||
360 33 5 endog 5
|
||||
361 30 0 endog 0
|
||||
361 22 1 endog 1
|
||||
361 11 2 endog 2
|
||||
361 8 3 endog 3
|
||||
361 7 4 endog 4
|
||||
361 19 5 endog 5
|
||||
|
||||