In this post I show how to do simulation-based power analyses that produce a power curve: the power obtained across a range of sample sizes.
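The idea behind a power curve can be sketched in a few lines. Below is a minimal simulation in Python (the posts themselves use R) with made-up settings: a one-sample z-test, an assumed effect size of d = 0.5, and a handful of sample sizes. For each sample size we simulate many studies and count how often the test comes out significant; those proportions form the power curve.

```python
import numpy as np

rng = np.random.default_rng(2024)
d = 0.5             # assumed standardized effect size (illustrative)
n_sims = 1000       # simulated studies per sample size
sample_sizes = [20, 50, 100, 150]

power = {}
for n in sample_sizes:
    hits = 0
    for _ in range(n_sims):
        sample = rng.normal(d, 1, n)          # simulate one study
        z = sample.mean() / (1 / np.sqrt(n))  # z-statistic, sigma known
        hits += abs(z) > 1.96                 # significant at alpha = .05?
    power[n] = hits / n_sims                  # power = proportion significant

print(power)  # power climbs toward 1 as n grows
```

Plotting `power` against `sample_sizes` gives the power curve; reading off where it crosses your target (say, .90) gives the sample size to aim for.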
A curious thing happened in the field of social psychology: Social psychologists finally realized that statistical power is important. Unfortunately, they then skipped the step of figuring out how to do power analyses correctly. Here I list some papers on power analyses that I hope will help improve the way we do them.
A post on how to predict values of intercept-only models.
A blog post on the metalog distribution using the rmetalog package.
In a recent tweet I asked why we use n - 1 to calculate the variance of a sample. Many people contributed an answer, but most were of the type I feared: statistical jargon that confuses me more, rather than less. Other responses were very useful, though, so I recommend checking out the replies to the tweet. In this post, I will try to describe my favorite way of looking at the issue.
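One way to see why n - 1 matters is to simulate it. The Python sketch below (toy data, not from the post; the blog itself uses R) draws many small samples from a population with a known variance of 1, then compares the average "divide by n" estimate with the average "divide by n - 1" estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# 100,000 samples of size 5 from a population with true variance 1
samples = rng.normal(0, 1, size=(100_000, n))

var_n = samples.var(axis=1, ddof=0).mean()          # divide by n
var_n_minus_1 = samples.var(axis=1, ddof=1).mean()  # divide by n - 1

print(var_n)           # close to (n - 1) / n = 0.8: biased low
print(var_n_minus_1)   # close to 1: unbiased
```

Dividing by n underestimates the variance on average because the sample mean sits closer to its own sample than the population mean does; dividing by n - 1 corrects for that.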
An example of how to run multiple group comparisons all at once using a single regression model.
Method sections in academic (social psychology) papers usually consist of the following subsections: Participants, Design, Procedure, and Materials, and they tend to be presented in that order. But is this the right order? I don't think so.
Simulation-based power analyses make it easy to understand what power is: Power is simply the proportion of simulated studies in which you find the result you expect to find. Running simulation-based power analyses might be new for some, so in this blog post I present code to simulate data for a range of different scenarios.
The second of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to run a regression with a single predictor.
The third of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to model a correlation.
The fourth of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to model the difference between two groups.
The first of a series of tutorial posts on Bayesian analyses. In this post I focus on using brms to run an intercept-only regression model.
This is the first post in a series on understanding regression. In this first post we focus on the main question we should be asking when using regression.
In Part 1, we proposed that regression is about choosing and fitting distributions to data. In this post, we explain why the normal distribution is often the right choice.
We've chosen the normal distribution to describe heights. Now we need to estimate its parameters μ and σ from our sample. This post shows that lm() does exactly this: the intercept estimates μ (the mean) and the residual standard error estimates σ.
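A quick way to check this claim yourself is to fit an intercept-only least-squares model by hand. The Python sketch below (the post itself uses R's lm(); this is a numpy stand-in on simulated heights) shows that the least-squares intercept is exactly the sample mean, and the residual standard error is exactly the usual sample standard deviation:

```python
import numpy as np

rng = np.random.default_rng(42)
y = rng.normal(175, 10, size=200)  # pretend these are heights in cm

# Intercept-only design matrix: a single column of ones.
X = np.ones((len(y), 1))
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept = beta[0]

# Residual standard error: sqrt(SSE / (n - p)), with p = 1 parameter.
residuals = y - intercept
sigma_hat = np.sqrt((residuals ** 2).sum() / (len(y) - 1))

print(np.isclose(intercept, y.mean()))       # intercept equals sample mean
print(np.isclose(sigma_hat, y.std(ddof=1)))  # RSE equals sample sd
```

So an intercept-only regression is just estimating μ and σ by another name, which is exactly what makes it a useful bridge from descriptive statistics to regression.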
Every estimate is based on a sample, and a different sample would give different results. This post builds intuition for what that means: simulating how much the sample mean would vary across studies and deriving the standard error formula.
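The simulation in that post can be sketched in a few lines of Python (an assumed population with sd 10 and studies of n = 50, not the post's own numbers): simulate many studies, record each study's sample mean, and compare the spread of those means with the standard error formula σ/√n.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma, n = 10, 50

# 50,000 simulated studies, each of sample size n; one mean per study
means = rng.normal(0, sigma, size=(50_000, n)).mean(axis=1)

simulated_se = means.std(ddof=1)   # how much sample means actually vary
formula_se = sigma / np.sqrt(n)    # what the formula says they should vary

print(simulated_se)
print(formula_se)  # 10 / sqrt(50) ~ 1.41; the two should closely agree
```

The close match between the simulated spread and σ/√n is the intuition behind the standard error: it is literally the standard deviation of the estimate across hypothetical repeated studies.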