Every estimate is based on a sample, and a different sample would give different results. This post builds intuition for what that means by simulating how much the sample mean varies across studies and by deriving the standard error formula.
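The simulation described above can be sketched in a few lines of R. This is a minimal illustration with assumed values (μ = 170, σ = 10, n = 50, chosen here for the example, not taken from the post): we repeat the "study" many times and compare the spread of the sample means to the formula σ/√n.

```r
set.seed(42)
n <- 50; mu <- 170; sigma <- 10          # assumed population values

# Simulate 10,000 studies, recording the sample mean of each
sample_means <- replicate(10000, mean(rnorm(n, mean = mu, sd = sigma)))

sd(sample_means)   # empirical spread of the mean across studies
sigma / sqrt(n)    # the standard error formula
```

The two numbers agree closely, which is the intuition the formula packages up.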
We've chosen the normal distribution to describe heights. Now we need to estimate its parameters μ and σ from our sample. This post shows that lm() does exactly this: the intercept estimates μ (the mean) and the residual standard error estimates σ.
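The claim that `lm()` estimates μ and σ can be checked directly with an intercept-only model. The data below are simulated placeholders, not the post's height data:

```r
set.seed(1)
heights <- rnorm(100, mean = 170, sd = 10)  # placeholder sample

fit <- lm(heights ~ 1)                      # intercept-only model

coef(fit)["(Intercept)"]  # identical to mean(heights): the estimate of mu
sigma(fit)                # identical to sd(heights): the estimate of sigma
```

With no predictors, the fitted intercept is exactly the sample mean and the residual standard error is exactly the sample standard deviation.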
In Part 1, we proposed that regression is about choosing and fitting distributions to data. In this post, we explain why the normal distribution is often the right choice.
This is the first post in a series on understanding regression. Here we focus on the main question we should be asking when using regression.
An example of how to run multiple group comparisons all at once using a single regression model.
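As a minimal sketch of that idea, a single `lm()` call with a factor predictor compares all groups at once (the three-group data here are made up for illustration):

```r
set.seed(2)
group <- factor(rep(c("a", "b", "c"), each = 30))
y <- rnorm(90, mean = c(5, 6, 7)[group])   # hypothetical group means

fit <- lm(y ~ group)

# Intercept = mean of group "a"; the other coefficients are each
# group's difference from that reference group.
coef(fit)
```

One model therefore yields every pairwise contrast against the reference level, rather than running separate two-group tests.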